Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud

With the explosive growth in data volume, the I/O bottleneck has become an increasingly daunting challenge for big data analytics in the Cloud. An existing paper proposes POD, a performance-oriented deduplication scheme, to improve the performance of primary storage systems in the Cloud by leveraging data deduplication on the I/O path to remove redundant write requests while also saving storage space. This research work aims to remove data duplication in the cloud and improve the performance of the storage system, using image-processing concepts to utilize storage space efficiently. In this paper we discuss the design and implementation of data deduplication to improve the efficiency of storage in the cloud. The system implements wireless data access to servers. Our alternative method removes data duplication in the storage system through a web-based application that uses two matching techniques: CLD (Color Layout Descriptor) and EHD (Edge Histogram Descriptor). A user can browse and upload an image on the web page; we then apply the CLD and EHD techniques to check whether the uploaded image is already stored in the cloud. If a matching image is found, we extract a reference to the already stored image and send it to the receiver, who can then retrieve the image. If there is no matching image, the new image is uploaded to the database. By extracting a reference to the already stored image, there is no need to upload the same image to the database again, so we remove data duplication, improve storage-space efficiency, and make better use of network bandwidth, which makes our system more effective than plain data deduplication at improving the performance of primary storage systems.

Pooja M. Khandar | Prof. S. C. Tawalare | Prof. Dr. H. R. Deshmukh | Prof. S. Dhole, "Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3 | Issue-1, December 2018. URL: https://www.ijtsrd.com/papers/ijtsrd19163.pdf Paper URL: http://www.ijtsrd.com/engineering/computer-engineering/19163/leveraging-data-duplication-to-improve-the-performance-of-storage-system-with-cld-and-ehd-image-matching-in-the-cloud/pooja-m-khandar

Transcript of Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud

Page 1: Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud


Pooja M. Khandar 1, Prof. S. C. Tawalare 2, Prof. Dr. H. R. Deshmukh 3, Prof. S. Dhole 3
1 Student, 2 HOD, 3 Professor
Department of Computer Science & Engineering, DRGIT&R, Amravati, Maharashtra, India

ABSTRACT
With the explosive growth in data volume, the I/O bottleneck has become an increasingly daunting challenge for big data analytics in the Cloud. An existing paper proposes POD, a performance-oriented deduplication scheme, to improve the performance of primary storage systems in the Cloud by leveraging data deduplication on the I/O path to remove redundant write requests while also saving storage space. This research work aims to remove data duplication in the cloud and improve the performance of the storage system. We use image-processing concepts to utilize storage space efficiently. In this paper we discuss the design and implementation of data deduplication to improve the efficiency of storage in the cloud. The system implements wireless data access to servers. Our alternative method removes data duplication in the storage system by using a web-based application with two matching techniques: CLD (Color Layout Descriptor) and EHD (Edge Histogram Descriptor). A user can browse and upload an image on the web page; we then apply the CLD and EHD techniques and check whether the uploaded image is already stored in the cloud. If there is a matching image, we extract a reference to the already stored image and send it to the receiver, who can then retrieve the image. If there is no matching image, the new image is uploaded to the database. By extracting a reference to the already stored image, there is no need to upload the same image to the database again, so we remove data duplication, improve storage-space efficiency, and utilize network bandwidth better, making our system more effective than plain data deduplication at improving the performance of primary storage systems.

Key Words: Java JDK 6.0, Eclipse, Apache Tomcat server, MySQL database.


1. INTRODUCTION
Data deduplication, often called intelligent compression or single-instance storage, is a process that eliminates redundant copies of data and reduces storage overhead. The data deduplication technique ensures that only one unique instance of data is retained on storage media such as disk, flash, or tape. Data deduplication has been demonstrated to be an effective technique in cloud backup and archiving applications for reducing the backup window and improving storage-space efficiency and network bandwidth utilization. Recent studies reveal that moderate to high data redundancy clearly exists in virtual machine (VM) and enterprise storage systems [3], [4], [5], [6], [7] and in high-performance computing (HPC) storage systems [8]. We apply CLD and EHD techniques in a performance-oriented deduplication scheme to improve the performance of storage systems in the Cloud by leveraging data deduplication of write requests while also saving storage space. In this paper we discuss the design and implementation of data deduplication to improve the efficiency of storage in the cloud.

2. Literature Review:
In an existing system, when we upload a file, if that file already exists in the system it is not uploaded again; instead, a reference to the existing file is created. However, if one file is referenced by many files and that file is deleted, the references of all those files are lost. For that reason, copies of the file are kept in multiple locations of the system memory, so that if the file is deleted from one location, the other locations still maintain a copy of that file. This is achieved by using a Secure Hash Table technique [1].
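To make the reference-based idea concrete, the following is a minimal Java sketch (not the actual code of [1]) of a hash-table based duplicate check with reference counting; the class name, the choice of SHA-256, and the in-memory map are illustrative assumptions.

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a hash-table based duplicate check with reference counting.
public class DedupeIndex {
    // Maps a content hash to the number of logical files referencing that content.
    private final Map<String, Integer> refCount = new HashMap<String, Integer>();

    // Returns true if the content was already stored; in that case only a
    // reference is added instead of uploading the bytes again.
    public boolean addFile(byte[] content) throws Exception {
        String key = sha256Hex(content);
        Integer count = refCount.get(key);
        if (count != null) {
            refCount.put(key, count + 1);   // duplicate: just bump the reference count
            return true;
        }
        refCount.put(key, 1);               // new content: store it once
        return false;
    }

    private static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}

Keeping the reference count per content hash, rather than per file location, is one simple way to avoid losing all references when a single stored copy is deleted.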


Page 2: Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud


In another existing paper, POD, a performance-oriented deduplication scheme, is proposed to improve the performance of primary storage systems in the Cloud by leveraging data deduplication on the I/O path to remove redundant write requests while also saving storage space.

Fig 1: System architecture of POD

POD resides in the storage node and interacts with the file systems via the standard read/write interface. Thus, POD can be easily incorporated into any HDD-based primary storage system to accelerate its performance. POD is independent of the upper file systems, which makes it more flexible [5], [6]. POD has two main components: Select-Dedupe and iCache. The request-based Select-Dedupe includes two individual modules: the Data Deduplicator and the Request Redirector. The Data Deduplicator module is responsible for splitting the incoming write data into data chunks, calculating the hash value of each data chunk, and identifying whether a data chunk is redundant and popular. The Request Redirector module decides whether a write request should be deduplicated, and maintains data consistency to prevent the referenced data from being overwritten or updated. The iCache module also includes two individual modules: the Access Monitor and the Swap Module [2]. The Access Monitor module is responsible for monitoring the intensity and hit rate of the incoming read and write requests. The Swap module dynamically adjusts the cache space partition between the index cache and the read cache. Moreover, it swaps cached data in from, and out to, the back-end storage.
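To illustrate the chunk-level idea behind the Data Deduplicator, the Java sketch below splits incoming write data into fixed-size chunks and computes a fingerprint for each chunk; the 4 KB chunk size, the SHA-1 hash, and the class name are assumptions made for illustration and are not taken from the POD paper.

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of chunk-level fingerprinting: a write request is split into
// fixed-size chunks and each chunk is hashed, so chunks whose fingerprint
// is already present in the index need not be written again.
public class ChunkHasher {
    private static final int CHUNK_SIZE = 4096; // assumed fixed chunk size (4 KB)

    public static List<String> chunkFingerprints(byte[] writeData) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        List<String> fingerprints = new ArrayList<String>();
        for (int off = 0; off < writeData.length; off += CHUNK_SIZE) {
            int end = Math.min(off + CHUNK_SIZE, writeData.length);
            byte[] chunk = Arrays.copyOfRange(writeData, off, end);
            md.reset();
            byte[] digest = md.digest(chunk);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            fingerprints.add(sb.toString());
        }
        return fingerprints;
    }
}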


3. Proposed Objective
In this paper we use two techniques for finding duplication of the image:
1. Color Layout Descriptor
2. Edge Histogram Descriptor

Color Layout Descriptor: The CLD is designed to capture the spatial distribution of color in an image. The feature extraction process consists of two parts:
1. Grid-based representative color selection.
2. Discrete cosine transform with quantization.
The functionality of CLD is basically matching: image-to-image matching. CLD is one of the most precise and fast color descriptors [8].

Fig 2: Color layout descriptor
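A minimal Java sketch of the two CLD stages listed above is given below; it assumes an 8x8 grid and, for brevity, processes only the luminance channel, whereas the full MPEG-7 CLD also transforms the Cb/Cr channels and zig-zag scans and quantizes the DCT coefficients. The class and method names are illustrative.

import java.awt.image.BufferedImage;

// Sketch of CLD feature extraction: (1) grid-based representative color
// selection and (2) a discrete cosine transform of the 8x8 grid.
// Assumes the input image is at least 8x8 pixels.
public class CldSketch {

    // Stage 1: average luminance of each cell of an 8x8 grid.
    public static double[][] representativeLuma(BufferedImage img) {
        double[][] grid = new double[8][8];
        int cw = img.getWidth() / 8, ch = img.getHeight() / 8;
        for (int gy = 0; gy < 8; gy++) {
            for (int gx = 0; gx < 8; gx++) {
                double sum = 0; int n = 0;
                for (int py = gy * ch; py < (gy + 1) * ch; py++) {
                    for (int px = gx * cw; px < (gx + 1) * cw; px++) {
                        int rgb = img.getRGB(px, py);
                        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                        sum += 0.299 * r + 0.587 * g + 0.114 * b; // luminance
                        n++;
                    }
                }
                grid[gy][gx] = sum / n;
            }
        }
        return grid;
    }

    // Stage 2: 8x8 two-dimensional DCT of the representative colors.
    public static double[][] dct8x8(double[][] block) {
        double[][] out = new double[8][8];
        for (int u = 0; u < 8; u++) {
            for (int v = 0; v < 8; v++) {
                double sum = 0;
                for (int x = 0; x < 8; x++) {
                    for (int y = 0; y < 8; y++) {
                        sum += block[x][y]
                             * Math.cos((2 * x + 1) * u * Math.PI / 16)
                             * Math.cos((2 * y + 1) * v * Math.PI / 16);
                    }
                }
                double cu = (u == 0) ? Math.sqrt(0.5) : 1.0;
                double cv = (v == 0) ? Math.sqrt(0.5) : 1.0;
                out[u][v] = 0.25 * cu * cv * sum;
            }
        }
        return out;
    }
}

Two images can then be compared by a weighted distance over the low-frequency DCT coefficients, which is what makes CLD fast for image-to-image matching.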

Edge Histogram Descriptor: The edge histogram descriptor (EHD) is one of the widely used methods for shape detection. It basically represents the relative frequency of occurrence of five types of edges in each local area, called a sub-image or image block. The sub-images are defined by partitioning the image space into 4x4 non-overlapping blocks, as shown in Fig. 3, so the partition always creates 16 equal-sized blocks regardless of the size of the original image. To define the characteristics of each image block, we then generate a histogram of edge distribution for that block. The edges of the image block are categorized into five types: vertical, horizontal, 45-degree diagonal, 135-degree diagonal, and non-directional edges, as shown in Fig. 2 (Five Types of Edges in EHD). Thus, the histogram for each image block represents the relative distribution of the five types of edges in the corresponding sub-image [8].
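The Java sketch below illustrates the EHD computation described above: the image is split into a 4x4 grid of sub-images, each sub-image is scanned in 2x2 pixel blocks, and the dominant of the five edge filters fills one of the 16 x 5 = 80 histogram bins. The filter coefficients follow the usual MPEG-7 formulation, while the 2x2 block size and the edge threshold are simplifying assumptions.

import java.awt.image.BufferedImage;

// Sketch of the edge histogram descriptor: 16 sub-images x 5 edge types = 80 bins.
// Assumes the input image is at least 8x8 pixels.
public class EhdSketch {
    // Five 2x2 filters: vertical, horizontal, 45-degree, 135-degree, non-directional.
    private static final double[][] FILTERS = {
        { 1, -1,  1, -1 },
        { 1,  1, -1, -1 },
        { Math.sqrt(2), 0, 0, -Math.sqrt(2) },
        { 0, Math.sqrt(2), -Math.sqrt(2), 0 },
        { 2, -2, -2,  2 }
    };
    private static final double THRESHOLD = 11.0; // assumed edge threshold

    public static int[] edgeHistogram(BufferedImage img) {
        int[] bins = new int[80];
        int subW = img.getWidth() / 4, subH = img.getHeight() / 4;
        for (int sy = 0; sy < 4; sy++) {
            for (int sx = 0; sx < 4; sx++) {
                int base = (sy * 4 + sx) * 5;
                // scan the sub-image in non-overlapping 2x2 pixel blocks
                for (int y = sy * subH; y + 1 < (sy + 1) * subH; y += 2) {
                    for (int x = sx * subW; x + 1 < (sx + 1) * subW; x += 2) {
                        double[] g = {
                            gray(img, x, y),     gray(img, x + 1, y),
                            gray(img, x, y + 1), gray(img, x + 1, y + 1)
                        };
                        int best = -1; double bestVal = 0;
                        for (int f = 0; f < 5; f++) {
                            double v = Math.abs(g[0] * FILTERS[f][0] + g[1] * FILTERS[f][1]
                                              + g[2] * FILTERS[f][2] + g[3] * FILTERS[f][3]);
                            if (v > bestVal) { bestVal = v; best = f; }
                        }
                        if (best >= 0 && bestVal > THRESHOLD) {
                            bins[base + best]++; // count the dominant edge type
                        }
                    }
                }
            }
        }
        return bins;
    }

    private static double gray(BufferedImage img, int x, int y) {
        int rgb = img.getRGB(x, y);
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }
}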


Page 3: Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud


Fig 3: Definition of Sub-image and Image-block in the EHD

Fig 2: Five Types of Edges in EHD

4. Implementation

Fig 4: Flowchart


Module 1: We have to select the image first and upload it.

Screenshot 1: Select image

Module 2: Then apply the Color Layout Descriptor to the image.

Screenshot 2: Apply CLD technique

Module 3: In this module the original image is partitioned into a 4x4 matrix of blocks.

Screenshot 3: Image partitioning

Module 4: It generates the RGB values of the targeted image and checks the values against the existing images in the database.

Screenshot 4: Check RGB values
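A minimal Java sketch of what Modules 3 and 4 describe is shown below: the uploaded image is partitioned into a 4x4 grid, an average RGB value is computed for each cell, and the resulting vector is compared against the vectors of images already in the database. The class name, vector layout, and use of Euclidean distance are illustrative assumptions rather than the project's actual code.

import java.awt.image.BufferedImage;

// Sketch of Modules 3 and 4: 4x4 partitioning plus RGB value comparison.
// Assumes the input image is at least 4x4 pixels.
public class RgbGridMatcher {

    // Module 3: 4x4 partition, average RGB per cell -> 4*4*3 = 48 values.
    public static double[] rgbGrid(BufferedImage img) {
        double[] v = new double[48];
        int cw = img.getWidth() / 4, ch = img.getHeight() / 4;
        for (int gy = 0; gy < 4; gy++) {
            for (int gx = 0; gx < 4; gx++) {
                long r = 0, g = 0, b = 0; int n = 0;
                for (int y = gy * ch; y < (gy + 1) * ch; y++) {
                    for (int x = gx * cw; x < (gx + 1) * cw; x++) {
                        int rgb = img.getRGB(x, y);
                        r += (rgb >> 16) & 0xFF; g += (rgb >> 8) & 0xFF; b += rgb & 0xFF;
                        n++;
                    }
                }
                int i = (gy * 4 + gx) * 3;
                v[i] = (double) r / n; v[i + 1] = (double) g / n; v[i + 2] = (double) b / n;
            }
        }
        return v;
    }

    // Module 4: Euclidean distance between the uploaded image's grid and a stored one.
    public static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}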


Page 4: Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud


Module 5: It gives us the result of the topmost matching images using the RGB values, and then EHD is applied.

Screenshot 5: CLD result and apply EHD

Module 6: After applying the Edge Histogram Descriptor, it generates the edge values of the targeted image and matches them with the existing images in the database.

Screenshot 6: Check edge values

Module 7: It gives us the result of the topmost matching images using the edge values.

Screenshot 7: EHD result
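The ranking step of Modules 5-7 could be sketched as follows, assuming each database image already has a CLD/RGB distance and an EHD distance to the uploaded image; the Candidate class and the weighted combination of the two distances are illustrative assumptions, not the project's actual code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of ranking database images by similarity to the uploaded image.
public class MatchRanker {
    public static class Candidate {
        public final String imageId;
        public final double cldDistance;
        public final double ehdDistance;
        public Candidate(String imageId, double cldDistance, double ehdDistance) {
            this.imageId = imageId;
            this.cldDistance = cldDistance;
            this.ehdDistance = ehdDistance;
        }
    }

    // Returns the candidates ordered by a weighted sum of both distances,
    // smallest (most similar) first; the topmost entries are the best matches.
    public static List<Candidate> topMatches(List<Candidate> all,
                                             final double wCld, final double wEhd) {
        List<Candidate> sorted = new ArrayList<Candidate>(all);
        Collections.sort(sorted, new Comparator<Candidate>() {
            public int compare(Candidate a, Candidate b) {
                double da = wCld * a.cldDistance + wEhd * a.ehdDistance;
                double db = wCld * b.cldDistance + wEhd * b.ehdDistance;
                return Double.compare(da, db);
            }
        });
        return sorted;
    }
}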

5. Technical Specifications and Result Analysis:
The technologies used to implement the system are:
• Java JDK 6.0
• Eclipse: In computer programming, Eclipse is an integrated development environment (IDE). It contains a base workspace and an extensible plug-in system for customizing the environment. Written mostly in Java, Eclipse can be used to develop applications. By means of various plug-ins, Eclipse may also be used to develop applications in other programming languages: Ada, ABAP, C, C++, COBOL, Fortran, Haskell, JavaScript, Lasso, Lua, Natural, Perl, PHP, Prolog, Python, R, Ruby (including the Ruby on Rails framework), Scala, Clojure, Groovy, Scheme, and Erlang.


• MySQL: MySQL is an open-source relational database system. It is static, and database size is unlimited in MySQL. MySQL supports Java. MySQL does not support the EXCEPT and INTERSECT operations and does not have a resource limit. MySQL is available under the GPL as well as proprietary licenses: the MySQL development project has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements. MySQL is a popular choice of database for use in web applications. MySQL is written in C and C++.
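As an illustration of how the duplicate lookup might be wired to MySQL from Java via JDBC, the sketch below checks whether an image's content hash already exists in the database; the connection URL, the table name (images), and the column name (content_hash) are hypothetical and not the project's actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical JDBC lookup against a MySQL table of stored images.
public class ImageStore {
    public static boolean alreadyStored(String contentHash) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dedupe", "user", "password");
        try {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT id FROM images WHERE content_hash = ?");
            ps.setString(1, contentHash);
            ResultSet rs = ps.executeQuery();
            return rs.next();   // true -> only a reference needs to be created
        } finally {
            con.close();
        }
    }
}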

6. Advantages:
• It requires less storage, as it is a data deduplication application.
• It saves time.
• Efficient and fast access.

7. Disadvantages:
• Internet required: the Internet is needed for the complete execution of this project.

8. Conclusion:
In this paper, we propose CLD and EHD techniques as a performance-oriented deduplication scheme to improve the performance of storage systems in the Cloud by leveraging data deduplication of write requests while also saving storage space. We discussed the design and implementation of data deduplication to improve the efficiency of storage in the cloud. The system implements wireless data access to servers. Our alternative method removes data duplication in the storage system by using a web-based application.

9. References:
1. K. Lavanya and A. Sureshbabu, "Data Reduction using a Deduplication Aware Resemblance Detection & Elimination Scheme," International Journal of Advance Research in Computer Science and Management, Volume 5, Issue 8, August 2017.

2. Bo Mao, Hong Jiang, Suzhen Wu, and Lei Tian, "Leveraging Data Deduplication to Improve the Performance of Primary Storage Systems in the Cloud," IEEE Transactions on Computers, Vol. 65, No. 6, June 2016.


Page 5: Leveraging Data Duplication to Improve The Performance of Storage System with CLD and EHD Image Matching in the Cloud


3. A. T. Clements, I. Ahmad, M. Vilayannur, and J. Li, "Decentralized deduplication in SAN cluster file systems," in Proc. USENIX Annu. Tech. Conf., Jun. 2009.

4. K. Jin and E. L. Miller, "The effectiveness of deduplication on virtual machine disk images," in Proc. Israeli Exp. Syst. Conf., May 2009.

5. D. T. Meyer and W. J. Bolosky, "A study of practical deduplication," in Proc. 9th USENIX Conf. File Storage Technol., Feb. 2011.

6. K. Srinivasan, T. Bisson, G. Goodson, and K. Voruganti, "iDedup: Latency-aware, inline data deduplication for primary storage," in Proc. 10th USENIX Conf. File Storage Technol., Feb. 2012.


7. A. El-Shimi, R. Kalach, A. Kumar, A. Oltean, J. Li, and S. Sengupta, "Primary data deduplication - large scale study and system design," in Proc. USENIX Annu. Tech. Conf., Jun. 2012.

8. D. Meister, J. Kaiser, A. Brinkmann, T. Cortes, M. Kuhn, and J. Kunkel, "A study on data deduplication in HPC storage systems," in Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., Nov. 2012.

9. www.wikipedia.com
