User Profile

Hala AbdulSalam Jasim

Specialization: Computer Science

University: University of Baghdad

Points: 8 (research productivity index)

Academic Experience

  • M.Sc. in Computer Science, College of Science, University of Baghdad

Published Research

New techniques to enhance data deduplication using content based-TTTD chunking algorithm

Journal: International Journal of Advanced Computer Science and Applications

Publication year: 2018

Publication date: 2018-07-21

Due to the rapid, indiscriminate increase of digital data, data reduction has attracted growing attention and become a popular approach in large-scale storage systems. One of the most effective approaches to data reduction is data deduplication, in which redundant data at the file or sub-file level is detected and identified using a hash algorithm. Data deduplication has proved much more efficient than conventional compression techniques in large-scale storage systems in terms of space reduction. The Two Threshold Two Divisor (TTTD) chunking algorithm is one of the most popular chunking algorithms used in deduplication, but it needs considerable time and system resources to compute its chunk boundaries. This paper presents new techniques to enhance the TTTD chunking algorithm using a new fingerprint function, a multi-level hashing and matching technique, and a new indexing technique for storing the metadata. These techniques combine four hashing algorithms to solve the collision problem and add a new chunking condition to the TTTD conditions in order to increase the number of small chunks, which in turn raises the deduplication ratio. The enhancement improves the deduplication ratio produced by the TTTD algorithm and reduces the system resources it requires. The proposed algorithm is evaluated in terms of deduplication ratio, execution time, and metadata size.
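
As an illustration of the chunking rule this abstract builds on, the sketch below implements a generic two-threshold, two-divisor chunker. It is a reconstruction under stated assumptions, not the paper's code: a toy windowed byte-sum hash stands in for a real Rabin-style fingerprint, and the threshold and divisor values are invented for demonstration.

    # TTTD-style content-defined chunking (illustrative sketch, assumed parameters)
    T_MIN, T_MAX = 460, 2800        # minimum / maximum chunk sizes (assumed values)
    D_MAIN, D_BACKUP = 540, 270     # main / backup divisors (assumed values)

    def tttd_chunks(data: bytes, window: int = 48):
        """Split data into chunks using the two-threshold, two-divisor rule."""
        chunks, start = [], 0
        while start < len(data):
            backup, cut, i = -1, None, start
            while i < len(data):
                lo = max(start, i - window + 1)
                h = sum(data[lo:i + 1])          # toy windowed hash (stand-in for Rabin)
                length = i - start + 1
                if length >= T_MIN:
                    if h % D_BACKUP == D_BACKUP - 1:
                        backup = i               # remember a fallback breakpoint
                    if h % D_MAIN == D_MAIN - 1:
                        cut = i                  # main divisor matched: cut here
                        break
                    if length >= T_MAX:
                        cut = backup if backup >= 0 else i   # forced cut at the maximum
                        break
                i += 1
            if cut is None:
                cut = len(data) - 1              # trailing remainder
            chunks.append(data[start:cut + 1])
            start = cut + 1
        return chunks

    # identical regions produce identical chunks, which is what deduplication exploits
    blob = bytes(range(256)) * 20
    print([len(c) for c in tttd_chunks(blob + b"x" * 1000 + blob)])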

Evaluation of Two Thresholds Two Divisor Chunking Algorithm Using Rabin Finger print, Adler, and SHA1 Hashing Algorithms

Journal: Iraqi Journal of Science

Publication year: 2017

Publication date: 2017-07-21

Data deduplication is a data reduction technology that works by detecting and eliminating data redundancy and keeping only one copy of the data; it is often used to reduce storage space and network bandwidth. While our main motivation has been low-bandwidth synchronization applications such as the Low Bandwidth Network File System (LBNFS), deduplication is also useful in archival file systems, and a number of researchers have advocated deduplication schemes for archival storage. Data deduplication is now one of the hottest research topics in the backup storage area. In this paper, a survey of the different chunking algorithms used for data deduplication is presented, the most widely used chunking algorithm, Two Threshold Two Divisor (TTTD), is studied, and the algorithm is evaluated with three hashing functions that can be used with it (Rabin fingerprint, Adler, and SHA1). Each function is implemented as a fingerprinting and hashing algorithm, and the execution time and deduplication elimination ratio are compared, the first time this comparison has been performed; the results are reported.
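
The kind of comparison described above can be sketched roughly as follows; this is not the paper's experimental setup. Adler-32 and SHA-1 come from the Python standard library, a Rabin fingerprint would need a third-party implementation, and the toy chunk list is invented purely to show how elimination ratio and execution time are measured per hash function.

    import hashlib
    import time
    import zlib

    def dedup_ratio(chunks, fingerprint):
        """Fraction of bytes eliminated when chunks with equal fingerprints are stored once."""
        seen, kept = set(), 0
        for chunk in chunks:
            fp = fingerprint(chunk)
            if fp not in seen:
                seen.add(fp)
                kept += len(chunk)
        total = sum(len(c) for c in chunks)
        return 1 - kept / total

    fingerprints = {
        "adler32": lambda c: zlib.adler32(c),             # fast, weak checksum
        "sha1":    lambda c: hashlib.sha1(c).hexdigest()  # slower, collision-resistant
    }

    # toy workload: many duplicate chunks plus some unique ones
    chunks = [b"hello world" * 50] * 100 + [b"unique-%d" % i for i in range(50)]
    for name, fp in fingerprints.items():
        t0 = time.perf_counter()
        ratio = dedup_ratio(chunks, fp)
        print(f"{name}: elimination ratio = {ratio:.3f}, time = {time.perf_counter() - t0:.6f}s")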

Frame-Based Change Detection Using Histogram and Threshold to Separate Moving Objects from Dynamic Background

Journal: Iraqi Journal of Science

Publication year: 2024

Publication date: 2024-07-17

Detecting and subtracting moving objects from the background is one of the most important research areas. The development of cameras and their widespread use in security, surveillance, and many other fields makes this problem unavoidable. The difficulty in this area is the unstable classification of pixels as foreground or background. This paper proposes a background subtraction algorithm based on the histogram, in which the classification threshold is calculated adaptively according to many tests. The performance of the proposed algorithm is compared with state-of-the-art methods on complex dynamic scenes.
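
A minimal sketch of the general idea, frame differencing with a threshold read off the difference histogram, is given below. The paper's adaptive threshold calculation is not reproduced here; a simple cumulative-histogram percentile stands in for it, and the synthetic frames are invented for demonstration.

    import numpy as np

    def moving_object_mask(frame, background, percentile=90):
        """Return a boolean foreground mask for one grayscale frame."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        hist, _ = np.histogram(diff, bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / hist.sum()
        # smallest difference value whose cumulative share reaches the percentile
        threshold = np.searchsorted(cdf, percentile / 100)
        return diff > threshold

    # synthetic background with mild sensor noise and one bright moving object
    rng = np.random.default_rng(0)
    background = rng.integers(20, 40, size=(120, 160), dtype=np.uint8)
    frame = background + rng.integers(0, 5, size=background.shape, dtype=np.uint8)
    frame[40:80, 60:100] = 200
    mask = moving_object_mask(frame, background)
    print("foreground pixels:", int(mask.sum()))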

Hybrid Color Image Compression Using Signals Decomposition with Lossy and Lossless Coding Schemes

Journal: 2023 International Conference on Information Technology, Applied Mathematics and Statistics (ICITAMS)

Publication year: 2024

Publication date: 2023-03-28

This paper proposes a new, improved compression technique for color images. Compression has become more critical nowadays because of the increasing demand for internet use and the exchange of large numbers of images, videos, and audio files. The technique first converts an image to YUV. Signal decomposition then divides the image into a significant component (C1) and a small component (C2): the large part holds the energetic content, while the small part holds the detail. Each component is passed through different processing steps. The most energetic part goes through a sequence of lossless coding steps, Bit Plane Coding followed by Quadtree Coding, to preserve the quality of the main features, while the detail component goes through a series of lossy coding steps using a controlled wavelet-based error criterion and quantization. Also, delta pulse code …
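
The first two stages described above, the YUV conversion and the split of each channel into a significant component (C1) and a small detail component (C2), can be sketched as follows. The BT.601 conversion matrix and the magnitude threshold are assumptions for illustration; the later bit-plane, quadtree, and wavelet coding stages are not reproduced here.

    import numpy as np

    def rgb_to_yuv(rgb):
        """Convert an RGB image with values in [0, 1] to YUV (BT.601 weights)."""
        m = np.array([[ 0.299,  0.587,  0.114],
                      [-0.147, -0.289,  0.436],
                      [ 0.615, -0.515, -0.100]])
        return rgb @ m.T

    def decompose(channel, threshold=0.1):
        """Split a channel into an energetic part C1 and a small-detail part C2."""
        c1 = np.where(np.abs(channel) >= threshold, channel, 0.0)  # significant values
        c2 = channel - c1                                          # residual detail
        return c1, c2

    rgb = np.random.rand(8, 8, 3)                   # toy image
    y, u, v = np.moveaxis(rgb_to_yuv(rgb), -1, 0)
    c1, c2 = decompose(y)
    print("C1 nonzeros:", int(np.count_nonzero(c1)), "| max detail:", float(np.abs(c2).max()))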