Sunday, March 9, 2025

Data Structure

 A Data Structure is a specialized way to organize, manage, and store data in a computer so that it can be used efficiently. It defines the relationship between data, how the data is organized, and the operations that can be performed on the data.  

Types of Data Structures  

1. Linear Data Structure 

In Linear Data Structures, elements are arranged sequentially or linearly, where each element is connected to its previous and next element.  

Array:  

Collection of elements of the same data type.  

Fixed size and stored in contiguous memory locations.  

Example: `int arr[5] = {1, 2, 3, 4, 5};`  


Linked List:  

Consists of nodes where each node contains data and a pointer to the next node.  

Types:  

  1. Singly Linked List – Each node points to the next node.  
  2. Doubly Linked List – Each node points to the previous and next node.  
  3. Circular Linked List – The last node connects to the first node.  
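The node-and-pointer structure above can be sketched as a minimal singly linked list in Python (class and method names are illustrative, not from the notes):

```python
class Node:
    """A single node: data plus a pointer to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def append(self, data):
        """Walk to the tail and attach a new node."""
        node = Node(data)
        if self.head is None:
            self.head = node
            return
        current = self.head
        while current.next:
            current = current.next
        current.next = node

    def to_list(self):
        """Collect all values by following the next pointers."""
        values, current = [], self.head
        while current:
            values.append(current.data)
            current = current.next
        return values

lst = SinglyLinkedList()
for x in (1, 2, 3):
    lst.append(x)
print(lst.to_list())  # [1, 2, 3]
```

A doubly linked list would add a `prev` pointer to each node, and a circular list would point the tail's `next` back at the head.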


Stack (LIFO - Last In First Out):  

Linear data structure where elements are added and removed from the same end called the top.  

Operations:  

  • Push – Add element to the stack.  
  • Pop – Remove element from the stack.  
  • Peek – Get the top element without removing it.  
  • Example: Stack of plates.  
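As a minimal sketch of the operations above, a Python list used from one end behaves as a stack (the plate names are illustrative):

```python
stack = []

# Push – add elements to the top (the end of the list)
stack.append("plate 1")
stack.append("plate 2")
stack.append("plate 3")

# Peek – look at the top element without removing it
top = stack[-1]

# Pop – remove from the same end: last in, first out
removed = stack.pop()

print(top, removed, stack)  # plate 3 plate 3 ['plate 1', 'plate 2']
```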


Queue (FIFO - First In First Out): 

Elements are inserted from the rear and removed from the front.  

Types:  

  1. Simple Queue – Insertion at rear, deletion from front.  
  2. Circular Queue – Last element is connected to the first.  
  3. Deque (Double Ended Queue) – Insertion and deletion can happen from both ends.  
  4. Priority Queue – Elements are dequeued based on priority.  
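The queue variants above can be sketched with Python's `collections.deque` (FIFO and double-ended) and `heapq` (priority queue); the task names are made up:

```python
from collections import deque
import heapq

# Simple queue: insertion at the rear, deletion from the front (FIFO)
queue = deque()
queue.append("task 1")
queue.append("task 2")
queue.append("task 3")
first = queue.popleft()        # "task 1" leaves first

# Deque: insertion and deletion are allowed at both ends
queue.appendleft("urgent task")

# Priority queue: elements leave in priority order, not arrival order
pq = []
heapq.heappush(pq, (2, "low priority job"))
heapq.heappush(pq, (1, "high priority job"))
priority, job = heapq.heappop(pq)   # (1, "high priority job")

print(first, list(queue), job)
```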


2. Non-Linear Data Structure  

In Non-Linear Data Structures, elements are not arranged sequentially, and there can be multiple relationships between elements.  

Tree: 

Hierarchical data structure consisting of nodes.  

Types:  

  1. Binary Tree – Each node has at most two children.  
  2. Binary Search Tree (BST) – Left child is smaller, right child is larger.  
  3. Heap – Complete binary tree, used for priority queues.  



Example: Family tree, File directory system.  
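The BST ordering rule above (left child smaller, right child larger) can be sketched in a few lines of Python (class and function names are illustrative):

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None   # subtree of smaller keys
        self.right = None  # subtree of larger keys

def insert(root, key):
    """Descend left or right by comparison until an empty slot is found."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Each comparison discards half the remaining tree (when balanced)."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in (50, 30, 70, 20, 40):
    root = insert(root, k)
print(search(root, 40), search(root, 99))  # True False
```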


Graph:  

Consists of nodes (vertices) and edges (connections).  

Types:  

  1. Directed Graph (Digraph) – Edges have direction.  
  2. Undirected Graph – Edges do not have direction.  
  3. Weighted Graph – Edges have weights (cost).  

Example: Social network, Google Maps.  
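The vertex-and-edge structure above can be sketched with adjacency lists, using plain Python dictionaries (the vertex names and weights are made up for illustration):

```python
# Undirected, unweighted graph: each vertex maps to the set of its neighbors
social = {
    "alice": {"bob", "carol"},
    "bob":   {"alice"},
    "carol": {"alice"},
}

# Directed, weighted graph: each vertex maps to {neighbor: weight}
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"C": 1},
    "C": {},
}

print("bob" in social["alice"])  # True: the edge alice-bob exists
print(roads["A"]["C"])           # 2: the edge A -> C has weight (cost) 2
```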


3. Hashing  

Technique that maps a key to a small table index using a hash function, allowing faster access to large amounts of data.  

Hash Table stores data in the form of key-value pairs.  

Example: Storing student records with roll numbers as keys.  
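The roll-number example above maps directly onto Python's built-in dict, which is itself a hash table (the names and numbers are made up):

```python
# dict is a hash table: roll numbers (keys) map to records (values)
students = {}
students[101] = {"name": "Asha", "grade": "A"}
students[102] = {"name": "Ravi", "grade": "B"}

# Lookup hashes the key straight to its slot: O(1) on average
record = students[101]
print(record["name"])  # Asha
```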


4. File Structure  

Used to store data in secondary storage (hard drive, SSD, etc.).  

Example: Files, databases.  


Why Are Data Structures Important? 

  • Efficient data management.
  • Faster searching, sorting, and processing.  
  • Used in algorithms, databases, operating systems, etc. 


Advantages and Disadvantages of Data Structures

1. Array (Linear Data Structure)

Advantages: 

  • Simple and easy to implement.  
  • Fast access to elements using index (random access).  
  • Can be used to implement other data structures like stack, queue, etc.  

Disadvantages: 

  • Fixed size (in static arrays).  
  • Insertion and deletion are costly operations.  
  • Wastage of memory if the array size is larger than required.  


2. Linked List (Linear Data Structure)

Advantages:

  • Dynamic size (can grow or shrink during runtime).  
  • Efficient memory utilization (no wastage of memory).  
  • Insertion and deletion are easy compared to arrays.  

Disadvantages: 

  • No direct access to elements (sequential access).  
  • Requires extra memory for storing pointers.  
  • Traversing the list takes more time.  


3. Stack (LIFO - Last In First Out)

Advantages:

  • Simple and easy to use.  
  • Useful in function call management, recursion, and backtracking.  
  • Memory is efficiently managed using stacks.  

Disadvantages: 

  • Fixed size if implemented using an array.  
  • Stack overflow may occur if the memory limit is exceeded.  
  • Difficult to access elements other than the top.  


4. Queue (FIFO - First In First Out)

Advantages:  

  • Simple and easy to implement.  
  • Useful in task scheduling, CPU scheduling, and resource sharing.  
  • Can manage data in sequential order.  

Disadvantages:

  • Fixed size in case of arrays.  
  • In a simple (linear) array-based queue, deletion requires shifting the remaining elements, which takes O(n) time.  
  • Circular queues are complex to implement.  


5. Tree (Non-Linear Data Structure)

Advantages:  

  • Hierarchical structure is useful for organizing data.  
  • Quick searching, insertion, and deletion in Binary Search Tree (BST).  
  • Useful in file systems, databases, and decision-making systems.  

Disadvantages:  

  • Complex to implement and manage.  
  • Requires extra memory for storing pointers.  
  • Balancing the tree (in AVL, Red-Black Trees) can be complex.  


6. Graph (Non-Linear Data Structure)

Advantages:

  • Can represent complex relationships (like social networks, maps, web pages).  
  • Efficient for finding shortest paths (Dijkstra's algorithm).  
  • Used in networks, navigation, and recommendation systems.  

Disadvantages 

  • Complex to implement.  
  • Requires large memory to store edges and vertices.  
  • Traversal algorithms can be time-consuming.  


7. Hashing (Data Structure)

Advantages: 

  • Provides fast access to data using keys (O(1) in ideal case).  
  • Useful in databases and caching mechanisms.  
  • Efficient for searching large datasets.  

Disadvantages:  

  • Hash collisions can occur (two keys having the same hash value).  
  • Requires more memory to resolve collisions.  
  • Rehashing may be required when the hash table becomes full.  


8. File Structure (Storage Data Structure)

Advantages:  

  • Provides a way to store large amounts of data.  
  • Easy to access, read, and write data.  
  • Used in databases, operating systems, etc.  

Disadvantages:  

  • Slow access compared to RAM.  
  • Requires file management systems.  
  • Data corruption or loss can occur.  


Importance of Data Structures in Computer Science  


1. Efficient Data Management  

Data structures help in organizing and managing large volumes of data efficiently.  

Example:  

  • Array – Store elements in a fixed size.  
  • Linked List – Store dynamic data without memory wastage.  
  • Graph – Represent complex relationships like social networks, maps, etc.  


2. Improved Performance of Algorithms  

Using the right data structure improves the time complexity of algorithms.  

Example:  

  • Searching in an Unsorted Array → O(n) (linear time).  
  • Searching in a Binary Search Tree (BST) → O(log n) (logarithmic time).  
  • Hashing → O(1) (constant time in the best case).  
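As a rough sketch of the complexity difference above, compare a linear scan with a binary search over the same sorted data (function names are illustrative; binary search assumes sorted input):

```python
import bisect

data = list(range(100_000))  # already sorted

def linear_search(seq, target):
    """O(n): may examine every element before finding the target."""
    for i, x in enumerate(seq):
        if x == target:
            return i
    return -1

def binary_search(seq, target):
    """O(log n): halves the search space at each step."""
    i = bisect.bisect_left(seq, target)
    return i if i < len(seq) and seq[i] == target else -1

# Both find the same index, but the linear scan does ~100,000
# comparisons while the binary search needs only ~17.
print(linear_search(data, 99_999))  # 99999
print(binary_search(data, 99_999))  # 99999
```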


3. Memory Utilization

Proper data structures ensure optimal use of memory without wastage.  

Example:  

  • Linked List – Uses exact memory as needed without fixed size.  
  • Dynamic Arrays – Can increase/decrease size based on demand.  


4. Easy Data Retrieval and Access  

Data structures like Hash Tables, Binary Search Trees (BST), and Graphs allow fast data retrieval.  

Example:  

  • Hash Table – Search in constant time O(1).  
  • Tree – Search in logarithmic time O(log n).  


5. Application in Real-World Problems  

Data structures are widely used in solving real-world problems.  

Examples:

  • Social Networks: Use Graphs to connect people.  
  • Google Maps: Use Graphs and Trees for navigation.  
  • E-commerce websites: Use Hash Tables for fast product searches.  



6. Efficient Algorithm Design  

Data structures play a major role in designing efficient algorithms.  

Example:  

  • Dijkstra's Algorithm (Shortest Path) → Uses Graph.  
  • Merge Sort, Quick Sort (Sorting Algorithms) → Use Arrays.  
  • Recursion, Backtracking → Use Stack.  


7. Data Organization in Database Systems  

In databases, data structures are crucial for organizing and retrieving data.  

Example:  

  • B-Tree, B+ Tree – Used in database indexing.  
  • Hash Tables – Used for fast searching in databases.  


8. Support in Artificial Intelligence (AI) and Machine Learning (ML) 

In AI and ML, large datasets are processed using advanced data structures like:  

  • Graphs – Neural Networks, Social Networks.  
  • Trees – Decision Trees, Random Forests.  
  • Hash Tables – Data Caching and Lookup.  


9. Operating System Functionality  

Data structures are the core part of operating systems:  

  • Process Scheduling: Uses Queues.  
  • Memory Management: Uses Linked Lists.  
  • File Management: Uses Trees and Hash Tables.  


10. Problem Solving and Competitive Programming  

In competitive programming, efficient data structures help solve complex problems quickly.  

Example:  

  • Stack – Used in solving recursion problems.  
  • Heap – Used in priority-based problems.  
  • Graph – Used in path-finding problems.  


Conclusion

Data structures are the backbone of computer science. They provide a way to store, organize, and manage data efficiently. Without them, software development, problem-solving, and algorithms would be slow and inefficient. Fields such as Machine Learning, AI, Data Science, Operating Systems, and Databases rely heavily on data structures.  


Wednesday, February 12, 2025

Internet of Things (IoT)

 IoT stands for Internet of Things, a network of devices that are connected to the internet and can share data. IoT devices can be used in many different ways, including in homes, agriculture, and supply chains. 

The Internet of Things refers to the collective network of connected devices and the technology that facilitates communication between devices and the cloud, as well as between the devices themselves.

Internet of Things (IoT) technology has a wide range of applications, and its use is growing rapidly. IoT is the networking of physical objects that contain embedded electronics, enabling them to communicate and sense interactions with each other and with the external environment.


Architecture of IoT

The architecture of IoT is divided into 4 different layers i.e. Sensing Layer, Network Layer, Data processing Layer, and Application Layer. 



Sensing Layer: The sensing layer is the first layer of the Internet of Things architecture and is responsible for collecting data from different sources. This layer includes sensors and actuators that are placed in the environment to gather information about temperature, humidity, light, sound, and other physical parameters. Wired or wireless communication protocols connect these devices to the network layer.

Network Layer: The network layer of an IoT architecture is responsible for providing communication and connectivity between devices in the IoT system. It includes protocols and technologies that enable devices to connect and communicate with each other and with the wider internet. Examples of network technologies that are commonly used in IoT include WiFi, Bluetooth, Zigbee, and cellular networks such as 4G and 5G technology. Additionally, the network layer may include gateways and routers that act as intermediaries between devices and the wider internet, and may also include security features such as encryption and authentication to protect against unauthorized access.

Data processing Layer: The data processing layer of IoT architecture refers to the software and hardware components that are responsible for collecting, analyzing, and interpreting data from IoT devices. This layer is responsible for receiving raw data from the devices, processing it, and making it available for further analysis or action. The data processing layer includes a variety of technologies and tools, such as data management systems, analytics platforms, and machine learning algorithms. These tools are used to extract meaningful insights from the data and make decisions based on that data. Example of a technology used in the data processing layer is a data lake, which is a centralized repository for storing raw data from IoT devices.

Application Layer: The application layer of IoT architecture is the topmost layer that interacts directly with the end-user. It is responsible for providing user-friendly interfaces and functionalities that enable users to access and control IoT devices. This layer includes various software and applications such as mobile apps, web portals, and other user interfaces that are designed to interact with the underlying IoT infrastructure. It also includes middleware services that allow different IoT devices and systems to communicate and share data seamlessly. The application layer also includes analytics and processing capabilities that allow data to be analyzed and transformed into meaningful insights. This can include machine learning algorithms, data visualization tools, and other advanced analytics capabilities.


Why is IoT important?

Improved efficiency

By using IoT devices to automate and optimize processes, businesses can improve efficiency and productivity. For example, IoT sensors can be used to monitor equipment performance and detect or even resolve potential issues before they cause downtime, reducing maintenance costs and improving uptime.

Data-driven decision-making

IoT devices generate vast amounts of data that can be used to make better-informed business decisions and new business models. By analyzing this data, businesses can gain insights into customer behavior, market trends, and operational performance, allowing them to make more informed decisions about strategy, product development, and resource allocation.

Cost-savings

By reducing manual processes and automating repetitive tasks, IoT can help businesses reduce costs and improve profitability. For example, IoT devices can be used to monitor energy usage and optimize consumption, reducing energy costs and improving sustainability.

Enhanced customer experience

By using IoT technology to gather data about customer behavior, businesses can create more personalized and engaging experiences for their customers. For example, retailers can use IoT sensors to track customer movements in stores and deliver personalized offers based on their behavior.


Technologies that make IoT possible:

Sensors and actuators: Sensors are devices that can detect changes in the environment, such as temperature, humidity, light, motion, or pressure. Actuators are devices that can cause physical changes in the environment, such as opening or closing a valve or turning on a motor. These devices are at the heart of IoT, as they allow machines and devices to interact with the physical world. Automation is possible when sensors and actuators work to resolve issues without human intervention.

Connectivity technologies: To transmit IoT data from sensors and actuators to the cloud, IoT devices need to be connected to the internet. There are several connectivity technologies that are used in IoT, including wifi, Bluetooth, cellular, Zigbee, and LoRaWAN.

Cloud computing: The cloud is where the vast amounts of data that is generated by IoT devices are stored, processed, and analyzed. Cloud computing platforms provide the infrastructure and tools that are needed to store and analyze this data, as well as to build and deploy IoT applications.

Big data analytics: To make sense of the vast amounts of data generated by IoT devices, businesses need to use advanced analytics tools to extract insights and identify patterns. These tools can include machine learning algorithms, data visualization tools and predictive analytics models.

Security and privacy technologies: As IoT deployments become more widespread, IoT security and privacy become increasingly important. Technologies such as encryption, access controls and intrusion detection systems are used to protect IoT devices and the data they generate from cyberthreats.


Characteristics of IoT

  • Massively scalable and efficient.
  • Traditional IP-based addressing alone will not be sufficient for the number of connected objects expected in the future.
  • An abundance of physical objects that do not use IP must still be addressable, which IoT makes possible.
  • Devices typically consume little power and should be programmed to sleep automatically when not in use.
  • A device that is connected to another device right now may not be connected at another instant of time.
  • Intermittent connectivity – IoT devices aren’t always connected. To save bandwidth and battery, devices are powered off periodically when not in use; otherwise connections might turn unreliable and thus prove inefficient.


Advantages of IoT

  • Improved efficiency and automation of tasks.
  • Increased convenience and accessibility of information.
  • Better monitoring and control of devices and systems.
  • Greater ability to gather and analyze data.
  • Improved decision-making.
  • Cost savings.


Disadvantages of IoT

  • Security concerns and potential for hacking or data breaches.
  • Privacy issues related to the collection and use of personal data.
  • Dependence on technology and potential for system failures.
  • Limited standardization and interoperability among devices.
  • Complexity and increased maintenance requirements.
  • High initial investment costs.
  • Limited battery life on some devices.


Examples of IoT applications

Healthcare

In the healthcare industry, IoT devices can be used to monitor patients remotely and collect real-time data on their vital signs, such as heart rate, blood pressure and oxygen saturation. This sensor data can be analyzed to detect patterns and identify potential health issues before they become more serious. IoT devices can also be used to track medical equipment, manage inventory and monitor medication compliance.

Manufacturing

Industrial IoT devices can be used in manufacturing to monitor machine performance, detect equipment failures and optimize production processes. For example, sensors can be used to monitor the temperature and humidity in a manufacturing facility, ensuring that conditions are optimal for the production of sensitive products. IoT devices can also be used to track inventory, manage supply chains and monitor the quality of finished products. Industrial IoT is such an expansive new technology space that it is sometimes referred to by its own abbreviation: IIoT (Industrial IoT). 

Retail

In the retail industry, IoT devices can be used to track customer behavior, monitor inventory levels and optimize store layouts. For example, sensors can be used to track foot traffic in a store and analyze customer behavior, allowing retailers to optimize product placement and improve the customer experience. IoT devices can also be used to monitor supply chains, track shipments and manage inventory levels.

Agriculture

IoT devices can be used in agriculture to monitor soil conditions, weather patterns and crop growth. For example, sensors can be used to measure the moisture content of soil, ensuring that crops are irrigated at the optimal time. IoT devices can also be used to monitor livestock health, track equipment and manage supply chains. Low-power or solar-powered devices can often be used with minimal oversight in remote locations.

Transportation

In the transportation industry, IoT devices can be used to monitor vehicle performance, optimize routes, and track shipments. For example, sensors can be used to monitor the fuel efficiency of connected cars, reducing fuel costs and improving sustainability. IoT devices can also be used to monitor the condition of cargo, ensuring that it arrives at its destination in optimal condition.


Future of IoT

Growth: The number of IoT devices is expected to continue to grow rapidly, with estimates suggesting that there will be tens of billions of IoT devices in use over the next few years. This growth will be driven by increased adoption across industries, as well as the development of new use cases and applications.


Edge computing: Edge computing is becoming increasingly important for IoT, as it allows data to be processed and analyzed closer to the source of the data, rather than in a centralized data center. This can improve response times, reduce latency and reduce the amount of data that needs to be transferred over IoT networks.


Artificial intelligence and machine learning: AI and machine learning are becoming increasingly important for IoT, as they can be used to analyze vast amounts of data that is generated by IoT devices and extract meaningful insights. This can help businesses make more informed decisions and optimize their operations.


Blockchain: Blockchain technology is being explored as a way to improve security and privacy in the IoT. Blockchain can be used to create secure, decentralized networks for IoT devices, which can minimize data security vulnerabilities.


Sustainability: Sustainability is becoming an increasingly important consideration for IoT, as businesses look for ways to reduce their environmental impact. IoT can be used to optimize energy usage, reduce waste and improve sustainability across a range of industries.


Web development

 Web development is the process of creating websites and applications for the World Wide Web. It involves designing, building, testing, and maintaining websites. 

Types of web development


  1. Front-end development
  2. Back-end development
  3. Full-stack development

Front-end development

The part of a website where the user interacts directly is termed the front end; it is also referred to as the ‘client side’ of the application. Front-end development involves designing the structure, layout, and behavior of the parts of the website that users see and interact with. 

Frontend Technologies

  1. HTML: HTML stands for HyperText Markup Language. It is used to design the front end portion of web pages using markup language. It acts as a skeleton for a website since it is used to make the structure of a website.
  2. CSS: Cascading Style Sheets fondly referred to as CSS is a simply designed language intended to simplify the process of making web pages presentable. It is used to style our website.
  3. JavaScript: JavaScript is a scripting language used to provide a dynamic behavior to our website.
  4. Bootstrap: Bootstrap is a free and open-source tool collection for creating responsive websites and web applications. It is the most popular CSS framework for developing responsive, mobile-first websites that render consistently across major browsers and screen sizes (desktops, tablets, and phones).

Frontend Frameworks

  • React.js : A popular JavaScript library for building dynamic, component-based user interfaces.
  • Angular : A full-fledged framework for building single-page applications (SPAs), with features like two-way data binding and dependency injection.
  • Vue.js : A progressive JavaScript framework that is flexible and can be used for building both simple and complex user interfaces.



Back-end development

The back end is the server side of a website: the part that users cannot see or interact with directly. It focuses on the software behind the scenes, including databases, application programming interfaces (APIs), and architecture, and it is used to store and arrange data.

Server-side Programming Languages:

  • PHP: PHP is a server-side scripting language designed specifically for web development.
  • Java: Java is one of the most popular and widely used programming languages. It is highly scalable.
  • Python: Python is a programming language that lets you work quickly and integrate systems more efficiently.
  • Node.js: Node.js is an open source and cross-platform runtime environment for executing JavaScript code outside a browser.
  • Ruby: Ruby is a dynamic, reflective, object-oriented, general-purpose programming language.
  • C# : C# is a high-level, general-purpose programming language developed by Microsoft.

Databases

  1. MySQL
  2. PostgreSQL
  3. MongoDB
  4. MariaDB
  5. SQLite


Full-stack development

The practice of designing, building, and maintaining the entire software stack of a web application. This includes both the front-end and back-end components. 

Full-stack development refers to the practice of developing both the frontend and backend of a website or web application. Full-stack developers have a deep understanding of both areas and can build end-to-end solutions.

Full Stack Technologies:

  • MERN Stack : MongoDB, Express.js, React, Node.js
  • MEAN Stack : MongoDB, Express.js, Angular, Node.js
  • JAMstack : JavaScript, APIs, Markup
  • Django Stack : Django, MySQL/PostgreSQL, HTML/CSS/JavaScript
  • Spring Boot Stack : Spring Boot, MySQL/PostgreSQL, Java
  • LAMP Stack : Linux, Apache, MySQL, PHP
  • LEMP Stack : Linux, Nginx (“Engine-X”), MySQL, PHP


Web development life cycle 

  1. Gathering information
  2. Planning
  3. Design and layout
  4. Content creation
  5. Development
  6. Testing, review, and launch
  7. Maintenance and updates

Databases


1. Relational Database : 

RDBMS stands for Relational Database Management System. It is the most popular kind of database. Data is stored in rows (tuples) inside tables, and because the data is tabular it can be accessed easily. The relational model was proposed by E.F. Codd. 

A relational database is a way of storing and organizing data that emphasizes precision and interconnection. Imagine it as a well-organized filing cabinet, where each drawer (table) holds neatly filed records (rows) categorized by specific information (columns).

These tables are the building blocks of a relational database. Each one represents a different type of data, like customer information or product details, and every row in a table is a distinct record with its own unique identifier.

What truly sets relational databases apart is their reliance on Structured Query Language (SQL), a powerful tool for interacting with the stored data. Imagine SQL as the librarian who knows exactly where every piece of information resides.

With SQL, users can execute complex queries, update data, and even manage access to the database. This combination of structured storage and robust querying makes relational databases a reliable choice for scenarios where data integrity and accuracy are paramount, such as financial transactions or inventory management.
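As a small sketch of the relational model described above, Python's built-in sqlite3 module can create a table, insert rows, and run an SQL query (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# Each row is a record; each column categorizes the data; the key is unique
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Asha", "Pune"), (2, "Ravi", "Delhi")],
)

# SQL query: structured retrieval by column value
cur.execute("SELECT name FROM customers WHERE city = ?", ("Pune",))
rows = cur.fetchall()
print(rows)  # [('Asha',)]
conn.close()
```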


2. NoSQL : 

NoSQL stands for “not only SQL” (non-SQL). A NoSQL database does not store data in tables the way a relational database does. It is used for storing and fetching data, generally large amounts of it, and many NoSQL systems offer their own query languages and can provide better performance for certain workloads.

NoSQL shines especially in scenarios where data is vast, varied, and rapidly changing. Imagine a toolset where each tool is specialized for a particular task; NoSQL offers this level of specialization in data management.

It handles various data formats, from documents and key-value pairs to complex graphs, making it ideal for applications dealing with unstructured or semi-structured data, like content management systems or big data analytics. At its core, NoSQL prioritizes speed and flexibility, sometimes at the expense of the strict consistency that relational databases uphold.

It’s particularly effective in environments where quick access to data is crucial, and the data structure may evolve over time. This makes NoSQL an appealing choice for emerging tech landscapes, where agility and the ability to process massive amounts of data quickly are key drivers of success.
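A rough sketch of the document-style flexibility described above, using plain Python dictionaries as a stand-in for a document store (the records and fields are made up):

```python
# Two "documents" in the same collection need not share a schema:
# each record carries whatever fields it happens to have.
articles = {
    "a1": {"title": "Intro to IoT", "tags": ["iot", "sensors"]},
    "a2": {"title": "Graphs", "author": "admin", "views": 42},
}

# "a2" has fields that "a1" lacks, and vice versa; reads must
# tolerate missing fields rather than rely on fixed columns.
print(articles["a1"]["tags"])
print(articles["a2"].get("views"))
print(articles["a1"].get("views"))  # None: field absent, not an error
```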


Tuesday, February 11, 2025

Cyber Attack

 


A cyber attack is an attempt to steal data or gain unauthorized access to computers and networks using one or more computers. A cyber attack is often the first step an attacker takes in gaining unauthorized access to individual or business computers or networks before carrying out a data breach.

Cyber criminals use a range of methods and techniques to gain unauthorized access to computers, data, and networks and steal sensitive information.

A cyber attack is any type of offensive action that targets computer information systems, infrastructures, computer networks, or personal computer devices, using various methods to steal, alter, or destroy data or information systems.

The goal of a cyber attack is either to disable the target computer and take it offline or gain access to the computer’s data and infiltrate connected networks and systems. Cyber attacks also differ broadly in their sophistication, with cyber criminals launching both random and targeted attacks on businesses. Attackers deploy a wide range of methods to begin a cyber attack, such as denial of service, malware, phishing, and ransomware.

An example is CMA CGM, one of the largest container shipping companies in the world. The firm suffered a cyber attack that originally targeted its servers, which then led to a data breach. The September 2020 attack occurred as malware was used to target the firm’s peripheral servers, which led to CMA CGM taking down access to its online services.

Malware: A company does not take the appropriate cyber attack prevention steps and allows its employees to visit any website they like. An employee goes to a fake site that automatically downloads malware onto their computer. The malware sets up a backdoor for a future ransomware attack.

Phishing: A phishing email, one of the most common cyber attack types, gets sent to an employee telling them they need to update their bank account password. They are led to a fake site, and a hacker collects all the information they put in.

These cyber attack examples are fairly simple, not the sophisticated types some criminal syndicates unleash, but they are still some of the most common methods malicious actors use to exploit companies and their employees.


Types of cyber attacks



1. Malware

Malware is malicious software designed to cause damage to computers, networks, and servers. There are different forms of malware, including Trojans, viruses, and worms, and they all reproduce and spread through a computer or network. This allows the hacker to gain deeper access into the target network to steal data, cause damage to devices, render networks inoperable, or take control of systems.

  • Trojans :- A Trojan or a Trojan horse is a program that hides in a useful program and usually has a malicious function. A major difference between viruses and Trojans is that Trojans do not self-replicate. In addition to launching attacks on a system, a Trojan can establish a back door that can be exploited by attackers. For example, a Trojan can be programmed to open a high-numbered port so the hacker can use it to listen and then perform an attack. 
  •  Logic bombs :- A logic bomb is a type of malicious software that is appended to an application and is triggered by a specific occurrence, such as a logical condition or a specific date and time. 
  • Worms :- Worms differ from viruses in that they do not attach to a host file; they are self-contained programs that propagate across networks and computers. Worms are commonly spread through email attachments; opening the attachment activates the worm program. A typical worm exploit involves the worm sending a copy of itself to every contact in an infected computer’s email address book. In addition to conducting malicious activities, a worm spreading across the internet and overloading email servers can result in denial-of-service attacks against nodes on the network. 
  • Droppers :- A dropper is a program used to install viruses on computers. In many instances, the dropper is not infected with malicious code and, therefore might not be detected by virus-scanning software. A dropper can also connect to the internet and download updates to virus software that is resident on a compromised system. 
  • Ransomware :- Ransomware is a type of malware that blocks access to the victim’s data and threatens to publish or delete it unless a ransom is paid. While some simple computer ransomware can lock the system in a way that is not difficult for a knowledgeable person to reverse, more advanced malware uses a technique called cryptoviral extortion, which encrypts the victim’s files in a way that makes them nearly impossible to recover without the decryption key.
  • Adware :- Adware is a software application used by companies for marketing purposes; advertising banners are displayed while any program is running. Adware can be automatically downloaded to your system while browsing any website and can be viewed through pop-up windows or through a bar that appears on the computer screen automatically. 
  • Spyware :- Spyware is a type of program that is installed to collect information about users, their computers or their browsing habits. It tracks everything you do without your knowledge and sends the data to a remote user. It also can download and install other malicious programs from the internet. Spyware works like adware but is usually a separate program that is installed unknowingly when you install another freeware application. 

2. Phishing


A phishing attack tricks a target into downloading malware or entering sensitive information into spoofed websites. These cyber attack methods are typically launched via email, with the attacker creating messages that look legitimate and may appear to be from a trusted sender. However, they will contain malware within an attachment or a malicious hyperlink that takes the recipient to a fake website that asks them to enter their login credentials or banking details.

Some phishing attacks take a blanket approach to try and catch as many victims as possible, but others are highly targeted and carefully researched to steal data from valuable individuals. Phishing is not restricted to email, however, as attacks are increasingly targeting mobile devices.


3. Ransomware

Ransomware attacks are a financially fueled form of malware attack. Attackers send messages containing a malicious attachment that, when downloaded, encrypts specific data and files or entire computers. The attacker will then demand a ransom fee from the victim and will only release or restore access to the data upon payment.

Ransomware attacks accounted for $8 billion of damage in 2018, of which only $1 billion came from ransom payments, and the rest was from reputational damage and lost revenue caused by downtime.


4. Denial of Service (DoS)

A denial-of-service (DoS) attack is designed to prevent an online service from working properly. It is typically carried out by an attacker flooding a website with huge amounts of traffic or requests, in an attempt to overwhelm its systems and take them offline. A more advanced form is the distributed denial-of-service (DDoS) attack, in which an attacker takes control of several computers and uses them together to overload the target.
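A common first line of defense against request flooding is rate limiting. The sketch below is a minimal token-bucket limiter, not any particular product's implementation; the rate and capacity numbers are purely illustrative.

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allow at most `rate` requests per
    second, with bursts of up to `capacity` requests absorbed."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)       # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(20)]   # a sudden flood of 20 requests
print(results.count(True))  # roughly 10: the burst is absorbed, the rest dropped
```

A real deployment would keep one bucket per client IP (or per API key), so a single flooding source is throttled without affecting legitimate users.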


5. Man-in-the-Middle (MITM)

MITM attacks enable a malicious actor to position themselves between the target victim and an online service the user accesses. An example of this is an attacker creating a spoofed, free-to-access Wi-Fi network. When the user connects to or signs in to the network, the attacker can steal the login credentials and data they use while on it.


6. Cryptojacking

A cryptojacking attack occurs when a bad actor takes control of a computer, mobile device, or server to mine for cryptocurrency. The attack begins either with malware being installed on the machine or with JavaScript code that runs mining operations inside the user’s browser.

Cryptojacking is financially motivated, and the method is designed to remain hidden from the target while using their computing resources to mine cryptocurrency. Often, the only sign of cryptojacking is a loss or reduction in computer performance or overactive cooling fans.


7. SQL injection

Attackers use Structured Query Language (SQL) injection to exploit vulnerabilities and seize control of a database. Many websites and web applications store data in SQL databases and use SQL queries to exchange user data with those databases. If an attacker spots a vulnerability in a webpage, they can perform an SQL injection to discover user credentials and mount a cyber attack.

In some cases, they may be able to alter and add data within a database, delete records, transfer money, and even attack internal networks.
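The mechanics of SQL injection, and the standard defense of parameterized queries, can be shown with a small sketch using Python's `sqlite3` module. The table, column names, and credentials here are invented for illustration only.

```python
import sqlite3

# Throwaway in-memory database with one user record (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# VULNERABLE: building the query by string concatenation lets crafted
# input rewrite the query's logic.
def login_unsafe(username, password):
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

# SAFE: a parameterized query treats the input strictly as data.
def login_safe(username, password):
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# The classic payload turns the WHERE clause into a condition that is
# always true, bypassing the password check entirely.
payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True  - attacker is "logged in"
print(login_safe("alice", payload))    # False - payload is just a string
```

The same principle applies in every language and database driver: never splice user input into SQL text; always bind it as a parameter.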


8. Zero-day exploits

Zero-day attacks target vulnerabilities in software code that businesses have not yet discovered, and as a result, have not been able to fix or patch. Once an attacker spots a code vulnerability, they create an exploit that enables them to infiltrate the business before it realizes there is a problem. They are then free to collect data, steal user credentials, and enhance their access rights within an organization.

Attackers can often remain active within business systems without being noticed for months and even years. Zero-day vulnerability exploit techniques are commonly available on the dark web, often for purchase by government agencies to use for hacking purposes.


9. DNS tunneling

DNS tunneling is a cyber attack method that targets the Domain Name System (DNS), a protocol that translates web addresses into Internet Protocol (IP) addresses. DNS is widely trusted, and because it is not intended for transferring data, it is often not monitored for malicious activity. This makes it an effective target to launch cyber attacks against corporate networks.

One such method is DNS tunneling, which exploits the DNS to tunnel malicious data and malware. It begins with an attacker registering a domain whose name server points to the attacker’s server, which has a tunneling malware program installed on it. The attacker infiltrates a computer and is then free to send DNS requests through their server, establishing a tunnel they can use to steal data and carry out other malicious activity.
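The core trick of DNS tunneling is that arbitrary data can be smuggled inside query names. The sketch below shows only the encoding step, with no network activity; the domain name and helper function are hypothetical, invented for illustration.

```python
import base64

def encode_for_dns(data: bytes, domain: str, max_label: int = 63):
    """Sketch of how exfiltrated data fits into DNS queries: base32-encode
    the payload (DNS names are case-insensitive and limited in charset)
    and split it into labels of at most 63 characters, the DNS limit."""
    encoded = base64.b32encode(data).decode().lower()
    labels = [encoded[i:i + max_label]
              for i in range(0, len(encoded), max_label)]
    # Each chunk becomes the subdomain of one DNS query to the
    # attacker-controlled domain, whose name server records the lookups.
    return [f"{label}.{domain}" for label in labels]

queries = encode_for_dns(b"stolen credentials: user=admin", "evil-example.com")
for q in queries:
    print(q)
```

Because every query for `*.evil-example.com` is eventually forwarded to the attacker's name server, the data arrives there piece by piece even though the victim's network never opened a "real" outbound connection, which is why defenders monitor for unusually long or high-entropy DNS labels.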



Sunday, February 2, 2025

Cyber Security


 Cyber Security is the protection of internet-connected systems, including hardware, software and data, from cyber attacks.

In a computing context, security comprises Cyber Security and physical security; both are used by enterprises to protect against unauthorized access to data centers and other computerized systems.

Information security, which is designed to maintain the confidentiality, integrity and availability of data, is a subset of Cyber Security.


Elements of Cyber Security

  •  Application security: Application security is the use of software, hardware, and procedural methods to protect applications from external threats.
  •  Information security: Information security is a set of strategies for managing the processes, tools and policies necessary to prevent, detect, document and counter threats to digital and non-digital information. Information security responsibilities include establishing a set of business processes that will protect information assets regardless of how the information is formatted or whether it is in transit, is being processed or is at rest in storage.
  •  Network security: Network security is any activity designed to protect the usability and integrity of your network and data. It includes both hardware and software technologies. Effective network security manages access to the network. It targets a variety of threats and stops them from entering or spreading on your network.
  • Disaster recovery/business continuity planning: A business continuity plan (BCP) is a document that consists of the critical information an organization needs to continue operating during an unplanned event.
  • Operational security: OPSEC (operational security) is an analytical process that classifies information assets and determines the controls required to protect these assets.
  • End-user education: Not educating your end-users in cybersecurity initiatives is like trying to keep a flood at bay using a screen door. Your end-users are the first line of defense against cybersecurity attacks (like phishing scams).

Types of Cyber Security threats:

The process of keeping up with new technologies, security trends and threat intelligence is a challenging task. However, it's necessary in order to protect information and other assets from cyberthreats, which take many forms.

➢ Ransomware is a type of malware that involves an attacker locking the victim's computer system files, typically through encryption, and demanding a payment to decrypt and unlock them.

➢ Malware is any file or program used to harm a computer user, such as worms, computer viruses, Trojan horses and spyware.

➢ Social engineering is an attack that relies on human interaction to trick users into breaking security procedures in order to gain sensitive information that is typically protected.

➢ Phishing is a form of fraud where fraudulent emails are sent that resemble emails from reputable sources; however, the intention of these emails is to steal sensitive data, such as credit card or login information.

What can Cyber Security prevent?

The use of Cyber Security can help prevent cyber attacks, data breaches and identity theft and can aid in risk management. When an organization has a strong sense of network security and an effective incident response plan, it is better able to prevent and mitigate these attacks. For example, end user protection defends information and guards against loss or theft while also scanning computers for malicious code.


Challenges in Cybersecurity and trends:

1. Ransomware Evolution
Ransomware is the bane of cybersecurity, IT, data professionals, and executives. Perhaps nothing is worse than a spreading virus that latches onto customer and business information and can only be removed if you meet the cybercriminal’s egregious demands. And usually, those demands land in the hundreds of thousands (if not millions) of dollars. Ransomware attacks are also one of the fastest-growing areas of cybercrime; the number of attacks has risen 36 percent this year.


2. AI Expansion
Robots might be able to help defend against incoming cyber-attacks. Between 2016 and 2025, businesses will spend almost $2.5 billion on artificial intelligence to prevent cyberattacks.


3. IoT Threats
The vast majority of humans in first-world countries have an iPhone in their pockets, a computer at work, a television at home, and a tablet in their cars.

The Internet of Things is making sure that every single device you own is connected. Your refrigerator can tell you when the milk runs out. Alexa can order you a pizza.

Of course, all of that connection carries with it massive benefits, which is what makes it so appealing in the first place. You no longer have to log in on multiple devices. You can easily control your TV with your phone. And you might even be able to control your at-home thermostat from other digital devices.

The problem is that all of that interconnectedness makes consumers highly susceptible to cyberattacks. In fact, one study revealed that 70 percent of IoT devices have serious security vulnerabilities. Specifically, insecure web interfaces and data transfers, insufficient authentication methods, and a lack of consumer security knowledge leave users open to attacks.

And that truth is compounded by the fact that so many consumer devices are now interconnected. In other words, if you access one device, you’ve accessed them all. Evidently, with more convenience comes more risk. That’s a risk that security professionals need to be prepared to face by integrating password requirements, user verification, time-out sessions, two-factor authentication, and other sophisticated security protocols.


4. Blockchain Revolution
2017 ended with a spectacular rise in the valuation and popularity of cryptocurrencies like Bitcoin and Ethereum. These cryptocurrencies are built upon blockchains, the technical innovation at the core of the revolution: a decentralized and secure record of transactions.

What does blockchain technology have to do with cybersecurity? It's a question that security professionals have only just started asking. As 2018 progresses, you'll likely see more people with answers.

While it's difficult to predict what other developments blockchain systems will offer with regard to cybersecurity, professionals can make some educated guesses. Companies are targeting a range of use cases that blockchain helps enable, from medical records management to decentralized access control to identity management. As the application and utility of blockchain in a cybersecurity context emerges, there will be a healthy tension, but also complementary integrations, with traditional, proven cybersecurity approaches. You will undoubtedly see variations in approach between public and private blockchains.

One thing's for sure, though: with blockchain technology, cybersecurity will likely look much different than it has in the past.


5. Serverless Apps Vulnerability
Serverless apps can invite cyber-attacks. Customer information is particularly at risk when users access your application off-server, or locally, on their device.


Thursday, January 30, 2025

Software Engineering

 Software Engineering is the process of designing, developing, testing, and maintaining software. It is a systematic and disciplined approach to software development that aims to create high-quality, reliable, and maintainable software.

The term software engineering is the product of two words: software and engineering. Software is a collection of integrated programs; it consists of carefully organized instructions and code written by developers in any of various programming languages, together with computer programs and related documentation such as requirements, design models, and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.

Software engineering includes a variety of techniques, tools, and methodologies, including requirements analysis, design, testing, and maintenance. It is a rapidly evolving field, and new tools and technologies are constantly being developed to improve the software development process. By following the principles of software engineering and using the appropriate tools and methodologies, software developers can create high-quality, reliable, and maintainable software that meets the needs of its users. Software Engineering is mainly used for large projects based on software systems rather than single programs or applications.

The main goal of Software Engineering is to develop software applications that improve quality, budget, and time efficiency. Software Engineering ensures that the software being built is consistent and correct, delivered on budget and on time, and meets the stated requirements. Software is the program required to work on input, processing, output, storage, and control.


Why is Software Engineering required?

  • To manage Large software
  • For more Scalability
  • Cost Management
  • To manage the dynamic nature of software
  • For better quality Management


Need of Software Engineering

The necessity of software engineering appears because of a higher rate of progress in user requirements and the environment on which the program is working.

Huge Programming: It is simpler to build a wall than a house or a building; similarly, as programs grow extensive, engineering has to step in to give development a scientific process.

Adaptability: If the software procedure were not based on scientific and engineering ideas, it would be simpler to re-create new software than to scale an existing one.

Cost: The hardware industry has demonstrated its skills, and mass manufacturing has driven down the cost of computer and electronic hardware. But the cost of software remains high if the proper process is not adopted.

Dynamic Nature: The continually growing and adapting nature of software hugely depends upon the environment in which the user works. If that environment is continually changing, new upgrades need to be made to the existing software.

Quality Management: A better software development process yields a better-quality software product.


Types of Software

It can be categorized into different types:

  1. Based on Application
  2. Based on Copyright


1. Based on Application

The software can be classified on the basis of the application it serves, as follows.

1. System Software:

System Software is necessary to manage computer resources and support the execution of application programs. Software like operating systems, compilers, editors and drivers, etc., come under this category. A computer cannot function without the presence of these. Operating systems are needed to link the machine-dependent needs of a program with the capabilities of the machine on which it runs. Compilers translate programs from high-level language to machine language. 

2. Application Software:

Application software is designed to fulfill the user’s requirement by interacting with the user directly. It can be classified into two major categories: generic or customized. Generic Software is software that is open to all and behaves the same for all of its users. Its function is limited and not customized as per the user’s changing requirements. Customized software, on the other hand, comprises software products designed per the client’s requirement, and these are not available for all.  

3. Networking and Web Applications Software:

Networking Software provides the required support necessary for computers to interact with each other and with data storage facilities. Networking software is also used when software is running on a network of computers (such as the World Wide Web). It includes all network management software, server software, security and encryption software, and software to develop web-based applications like HTML, PHP, XML, etc. 

4. Embedded Software:

This type of software is embedded into the hardware normally in the Read-Only Memory (ROM) as a part of a large system and is used to support certain functionality under the control conditions. Examples are software used in instrumentation and control applications like washing machines, satellites, microwaves, etc. 

5. Reservation Software:

A Reservation system is primarily used to store and retrieve information and perform transactions related to air travel, car rental, hotels, or other activities. They also provide access to bus and railway reservations, although these are not always integrated with the main system. These are also used to relay computerized information for users in the hotel industry, making a reservation and ensuring that the hotel is not overbooked. 

6. Business Software:

This category of software is used to support business applications and is the most widely used category of software. Examples are software for inventory management, accounts, banking, hospitals, schools, stock markets, etc. 

7. Entertainment Software:

Education and Entertainment software provides a powerful tool for educational agencies, especially those that deal with educating young children. There is a wide range of entertainment software such as computer games, educational games, translation software, mapping software, etc.  

8. Artificial Intelligence Software:

Software like expert systems, decision support systems, pattern recognition software, artificial neural networks, etc. come under this category. They tackle complex problems that are solved using non-numerical algorithms rather than straightforward computation. 


2. Based on Copyright

Classification of Software can be done based on copyright. These are stated as follows:

1. Commercial Software:

It represents the majority of software that we purchase from software companies, commercial computer stores, etc. In this case, when a user buys software, they acquire a license key to use it. Users are not allowed to make copies of the software. The company owns the copyright of the program.

2. Shareware Software:

Shareware software is also covered under copyright, but the purchasers are allowed to make and distribute copies with the condition that after testing the software, if the purchaser adopts it for use, then they must pay for it. In both of the above types of software, changes to the software are not allowed. 

3. Freeware Software:

In general, according to freeware software licenses, copies of the software can be made both for archival and distribution purposes, but here, distribution cannot be for making a profit. Derivative works and modifications to the software are allowed and encouraged. Decompiling of the program code is also allowed without the explicit permission of the copyright holder.

4. Public Domain Software:

In the case of public domain software, the original copyright holder explicitly relinquishes all rights to the software. Hence, software copies can be made both for archival and distribution purposes with no restrictions on distribution. Modifications to the software and reverse engineering are also allowed.


Software Characteristics

Functionality:

It refers to the degree of performance of the software against its intended purpose. 

Functionality refers to the set of features and capabilities that a software program or system provides to its users. It is one of the most important characteristics of software, as it determines the usefulness of the software for the intended purpose. Examples of functionality in software include:

  • Data storage and retrieval
  • Data processing and manipulation
  • User interface and navigation
  • Communication and networking
  • Security and access control
  • Reporting and visualization
  • Automation and scripting

Reliability:

A set of attributes that bears on the capability of software to maintain its level of performance under the given condition for a stated period of time. 

Reliability is a characteristic of software that refers to its ability to perform its intended functions correctly and consistently over time. Reliability is an important aspect of software quality, as it helps ensure that the software will work correctly and not fail unexpectedly.


Examples of factors that can affect the reliability of software include:

  • Bugs and errors in the code
  • Lack of testing and validation
  • Poorly designed algorithms and data structures
  • Inadequate error handling and recovery
  • Incompatibilities with other software or hardware


Efficiency:

It refers to the ability of the software to use system resources in the most effective and efficient manner. The software should make effective use of storage space and execute commands as per the desired timing requirements. 

Efficiency is a characteristic of software that refers to its ability to use resources such as memory, processing power, and network bandwidth in an optimal way. High efficiency means that a software program can perform its intended functions quickly and with minimal use of resources, while low efficiency means that a software program may be slow or consume excessive resources.

Examples of factors that can affect the efficiency of the software include:

  • Poorly designed algorithms and data structures
  • Inefficient use of memory and processing power
  • High network latency or bandwidth usage
  • Unnecessary processing or computation
  • Unoptimized code


Usability:

It refers to the extent to which the software can be used with ease, i.e., the amount of effort or time required to learn how to use the software. 


Maintainability:

It refers to the ease with which modifications can be made in a software system to extend its functionality, improve its performance, or correct errors. 


Portability:

A set of attributes that bears on the ability of software to be transferred from one environment to another with minimum changes. 


Software Development Life Cycle

Software Development Life Cycle (SDLC) is a well-defined, structured sequence of stages in software engineering to develop the intended software product.


Communication

This is the first step, where the user initiates the request for a desired software product. The user contacts the service provider, tries to negotiate the terms, and submits the request to the service-providing organization in writing.

Requirement Gathering

This step onwards the software development team works to carry on the project. The team holds discussions with various stakeholders from problem domain and tries to bring out as much information as possible on their requirements. The requirements are contemplated and segregated into user requirements, system requirements and functional requirements. The requirements are collected using a number of practices as given –

• studying the existing or obsolete system and software,

• conducting interviews of users and developers,

• referring to the database or

• collecting answers from the questionnaires.

Feasibility Study

After requirement gathering, the team comes up with a rough plan of the software process. At this step the team analyzes whether software can be designed to fulfill all requirements of the user, and whether there is any possibility of the software becoming no longer useful. It is also analyzed whether the project is financially, practically, and technologically feasible for the organization to take up. Many algorithms are available to help the developers conclude the feasibility of a software project.

System Analysis

At this step the developers decide a roadmap of their plan and try to bring up the best software model suitable for the project. System analysis includes understanding of software product limitations, learning system related problems or changes to be done in existing systems beforehand, identifying and addressing the impact of project on organization and personnel etc. The project team analyzes the scope of the project and plans the schedule and resources accordingly.

Software Design

The next step is to bring the whole knowledge of requirements and analysis to the desk and design the software product. 

The inputs from users and information gathered in requirement gathering phase are the inputs of this step. The output of this step comes in the form of two designs; logical design, and physical design. Engineers produce meta-data and data dictionaries, logical diagrams, data-flow diagrams, and in some cases pseudo codes.

Coding

This step is also known as the programming phase. The implementation of the software design starts with writing program code in a suitable programming language and developing error-free, efficient executable programs.

Testing

An estimate says that 50% of the whole software development effort should be spent on testing. Errors can range from minor annoyances to critical failures that force the software's removal. Software testing is done while coding by the developers, and thorough testing is conducted by testing experts at various levels of code, such as module testing, program testing, product testing, in-house testing, and testing the product at the user’s end. Early discovery of errors and their remedy is the key to reliable software.
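The module testing mentioned above can be sketched with Python's built-in `unittest` framework. The function under test, `apply_discount`, is a hypothetical example invented for illustration; each test case checks one behavior of the unit in isolation.

```python
import unittest

# A hypothetical module-level function under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically and report the overall verdict.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount))
print(result.wasSuccessful())  # True
```

Catching the invalid-percent case at module-testing time is exactly the "early discovery of errors" the text describes: the bug never reaches program or product testing.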

Integration

Software may need to be integrated with the libraries, databases, and other program(s). This stage of SDLC is involved in the integration of software with outer world entities.

Implementation

This means installing the software on user machines. At times, software needs post-installation configurations at user end. 

Software is tested for portability and adaptability and integration related issues are solved during implementation.

Operation and Maintenance

This phase confirms the software operation in terms of more efficiency and less errors. If required, the users are trained, or aided with the documentation on how to operate the software and how to keep the software operational. The software is maintained timely by updating the code according to the changes taking place in user end environment or technology. This phase may face challenges from hidden bugs and real-world unidentified problems.


Waterfall Model

The waterfall model is the simplest model of the software development paradigm. All the phases of the SDLC function one after another in a linear manner: only when the first phase is finished does the second phase start, and so on.

This model assumes that everything is carried out and takes place perfectly as planned in the previous stage, so no leftover issues will arise in the next phase. The model does not work smoothly if some issues remain from a previous step. The sequential nature of the model does not allow us to go back and undo or redo our actions.

Advantage:

This model is best suited when developers already have designed and developed similar software in the past and are aware of all its domains.

Drawback:

The sequential nature of model does not allow to go back and undo or redo the actions.


Iterative Model

This model leads the software development process in iterations. It projects the process of development in a cyclic manner, repeating every step after every cycle of the SDLC process.

The software is first developed on a very small scale, following all the steps under consideration. Then, on every subsequent iteration, more features and modules are designed, coded, tested, and added to the software. Every cycle produces software that is complete in itself and has more features and capabilities than the previous one.

After each iteration, the management team can work on risk management and prepare for the next iteration. Because a cycle includes a small portion of the whole software process, it is easier to manage the development process, but it consumes more resources.

Advantage:

Because a cycle includes small portion of whole software process, it is easier to manage the development process.

Drawback:

Since more features are added to the software on every iteration, it consumes more resources.


Spiral Model

The spiral model is a combination of the iterative model and one of the other SDLC models. It can be seen as choosing one SDLC model and combining it with a cyclic process (the iterative model).

This model considers risk, which often goes unnoticed by most other models. The model starts with determining the objectives and constraints of the software at the start of an iteration. The next phase is prototyping the software, which includes risk analysis. Then one standard SDLC model is used to build the software. In the fourth phase, the plan for the next iteration is prepared.

Advantage:

1. Additional functionality or changes can be done at the later stage

2. Cost estimation becomes easy

Drawback:

1. Not advisable for smaller projects, as it might cost more

2. Demands Risk assessment expertise



V – model

The V-model is a type of SDLC model where the process executes sequentially in a V-shape.

The major drawback of the waterfall model is that we move to the next stage only when the previous one is finished, and there is no chance to go back if something is found wrong in later stages. The V-model provides a means of testing the software at each stage in a reverse manner.

At every stage, test plans and test cases are created to verify and validate the product according to the requirements of that stage. For example, in the requirement gathering stage, the test team prepares all the test cases corresponding to the requirements. Later, when the product is developed and ready for testing, the test cases of this stage verify the software against those requirements.

This makes both verification and validation go in parallel. This model is also known as verification and validation model.

Advantage:

1. Each phase has specific deliverables.

2. Works well for small projects where requirements are easily understood.

3. Utility of the resources is high.

Drawback:

1. Very rigid, like the waterfall model.

2. Little flexibility; adjusting scope is difficult and expensive.



Thursday, January 23, 2025

Blockchain Technology

 


Blockchain is a decentralized distributed database (ledger) of immutable records accessed by various business applications over the network. Client applications of related businesses can read or append transaction records to the blockchain. Transaction records submitted to any node are validated and committed to the ledger database on all the nodes of blockchain network. Committed transactions are immutable because each block is linked with its previous block by means of hash and signature values. Protocols such as Gossip and Consensus ensure that the submitted transactions are transferred to all nodes and committed on all blockchain nodes consistently.
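The hash-linking described above is what makes committed transactions immutable: tampering with any block invalidates the `prev_hash` stored in every block after it. A minimal Python sketch of this idea (illustrative only; the block layout, helper names, and use of SHA-256 over JSON are assumptions, not a real blockchain implementation):

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (illustrative scheme)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    """Changing any earlier block breaks every later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
print(verify(chain))                              # True
chain[0]["transactions"][0] = "alice->bob:500"    # tamper with history
print(verify(chain))                              # False
```

Real networks add signatures, timestamps, and consensus on top, but the chain-of-hashes structure is the core of immutability.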

Blockchain ecosystem consists of blockchain client, blockchain node, blockchain network, transaction processor and consensus process.


Blockchain client is an application that creates transaction message in a prescribed format and submits it to blockchain node through web API. It may be any existing application, which posts transaction message to blockchain node. Clients are restricted using Public Key Infrastructure (PKI) technology at blockchain node level.

Blockchain node is a server node running blockchain services, responsible for receiving transactions and transmitting them to other blockchain nodes. Depending on the design, the node participates in the consensus process to commit the block of transaction data to the ledger database.

Blockchain network is a network of linked nodes used to read and write transactions into the ledger database. The topology is based on the nodes participating in the consensus process. Traditional systems are centralized, where all data and decision-making are concentrated on a single node or cluster of nodes. In decentralized systems, the data and decision-making are spread out among a large number of nodes. These nodes maintain copies of the shared database and decide among themselves which data is to be committed to the database using a consensus mechanism. Decentralized networks can be an interconnection of centralized or hub-and-spoke type networks. A distributed network is a special case of a decentralized system where every single node in the network maintains the shared database and participates in consensus to determine which data is to be committed to the database.

There are four types of blockchain networks:


  • public blockchains, 
  • private blockchains, 
  • consortium blockchains and 
  • hybrid blockchains.

Public blockchains

A public blockchain has absolutely no access restrictions. Anyone with an Internet connection can send transactions to it as well as become a validator (i.e., participate in the execution of a consensus protocol). Usually, such networks offer economic incentives for those who secure them and utilize some type of a proof-of-stake or proof-of-work algorithm.

Some of the largest, most known public blockchains are the bitcoin blockchain and the Ethereum blockchain.

Private blockchains

A private blockchain is permissioned. One cannot join it unless invited by the network administrators. Participant and validator access is restricted. To distinguish between open blockchains and other peer-to-peer decentralized database applications that are not open ad-hoc compute clusters, the terminology Distributed Ledger Technology (DLT) is normally used for private blockchains.

Hybrid blockchains

A hybrid blockchain has a combination of centralized and decentralized features. The exact workings of the chain can vary based on which portions of centralization and decentralization are used.

Sidechains

A sidechain is a designation for a blockchain ledger that runs in parallel to a primary blockchain. Entries from the primary blockchain (where said entries typically represent digital assets) can be linked to and from the sidechain; this allows the sidechain to otherwise operate independently of the primary blockchain (e.g., by using an alternate means of record keeping, alternate consensus algorithm, etc.).

Consortium blockchain

A consortium blockchain is a type of blockchain that combines elements of both public and private blockchains. In a consortium blockchain, a group of organizations come together to create and operate the blockchain, rather than a single entity. The consortium members jointly manage the blockchain network and are responsible for validating transactions. Consortium blockchains are permissioned, meaning that only certain individuals or organizations are allowed to participate in the network. This allows for greater control over who can access the blockchain and helps to ensure that sensitive information is kept confidential.


Bitcoin and blockchain are not the same. Blockchain provides the means to record and store bitcoin transactions, but blockchain has many uses beyond bitcoin. Bitcoin is only the first use case for blockchain.


Proof of work

  • A proof of work is a piece of data which is difficult (costly, time consuming) to produce but easy for others to verify and which satisfies certain requirements.
  • In order for a block to be accepted by network participants, miner must complete a proof of work which covers all of the data in the block.
  • The difficulty of this work is adjusted so as to limit the rate at which new blocks can be generated by the network to one every 10 minutes.
  • Due to the very low probability of successful generation, this makes it unpredictable which worker computer in the network will be able to generate the next block.
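The asymmetry described in these bullets (costly to produce, easy to verify) can be sketched in Python by searching for a nonce whose SHA-256 hash has a given number of leading zeros. The function names and the leading-zeros target scheme are illustrative assumptions; Bitcoin's actual target encoding and double-hashing differ:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Brute-force search for a nonce whose hash starts with
    `difficulty` leading zeros -- costly to produce."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification needs only a single hash -- easy for others."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("block: alice->bob:5")
print(verify("block: alice->bob:5", nonce))   # True
```

Raising `difficulty` by one multiplies the expected search work by 16 (one more hex digit), which is how a network tunes block generation toward a fixed rate such as one block every 10 minutes.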


Ethereum

  • Functions as a platform through which people can use tokens to create and run applications and create smart contracts
  • Ethereum allows people to connect directly through powerful decentralized super computer
  • Language- Solidity
  • Currency- Ether
  • Uses- Proof of Stake (PoS)


Smart Contracts

• A smart contract is an agreement or set of rules that govern a business transaction;

• It’s stored on the blockchain and is executed automatically as part of a transaction

• Their purpose is to provide security superior to traditional contract law while reducing the costs and delays associated with traditional contracts.
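The "executed automatically as part of a transaction" idea can be illustrated with a toy escrow contract in Python. This is only a sketch of the concept; real smart contracts run on-chain (e.g., written in Solidity on Ethereum), and the class and method names here are invented for illustration:

```python
class EscrowContract:
    """Toy smart contract: funds move only when the agreed rule is met."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        # Buyer's funds are locked in the contract up front.
        self.balance = {buyer: amount, seller: 0}

    def confirm_delivery(self):
        self.delivered = True
        self._execute()  # executes automatically once the condition holds

    def _execute(self):
        """The contract's rule: pay the seller only after delivery."""
        if self.delivered and self.balance[self.buyer] >= self.amount:
            self.balance[self.buyer] -= self.amount
            self.balance[self.seller] += self.amount

contract = EscrowContract("alice", "bob", 10)
contract.confirm_delivery()
print(contract.balance)   # {'alice': 0, 'bob': 10}
```

Because the rule is code stored on the blockchain, neither party can skip or alter the payout condition after the fact, which is where the claimed security advantage over traditional contracts comes from.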


Hyperledger 

  • Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. 
  • It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, Internet of Things, supply chains, manufacturing and Technology.


Consensus 

Consensus is a procedure to select a leader node, which decides whether a block of transactions is to be committed or rejected. Earlier blockchain systems used Proof of Work (PoW) for consensus: every participating node is given a mining task, and the node that completes the mining task first is elected as leader. The mining task is to find a nonce that, combined with the current block data, produces a hash value matching a certain pattern; nodes participating in mining require heavy computing resources. A more recent consensus protocol is PoET, "Proof of Elapsed Time": every node in the consensus process selects a random wait time and counts down, and the node whose timer reaches zero first is selected as leader.
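The PoET-style election can be simulated in a few lines of Python: each node draws a random wait, and the shortest wait wins, with no heavy computation. This is a simplified simulation (real PoET relies on trusted hardware to prove the wait was honest); the function name is an assumption:

```python
import random

def poet_leader(nodes):
    """Each node draws a random wait time; the node with the
    shortest wait 'reaches zero first' and becomes leader."""
    waits = {node: random.random() for node in nodes}
    return min(waits, key=waits.get)

leader = poet_leader(["node-A", "node-B", "node-C"])
print(leader)   # one of the three nodes, chosen fairly at random
```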

Transaction

Transaction is a unit of business data within a block. A block is a set of transactions bundled with signatures and the hash value of the previous block. The genesis block is the first block of the chain, created during installation and configuration.

Merkle Tree

Merkle Tree is a tree data structure in which each leaf node holds the hash of a transaction and each intermediate node holds a hash calculated from its immediate child nodes. In blockchain, a block consists of one or more transactions and its respective tree of hashes. In a distributed system, this tree is used to maintain data consistency among all participating nodes: comparing root hashes is enough to detect any difference in the underlying transactions.
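The construction described above can be sketched in Python: hash every transaction to form the leaves, then repeatedly hash pairs of children until one root remains. Duplicating the last hash on odd-sized levels is one common convention (used by Bitcoin), assumed here for simplicity:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Leaves hold transaction hashes; each parent hashes the
    concatenation of its two children, up to a single root."""
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:              # odd level: duplicate last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

root = merkle_root(["tx1", "tx2", "tx3", "tx4"])
print(len(root))   # 64 hex characters (one SHA-256 digest)
```

Changing any single transaction changes its leaf hash and therefore the root, so two nodes can confirm they hold identical transaction sets by exchanging only the 32-byte root.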

Ledger

Ledger/ Chain Database is a key-value database for a chain of serialized blocks. One block may contain one or more transactions.

State Database is a key-value database for storing transaction state and links of its related transactions.

AI chatbot

 An AI chatbot is a software application designed to simulate human conversation using artificial intelligence (AI). It can interact with us...