Jayson Mcleod, Author at Broad-Hurst In Mart
https://www.martinbroadhurst.com/author/jayson-mcleod/

Knapsack Problem Using Dynamic Programming in C: Optimizing
https://www.martinbroadhurst.com/knapsack-using-dynamic-programming-in-c/

When it comes to optimizing resource allocation and decision-making, the knapsack problem stands as a classic example. In this article, we explore the efficient application of dynamic programming to solve the knapsack problem using C. 

From understanding the fundamental concept to practical implementation, this guide delves into the intricacies of this problem-solving technique.

Can We Solve the Knapsack Problem Using Dynamic Programming?

The knapsack problem is a well-known optimization dilemma where you must select items from a set with given weights and values to maximize the total value while staying within a weight limit. 

Dynamic programming offers a robust solution to this problem by breaking it down into smaller subproblems, calculating their optimal values, and gradually building up the final solution. With dynamic programming, we can indeed solve the knapsack problem efficiently.

What Is an Example of a Knapsack Problem in Dynamic Programming?

Imagine you are embarking on a hiking expedition, and you have a limited backpack capacity. Your goal is to select items from a list of hiking gear with varying weights and values, maximizing the value you carry while not exceeding the backpack’s weight limit. 

This scenario represents a classic example of the knapsack problem. Dynamic programming helps you make the optimal gear selection, ensuring you get the most out of your hiking experience.

Discover how to streamline text data in Python with the guide “Python Chomp: Streamlining Text Data with rstrip()”.

How to Implement the Knapsack Problem in C

Implementing the knapsack problem in C using dynamic programming requires breaking down the problem into smaller subproblems and utilizing memoization to store intermediate results. By following these structured steps, you can efficiently find the optimal solution:

  • Step 1: Define the Problem

Understand the problem’s constraints, including the weight limit and the available items’ weights and values;

  • Step 2: Create a Table

Set up a table to store the results of subproblems. The table size is determined by the number of items and the weight capacity of the knapsack;

  • Step 3: Initialize the Table

Initialize the table with base values, typically zeros, as a starting point;

  • Step 4: Calculate the Optimal Solution

Iterate through the items, calculating and storing the optimal value for each subproblem based on the previous results;

  • Step 5: Determine the Final Solution

Once all subproblems are solved, the final solution lies in the last cell of the table. It represents the maximum value that can be achieved within the given weight limit.

By adhering to these steps and employing dynamic programming techniques, you can implement the knapsack problem efficiently in C, making informed decisions when resource allocation is crucial.

 Practical Implementation: Solving the Knapsack Problem in C

Now, let’s put our knowledge into action and solve a practical example of the knapsack problem using dynamic programming in C. Consider a scenario where you have a knapsack with a weight limit of 10 units, and you’re presented with a list of items, each with its weight and value. 

Your goal is to select the combination of items that maximizes the total value while staying within the weight limit.

Here’s a simplified representation of the items:

  • Item 1: Weight – 2 units, Value – $12;
  • Item 2: Weight – 1 unit, Value – $10;
  • Item 3: Weight – 3 units, Value – $20;
  • Item 4: Weight – 2 units, Value – $15.

Let’s use dynamic programming to find the optimal selection of items.

Step 1: Define the Problem

We have a knapsack with a weight limit of 10 units and four items with their respective weights and values.

Step 2: Create a Table

Set up a table to store the results of subproblems. In this case, the table has one row per item plus a row 0 for “no items” (5 rows) and one column for every capacity from 0 to 10 units (11 columns). Before any items are considered it looks like this:

```
   0  1  2  3  4  5  6  7  8  9 10
  ----------------------------------------------
0 | 0  0  0  0  0  0  0  0  0  0  0
1 | 0
2 | 0
3 | 0
4 | 0
```

Step 3: Initialize the Table

The first row and first column of the table are initialized to zeros as a starting point.

Step 4: Calculate the Optimal Solution

Iterate through the items and calculate the optimal value for each subproblem based on the previous results. The table is updated as follows:

```
   0  1  2  3  4  5  6  7  8  9 10
  ----------------------------------------------
0 | 0  0  0  0  0  0  0  0  0  0  0
1 | 0  0  12 12 12 12 12 12 12 12 12
2 | 0  10 12 22 22 22 22 22 22 22 22
3 | 0  10 12 22 30 32 42 42 42 42 42
4 | 0  10 15 25 30 37 45 47 57 57 57
```

Step 5: Determine the Final Solution

The final solution is found in the last cell of the table, representing the maximum value that can be achieved within the given weight limit. In this example all four items fit together (their combined weight is only 8 units, under the 10-unit limit), so the optimal selection includes every item, for a total value of $57.

By following these steps, you can efficiently apply dynamic programming to solve the knapsack problem in C, making informed decisions when resource allocation is paramount.
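
To make the walkthrough concrete, here is a small C program that fills in the same table for this example and prints the best achievable value. It is a minimal sketch with the item data hard-coded, not a general-purpose solver; run it and it reports $57 for the 10-unit knapsack.

```c
#include <stdio.h>

#define CAPACITY 10
#define N_ITEMS  4

int main(void)
{
    /* The example items: index 0 is unused so item i lives at position i */
    int weight[N_ITEMS + 1] = {0, 2, 1, 3, 2};
    int value[N_ITEMS + 1]  = {0, 12, 10, 20, 15};
    int table[N_ITEMS + 1][CAPACITY + 1];
    int i, w;

    /* Row 0 is the base case: with no items, every capacity is worth 0 */
    for (w = 0; w <= CAPACITY; w++) {
        table[0][w] = 0;
    }
    for (i = 1; i <= N_ITEMS; i++) {
        table[i][0] = 0;
        for (w = 1; w <= CAPACITY; w++) {
            /* Either skip item i, or take it if it fits and that is better */
            table[i][w] = table[i - 1][w];
            if (weight[i] <= w) {
                int with_item = value[i] + table[i - 1][w - weight[i]];
                if (with_item > table[i][w]) {
                    table[i][w] = with_item;
                }
            }
        }
    }

    printf("Maximum value for a %d-unit knapsack: $%d\n",
            CAPACITY, table[N_ITEMS][CAPACITY]);
    return 0;
}
```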

Conclusion

The knapsack problem, when solved using dynamic programming in C, showcases the practicality of this approach in resource allocation and decision-making. Whether you’re optimizing your backpack for a hiking adventure or tackling real-world resource allocation challenges, the structured process of dynamic programming empowers you to make informed choices and maximize your outcomes.

Cheapest Link Algorithm Example: Simplifying the TSP
https://www.martinbroadhurst.com/cheapest-link-algorithm-for-tsp-in-c/

The Traveling Salesman Problem (TSP) is a renowned optimization puzzle, challenging individuals to find the shortest route that visits each city in a set exactly once and returns to the starting city. Its applications span across various industries, from transportation and manufacturing to DNA sequencing. 

The fundamental goal is to minimize costs while identifying optimal routes, making TSP a critical problem to address.

Deciphering the Cheapest Link Algorithm

The Cheapest Link Algorithm provides a straightforward method for tackling the complexities of TSP. It operates in a few simple steps:

  • Sorting the Links: List every connection between cities in order of increasing cost;
  • Choosing the Cheapest Link: Add the cheapest remaining connection to the tour, provided it does not give any city a third connection and does not close a loop that leaves cities out;
  • Continued Selection: Keep adding the cheapest admissible connection in the same way;
  • Closing the Tour: When every city has two connections, the final link completes the circuit.

Learn the ins and outs of determining element visibility with our Selenium guide “Selenium Check If Element Is Visible: Mastering Web Testing”.

A Real-Life Example

To grasp the Cheapest Link Algorithm’s application, let’s consider an example involving five cities (A, B, C, D, and E) and their respective distances. Using this algorithm, we can determine the shortest route:

  • A to B: 5 units;
  • A to C: 7 units;
  • A to D: 6 units;
  • A to E: 10 units;
  • B to C: 8 units;
  • B to D: 9 units;
  • B to E: 6 units;
  • C to D: 6 units;
  • C to E: 5 units;
  • D to E: 8 units.

The Cheapest Link Algorithm proceeds as follows:

  • The cheapest link overall is A to B (5 units), so it becomes the first piece of the tour;
  • The next cheapest link is C to E (5 units), which is added as well;
  • A to D (6 units) is added next, giving City A its two connections;
  • B to E (6 units) follows, since it neither gives any city a third connection nor closes a loop prematurely;
  • Finally, C to D (6 units) closes the circuit through all five cities.

The tour’s route becomes: A → B → E → C → D → A, with a total distance of 28 units.

Unveiling the Algorithm’s Efficiency

A deeper dive into the solution showcases its effectiveness. Rather than growing the route outward from one starting city, the algorithm always commits to the cheapest remaining connection that can still legally belong to a single circuit, which keeps the overall path short. 

Let’s dissect our example:

  • Travel from A to B, covering a distance of 5 units;
  • Continue from B to E, adding 6 units;
  • Move from E to C, adding 5 units;
  • Proceed from C to D, adding 6 units;
  • Conclude the tour by returning from D to A, covering the final 6 units.

The tour’s path is A → B → E → C → D → A, with a total distance of 28 units. This exemplifies the Cheapest Link Algorithm’s proficiency in identifying a short route (here, in fact, the shortest) among multiple cities.
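
The same selection rule can be scripted. The short C program below applies it to the ten distances listed earlier, using a degree count and a tiny union-find to enforce the two constraints; it prints the five chosen links and the 28-unit total. It is a sketch of the idea for this one example rather than a general TSP solver.

```c
#include <stdio.h>

#define N 5   /* cities A..E */

typedef struct {
    int from;
    int to;
    int cost;
} link;

/* Naive union-find: follow parent pointers to the component's representative */
static int component(const int *parent, int city)
{
    while (parent[city] != city) {
        city = parent[city];
    }
    return city;
}

int main(void)
{
    /* All ten distances from the example */
    link links[] = {
        {0, 1, 5}, {0, 2, 7}, {0, 3, 6}, {0, 4, 10}, {1, 2, 8},
        {1, 3, 9}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},  {3, 4, 8}
    };
    const int n_links = (int)(sizeof(links) / sizeof(links[0]));
    int degree[N] = {0};
    int parent[N];
    int chosen = 0, total = 0;
    int i, j;

    for (i = 0; i < N; i++) {
        parent[i] = i;
    }
    /* Order the links by increasing cost */
    for (i = 0; i < n_links; i++) {
        for (j = i + 1; j < n_links; j++) {
            if (links[j].cost < links[i].cost) {
                link tmp = links[i];
                links[i] = links[j];
                links[j] = tmp;
            }
        }
    }
    /* Take the cheapest link that gives no city a third connection and
       only closes the circuit on the final, fifth link */
    for (i = 0; i < n_links && chosen < N; i++) {
        int a = links[i].from, b = links[i].to;
        if (degree[a] == 2 || degree[b] == 2) {
            continue;
        }
        if (component(parent, a) == component(parent, b) && chosen < N - 1) {
            continue;   /* would close a loop that leaves cities out */
        }
        parent[component(parent, a)] = component(parent, b);
        degree[a]++;
        degree[b]++;
        total += links[i].cost;
        chosen++;
        printf("Add %c-%c (%d units)\n", 'A' + a, 'A' + b, links[i].cost);
    }
    printf("Tour length: %d units\n", total);
    return 0;
}
```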

Applications Beyond the Puzzle

The Cheapest Link Algorithm’s practicality extends far beyond our example. It finds application in real-world scenarios such as optimizing delivery routes, circuit design, and DNA sequencing. Mastering its principles and applications empowers you to navigate complex optimization challenges in various domains.

Conclusion 

This comprehensive example unveils the Cheapest Link Algorithm’s potential for simplifying the Traveling Salesman Problem. Whether you’re streamlining delivery routes, crafting efficient circuits, or exploring genetic sequences, the Cheapest Link Algorithm stands as a reliable tool in your arsenal. Its straightforward approach and proven effectiveness make it a go-to solution for solving intricate optimization puzzles.

Selenium Check If Element Is Visible: A Comprehensive Guide
https://www.martinbroadhurst.com/how-to-check-if-an-element-is-visible-in-selenium/

In the realm of web testing and automation, ensuring the visibility of web elements is a crucial task. Selenium, the widely-used web automation tool, provides powerful capabilities to address this need. This article delves into methods and techniques for checking element visibility, equipping you with the tools to optimize your web testing endeavors.

Why Element Visibility Matters

Element visibility holds paramount importance in web automation for several reasons:

  • Enhanced User Experience: Visible elements directly impact user experience, ensuring seamless interactions and functionality;
  • Reliable Validation: Prior to interaction with specific elements like buttons, links, or form fields, it’s essential to validate their presence;
  • Dynamic Web Environments: On dynamic web pages, elements may appear or disappear based on user interactions. Ensuring visibility is pivotal to adapting to these dynamic changes.

 How to Verify Element Visibility

Selenium offers various methods to determine element visibility. Here are practical approaches.

Using the `.is_displayed()` Method

The most straightforward way to check element visibility is by employing the `.is_displayed()` method. It returns a Boolean value, `True` if the element is visible, and `False` if it’s not. Here’s a Python example:


```python
from selenium.webdriver.common.by import By

element = driver.find_element(By.ID, "elementID")
if element.is_displayed():
    print("The element is visible.")
else:
    print("The element is not visible.")
```

Handling Element Exceptions

In some cases, an element might not exist on the page, leading to a `NoSuchElementException`. To prevent this error, you can gracefully handle exceptions with `try` and `except` blocks:

```python
from selenium.common.exceptions import NoSuchElementException

try:
    element = driver.find_element(By.ID, "elementID")
    if element is not None and element.is_displayed():
        print("The element is visible.")
    else:
        print("The element is not visible.")
except NoSuchElementException:
    print("Element not found on the page.")
```

Discover the world of optimization with the Cheapest Link Algorithm in our article “Cheapest Link Algorithm Example: A Practical Approach to TSP”.

Real-World Scenarios

Let’s delve into two practical examples illustrating the significance of checking element visibility.

Example 1: Submitting a Form

Imagine a scenario where you need to click a “Submit” button on a registration form. Before clicking, it’s crucial to ensure the button is visible and enabled for user interaction.

```python
submit_button = driver.find_element(By.ID, "submitBtn")
if submit_button.is_displayed() and submit_button.is_enabled():
    submit_button.click()
else:
    print("The 'Submit' button is not visible or not enabled.")
```

 Example 2: Handling Dynamic Content

On dynamic web pages, elements may become visible following user actions, such as a mouse click. In such cases, verifying element visibility is essential:

```python
show_more_button = driver.find_element(By.ID, "showMoreBtn")
show_more_button.click()
new_element = driver.find_element(By.ID, "dynamicElement")
if new_element.is_displayed():
    print("The new element is visible.")
else:
    print("The new element is not visible.")
```

Conclusion

Checking element visibility is a fundamental aspect of web testing and automation with Selenium. It ensures a seamless user experience and enables adaptability to dynamic web environments. Mastering the techniques outlined in this guide empowers you to enhance the reliability and effectiveness of your web testing endeavors.

Greedy Algorithm Python: An Approach to Set Cover Problems
https://www.martinbroadhurst.com/greedy-set-cover-in-python/

In the realm of problem-solving and optimization, the greedy algorithm in Python proves to be a valuable tool. It offers a straightforward and efficient approach to address set cover problems. This article delves into the inner workings of the greedy algorithm, demonstrating how it simplifies decision-making processes and drives efficiency.

Understanding the Greedy Algorithm

The greedy algorithm is a widely used optimization technique that follows a simple principle: it makes the best possible choice at each step of a problem, without reconsidering previous choices. This algorithm is particularly useful in scenarios where you want to minimize the number of choices while ensuring that the selected choices cover a specific set comprehensively.

How Does the Greedy Algorithm in Python Work?

The greedy algorithm operates by iteratively selecting the most promising option that contributes to the overall solution. 

Here’s a simplified representation of how it works:

  • Start with an empty set that represents the solution;
  • Examine all available options and choose the one that seems the most beneficial;
  • Add the selected option to the solution set;
  • Repeat steps 2 and 3 until the problem is solved or a specific condition is met.

The greedy algorithm excels in scenarios where the problem has optimal substructure and the greedy choice property. These properties allow the algorithm to make locally optimal choices that, when combined, lead to a globally optimal solution.
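
To make the loop above concrete for set cover, here is a minimal sketch (written in C, like the other examples in this collection; a Python version would follow exactly the same steps). The universe of ten elements and the five candidate subsets, encoded as bitmasks, are invented purely for illustration.

```c
#include <stdio.h>

#define N_SUBSETS  5
#define N_ELEMENTS 10

/* Count the set bits in a mask */
static int count_bits(unsigned int mask)
{
    int count = 0;
    while (mask) {
        count += (int)(mask & 1u);
        mask >>= 1;
    }
    return count;
}

int main(void)
{
    /* Each subset is a bitmask over elements 0..9; these values are made up */
    const unsigned int universe = (1u << N_ELEMENTS) - 1;
    const unsigned int subsets[N_SUBSETS] = { 0x03F, 0x1E0, 0x204, 0x318, 0x0C1 };
    unsigned int covered = 0;
    int picked = 0;

    while (covered != universe) {
        int best = -1, best_gain = 0, s;
        /* Greedy choice: the subset covering the most still-uncovered elements */
        for (s = 0; s < N_SUBSETS; s++) {
            int gain = count_bits(subsets[s] & ~covered);
            if (gain > best_gain) {
                best_gain = gain;
                best = s;
            }
        }
        if (best < 0) {
            break;   /* the remaining elements cannot be covered */
        }
        covered |= subsets[best];
        picked++;
        printf("Pick subset %d (covers %d new elements)\n", best, best_gain);
    }
    printf("Universe covered with %d subsets\n", picked);
    return 0;
}
```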

Dive into the world of space optimization in the article “Bin Packing Algorithm: Optimizing Space Utilization”.

Applications of the Greedy Algorithm in Python

The Greedy Algorithm finds application in various fields, ranging from computer science and network design to logistics and resource allocation:

  • Network Design

In network design, the greedy algorithm helps identify the optimal placement of network components to minimize costs while maximizing efficiency;

  • Data Compression

The algorithm is instrumental in data compression, where it selects the most efficient encoding methods to reduce the size of files or data streams;

  • Scheduling and Task Assignment

Scheduling and task assignment benefit from the greedy algorithm by optimizing the allocation of resources to minimize time and cost;

  • Resource Allocation

Resource allocation in various industries, such as manufacturing, transportation, and finance, leverages the greedy algorithm to distribute resources efficiently.

Real-World Examples of the Greedy Algorithm in Action

Minimal Spanning Trees in Network Design

In the field of network design, one common application of the greedy algorithm is the construction of minimal spanning trees. A minimal spanning tree connects all nodes within a network with the minimum possible total edge weight. 

By selecting the edges with the lowest weights at each step, the greedy algorithm efficiently constructs a network structure that minimizes costs and ensures efficient data flow.

Huffman Coding for Data Compression

Data compression is essential in various applications, from image and video streaming to file storage. The greedy algorithm is used in Huffman coding, an efficient compression technique that assigns variable-length codes to different characters based on their frequencies in a dataset. 

By choosing codes that minimize the overall length of the encoded data, the greedy algorithm ensures effective compression and reduced storage or transmission requirements.

Task Scheduling for Efficient Workflows

Efficient task scheduling is crucial in optimizing workflows, whether it’s managing a factory’s production line or scheduling jobs on a server. The greedy algorithm helps allocate tasks based on their priorities, deadlines, or resource requirements, ensuring that the most crucial tasks are completed first while minimizing delays and resource underutilization.

 Portfolio Optimization in Finance

In the world of finance, investors often face the challenge of optimizing their investment portfolios. The greedy algorithm can be used to select the most promising set of investments from a larger pool, aiming to maximize returns while adhering to risk constraints. By selecting the most promising assets one at a time, the algorithm helps build a diversified and potentially profitable portfolio.

A Versatile Decision-Making Tool

The greedy algorithm in Python is a versatile decision-making tool that can be applied to a wide range of problems across different fields. 

Whether it’s designing efficient networks, compressing data, scheduling tasks, or optimizing investment portfolios, this algorithm simplifies complex decision-making processes and offers a valuable approach to problem-solving. Understanding its principles and applications can lead to more efficient and effective solutions in various domains.

Conclusion

The greedy algorithm in Python is a powerful tool for solving set cover problems and making decisions efficiently. It operates on the principle of making the best local choices, resulting in globally optimal solutions. 

Whether you are working on network design, data compression, scheduling, or resource allocation, understanding the greedy algorithm’s principles and applications can streamline your decision-making processes and lead to more efficient solutions.

Bin Packing Algorithm: Unleashing Efficiency Across Fields
https://www.martinbroadhurst.com/bin-packing/

When it comes to efficient space utilization, Bin Packing algorithms are a powerful ally. Whether you’re managing inventory, fine-tuning memory allocation, or streamlining logistical challenges, grasping the principles and applications of this algorithm is indispensable. 

In this article, we’ll explore the intricacies of Bin Packing algorithms, shedding light on their inner workings, practical uses, and their transformative impact across industries.

Demystifying the Bin Packing Algorithm

At its core, the Bin Packing Algorithm is a classic optimization technique aimed at packing objects of varying sizes into as few fixed-capacity containers, or “bins”, as possible, minimizing wasted space. This versatile algorithm finds applications in scenarios where space optimization is paramount:

  •  Inventory Efficiency

Imagine the importance of packing products into storage spaces efficiently to reduce storage costs. The Bin Packing Algorithm excels at solving this inventory management challenge;

  • Memory Optimization

In the realm of computer programming, efficient memory allocation is a game-changer. This algorithm minimizes wasted memory, enhancing software performance;

  • Resource Allocation

The allocation of tasks to servers or machines in a resource-efficient manner is a fundamental concern in modern computing. Bin Packing Algorithms streamline this allocation process;

  •  Logistics

In the world of logistics and transportation, loading goods into trucks or containers can become a complex puzzle. Bin Packing algorithms simplify this puzzle, saving transportation costs.

 Unleashing the Power of Bin Packing

In numerous real-world scenarios, efficient space utilization is not just a luxury—it’s a necessity. Squandering space translates to higher costs and inefficiencies. The Bin Packing Algorithm answers this call by finding the most effective way to pack objects into containers.

Explore the power of the Greedy Algorithm in Python in the post “Greedy Algorithm Python: Solving Set Cover Problems”.

The Mechanism

The Bin Packing Algorithm operates on a simple principle: fill each bin to capacity, minimizing the number of bins needed to store all items. Here’s a simplified breakdown of its operation, followed by a short code sketch:

  • Start with an empty bin;
  • Add items one by one, considering the available space in the bin;
  • Continuously optimize the packing, minimizing empty space;
  • Repeat the process as needed by selecting a new bin for any remaining items.
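
The sketch below applies the first-fit rule, one common way of realising these steps: each item goes into the first bin that still has room, and a new bin is opened only when none of the existing bins can take it. The item sizes and the bin capacity are made-up values for illustration.

```c
#include <stdio.h>

#define CAPACITY 10
#define MAX_BINS 16

int main(void)
{
    /* Example item sizes (arbitrary) */
    int items[] = {6, 3, 5, 8, 2, 4, 7, 1};
    const int n = (int)(sizeof(items) / sizeof(items[0]));
    int used[MAX_BINS] = {0};   /* space already used in each open bin */
    int bins = 0;
    int i, b;

    for (i = 0; i < n; i++) {
        /* First-fit: scan the open bins for the first one with enough room */
        for (b = 0; b < bins; b++) {
            if (used[b] + items[i] <= CAPACITY) {
                break;
            }
        }
        if (b == bins) {
            bins++;             /* no bin had room, so open a new one */
        }
        used[b] += items[i];
    }

    printf("Packed %d items into %d bins of capacity %d\n", n, bins, CAPACITY);
    for (b = 0; b < bins; b++) {
        printf("Bin %d holds %d units\n", b + 1, used[b]);
    }
    return 0;
}
```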

The Far-Reaching Impact

Bin Packing algorithms serve as invaluable tools with applications across diverse industries. From efficient warehousing and streamlined manufacturing to optimized software development and enhanced logistics, these algorithms lead to cost savings, reduced waste, and heightened operational efficiency.

Expanding Horizons: Bin Packing in Action

In the sphere of scheduling and time management, the Bin Packing algorithm is a game-changer. It optimizes daily tasks by determining the most efficient way to schedule activities within fixed time slots, maximizing productivity and making the most of available time.

The Cutting Stock Challenge

Manufacturing companies grappling with the cutting stock problem turn to Bin Packing algorithms for solutions. These algorithms optimize the cutting of raw materials, reducing waste, and in turn, production costs.

Digital Image Packing

Digital media relies on the seamless organization of images. Bin Packing Algorithms come to the rescue, efficiently packing images onto screens, ensuring that content is aesthetically presented and organized.

Cloud Computing Load Balancing

Cloud computing providers utilize Bin Packing algorithms to distribute workloads efficiently across server clusters. This approach minimizes resource underutilization and guarantees high performance, resulting in cost-effective and scalable services for their clients.

 A Universal Tool for Efficiency

The applications of Bin Packing algorithms transcend industry boundaries. Whether you’re managing your time, optimizing manufacturing processes, beautifying digital media, or enhancing cloud computing services, understanding the principles and techniques of these algorithms is a valuable asset. 

Bin Packing algorithms empower you to optimize space utilization and resource allocation effectively, fostering efficiency and minimizing waste in your field.

Subset-Sum Problem with Backtracking in C
https://www.martinbroadhurst.com/subset-sum-with-backtracking-in-c/

The subset-sum problem is a classic computational challenge in computer science. The task is to find a subset of a set of integers that sums up to a specified value. Even though determining if such a subset exists is classified as an NP-complete problem, there are various algorithms to approach it, including backtracking.

This article presents a solution for the subset-sum problem using backtracking in the C programming language. Specifically, it will find all possible subsets from a set of integers that sum up to the target value.

#include <stdio.h>
#include <stdlib.h>
 
typedef void(*subset_sumfn)(const unsigned int *, size_t);
 
static unsigned int promising(int i, size_t len, unsigned int weight, unsigned int total,
        unsigned int target, const unsigned int *weights)
{
    return (weight + total >= target) && (weight == target || weight + weights[i + 1] <= target);
}
 
static unsigned int sum(const unsigned int *weights, size_t len)
{
    unsigned int total = 0;
    unsigned int i;
    for (i = 0; i < len; i++) {
        total += weights[i];
    }
    return total;
}
 
static void subset_sum_recursive(const unsigned int *weights, size_t len, unsigned int target,
        int i, unsigned int weight, unsigned int total, unsigned int *include, subset_sumfn fun)
{
    if (promising(i, len, weight, total, target, weights)) {
        if (weight == target) {
            fun(include, i + 1);
        }
        else if (i < (int)len - 1){
            include[i + 1] = 1;
            subset_sum_recursive(weights, len, target, i + 1, weight + weights[i + 1],
                   total - weights[i + 1], include, fun);
            include[i + 1] = 0;
            subset_sum_recursive(weights, len, target, i + 1, weight,
                    total - weights[i + 1], include, fun);
        }
    }
}
 
void subset_sum(const unsigned int *weights, size_t len, unsigned int target, subset_sumfn fun)
{
    const unsigned int total = sum(weights, len);
    unsigned int *include = calloc(len, sizeof(unsigned int));
    if (include == NULL) {
        return;
    }
    subset_sum_recursive(weights, len, target, -1, 0, total, include, fun);
    free(include);
}
 
/* Prints one solution as an inclusion vector (1 = element taken), matching
   the sample output shown below */
static void print_vector(const unsigned int *include, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++) {
        printf("%u ", include[i]);
    }
    putchar('\n');
}
 
int main(void)
{
    unsigned int weights[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    const unsigned int len = sizeof(weights) / sizeof(unsigned int);
    const unsigned int target = 7;
    subset_sum(weights, len, target, print_vector);
    return 0;
}

Sample Output:

The result is represented as binary strings that indicate which elements from the initial set belong to the subset. For instance, the initial binary string corresponds to 1 + 2 + 4, resulting in a sum of 7.

The example provided in the code yields the following results:

1 1 0 1
1 0 0 0 0 1
0 1 0 0 1
0 0 1 1
0 0 0 0 0 0 1

Conclusion

The subset-sum problem, though computationally complex, can be tackled using algorithms like backtracking. The provided C code offers a comprehensive approach to finding all subsets that meet a given target sum. On a related note, for those interested in web automation, another article dives into how to execute JavaScript in Python using Selenium.

JavaScript in Selenium: Tips, Tricks, and Best Practices
https://www.martinbroadhurst.com/how-to-execute-javascript-in-selenium/

Selenium is a powerful tool used primarily for web application testing. It allows testers to write scripts in several programming languages such as Java, Python, and C#. One of the critical capabilities of Selenium is the execution of JavaScript code in the context of the currently selected frame or window. This is especially useful for interacting with web elements in ways that normal Selenium methods might not allow.

Why Execute JavaScript through Selenium?

Before delving into the ‘how’, it’s vital to understand the ‘why’. There are several reasons:

  • Direct Element Interaction: Sometimes, web elements may not be directly accessible or interactable using standard Selenium methods. JavaScript execution provides an alternative path;
  • Page Manipulation: JS can dynamically change webpage content, making it useful for testing dynamic behaviors or setting up specific test conditions;
  • Data Extraction: Extracting information that might not be readily available through typical Selenium methods becomes possible.

Benefits and Precautions

  • Flexibility: Directly executing JS provides testers with unparalleled flexibility in testing scenarios that would be otherwise challenging with standard Selenium methods;
  • Speed: Sometimes, using JS can be faster than traditional Selenium methods, especially when dealing with complex DOM manipulations or interactions;
  • Caution: Relying too heavily on JavaScript executions can make your tests brittle, as they may bypass typical user interactions. Always ensure your tests reflect real-world scenarios as closely as possible.

Let’s delve into how you can execute JavaScript within Selenium in various languages:

Java

In the Java programming realm, Selenium offers the WebDriver tool, enabling the execution of JavaScript through the `JavascriptExecutor` interface. By casting your WebDriver instance to a `JavascriptExecutor`, you can utilize the `executeScript()` method. This method executes the JavaScript you pass to it and returns an `Object`.

Here’s an example of how you can fetch the title of a web page using JS and Selenium in Java:

String title = ((JavascriptExecutor) driver).executeScript("return document.title;").toString();

Python

Python’s Selenium bindings simplify the process even more. The WebDriver in Python already comes with the `execute_script()` method, making it straightforward to run JavaScript commands.

Here’s how you can get the title of a web page using JS and Selenium in Python:

title = driver.execute_script("return document.title;")

C#

For those using C#, the WebDriver can be cast to an `IJavaScriptExecutor`. This interface provides the `ExecuteScript()` method, which, like in Java, allows you to execute JavaScript and returns an `Object`.

Here’s an example in C#:

String title = ((IJavaScriptExecutor) driver).ExecuteScript("return document.title;").ToString();

Conclusion

Executing JavaScript in your Selenium scripts can open a myriad of opportunities, from manipulating web elements to extracting information that might not be readily accessible using regular Selenium methods. Whichever programming language you use, Selenium offers a straightforward method to run your JavaScript seamlessly. For those keen on exploring more in-depth topics in programming, there’s another article discussing the implementation of a Spanning Forest in C.

C Programming Insights and Techniques
https://www.martinbroadhurst.com/spanning-forest-of-a-graph-in-c/

In the fascinating realm of graph theory, one often encounters the concept of a spanning tree—a subgraph that encompasses all the vertices of the original graph while preserving connectivity. However, what happens when the graph is not fully connected, and we can’t form a single spanning tree that covers all vertices? This is where the concept of a spanning forest comes into play.

A spanning forest is a collection of spanning trees, each pertaining to a connected component within the graph. It is a vital construct, offering unique insights into the structure of non-connected graphs. In this article, we will delve into the notion of spanning forests, their significance, and the algorithmic approach to finding them.

The Need for Spanning Forests

Graphs come in various shapes and sizes, and not all of them are guaranteed to be connected. In cases where a graph isn’t connected, attempting to find a single spanning tree that encompasses all vertices becomes an impossibility. Instead, we turn to the concept of a spanning forest.

A spanning forest is essentially a set of spanning trees, with each tree representing a connected component within the original graph. Unlike traditional spanning trees, which are defined in terms of vertices, spanning forests focus on edges. Any vertices that are entirely isolated in the original graph will not appear in the spanning forest.

Non-Connected Graphs with Spanning Forests


In the realm of graph theory, the absence of connectivity in a graph poses a challenge when attempting to find a spanning tree that covers all its vertices. However, a solution exists in the form of a spanning forest, which consists of multiple spanning trees, one for each connected component within the graph. Unlike traditional connected components, spanning forest components are represented by sets of edges, not vertices. Any isolated vertices within the graph remain absent in the resulting spanning forest.

Constructing a spanning forest is accomplished through the systematic use of the depth-first search algorithm. This process entails repeatedly initiating the algorithm from each unvisited vertex. As this traversal continues, the spanning forest gradually takes shape. Once all vertices associated with edges have been visited, the spanning forest stands complete.

"Connected Graph" with interconnected nodes, and "Spanning Trees" below

For those interested in implementing this concept, below is a concise C-based representation. The `spanning_forest()` function accepts a graph in edge list format, along with the number of edges (`size`) and vertices (`order`). Additionally, it accommodates a callback function that is invoked with each newly discovered spanning tree. The implementation efficiently employs the `spanning_tree_recursive()` function from the spanning tree algorithm to uncover each individual spanning tree.

#include <stdlib.h>
 
typedef struct {
    unsigned int first;
    unsigned int second;
} edge;
 
typedef void (*treefn)(const unsigned int *, size_t, const edge *, size_t);
 
void spanning_tree_recursive(const edge *edges, unsigned int size,
        unsigned int order, unsigned int *visited, unsigned int *tree,
        unsigned int vertex, int edge, unsigned int *len)
{
    unsigned int e;
    visited[vertex] = 1;
    if (edge != -1) {
        tree[(*len)++] = edge;
    }
    for (e = 0; e < size; e++) {
        if (edges[e].first == vertex || edges[e].second == vertex) {
            unsigned int neighbour = edges[e].first == vertex ?
                edges[e].second : edges[e].first;
            if (!visited[neighbour]) {
                spanning_tree_recursive(edges, size, order, visited, tree,
                        neighbour, e, len);
            }
        }
    }
}
 
void spanning_forest(const edge *edges, unsigned int size, unsigned int order,
        treefn fun)
{
    unsigned int *visited = calloc(order, sizeof(unsigned int));
    unsigned int *tree = malloc((order - 1) * sizeof(unsigned int));
    unsigned int len, v;
    if (visited == NULL || tree == NULL) {
        free(visited);
        free(tree);
        return;
    }
    for (v = 0; v < order; v++) {
        if (!visited[v]) {
            len = 0;
            spanning_tree_recursive(edges, size, order, visited, tree, v, -1, &len);
            if (len > 0) {
                fun(tree, len, edges, size);
            }
        }
    }
    free(visited);
    free(tree);
}

Here’s an illustrative program that finds the spanning forest of an example graph made up of a square (vertices 0–3), a triangle (vertices 4–6), and a single edge joining vertices 7 and 8.

#include <stdio.h>
#include <stdlib.h>
 
/* Add an edge connecting two vertices */
void edge_connect(edge *edges, unsigned int first, unsigned int second,
        unsigned int *pos)
{
    edges[*pos].first = first;
    edges[*pos].second = second;
    (*pos)++;
}
 
void print(const unsigned int *tree, size_t tree_size, const edge *edges, size_t size)
{
    unsigned int e;
    for (e = 0; e < tree_size; e++) {
        printf("(%u, %u) ", edges[tree[e]].first, edges[tree[e]].second);
    }
    putchar('\n');
}
 
int main(void)
{
    const unsigned int order = 9; /* Vertices */
    const unsigned int size = 8; /* Edges */
    edge *edges;
     
    edges = malloc(size * sizeof(edge));
    if (edges == NULL) {
        return 1;
    }
  
    /* Square */
    edges[0].first = 0;
    edges[0].second = 1;
    edges[1].first = 1;
    edges[1].second = 2;
    edges[2].first = 2;
    edges[2].second = 3;
    edges[3].first = 3;
    edges[3].second = 0;
  
    /* Triangle */
    edges[4].first = 4;
    edges[4].second = 5;
    edges[5].first = 5;
    edges[5].second = 6;
    edges[6].first = 6;
    edges[6].second = 4;
  
    /* Line */
    edges[7].first = 7;
    edges[7].second = 8;
 
    spanning_forest(edges, size, order, print);
 
    free(edges);
    return 0;
}

The output:

(0, 1) (1, 2) (2, 3)
(4, 5) (5, 6)
(7, 8)

Conclusion

Spanning forests are crucial for understanding non-connected graph structures, offering insight into each connected component when a single spanning tree is unfeasible due to lack of connectivity. Using the efficient depth-first search algorithm, we construct spanning forests, revealing the core of each component within the original graph. These forests find applications in network analysis, algorithm design, and other domains, providing a versatile tool to navigate the intricate relationships within graphs, making them indispensable for graph theory enthusiasts and problem solvers. For those interested in delving deeper into algorithmic constructs, our article on hashtables in C offers a comprehensive exploration of data structures, complementing the understanding gained from spanning forests and graph theory.

Hash Table in C using Singly-Linked Lists
https://www.martinbroadhurst.com/hash-table/

Hash tables are a type of data structure that provides a mechanism to store and retrieve values based on a key. This is achieved using an array of lists, where each list is known as a ‘bucket’. In the event of collisions, when two different keys hash to the same bucket, we need to have a mechanism to distinguish between the two. One such mechanism is to use chaining, where each bucket points to a list of all entries that hash to the same bucket. This article presents a hash table implementation in C using singly-linked lists to manage such collisions.

Structure of the Hash Table

Here are the core components of our hash table:

  • hashnode: This represents a node in the singly-linked list;
  • hashtable: This is the main hash table structure. It contains an array of hashnode pointers (the ‘buckets’), along with metadata such as the total number of entries, the size of the table, and function pointers for hashing and comparison operations.

Key Functions

  • hashtable_create: Initializes a new hash table;
  • hashtable_add: Adds a new entry to the hash table;
  • hashtable_find: Finds an entry in the hash table based on its key;
  • hashtable_remove: Removes an entry from the hash table based on its key;
  • hashtable_empty: Empties the hash table, removing all entries;
  • hashtable_delete: Deallocates the hash table and all its contents;
  • hashtable_get_load_factor: Calculates the load factor of the hash table, which is a measure of how full the table is;
  • hashtable_for_each: Iterates over each entry in the hash table;
  • hashtable_set_hashfn: Sets a new hash function for the table.

Hashing Mechanism

The hash function used in this implementation is the sdbm hash function, which is a simple and effective string hashing function. However, the design allows for a custom hash function to be set, catering to different requirements.


Example Program

Here’s a sample program:

#include <stdio.h>
#include <string.h>
 
#include <hashtable.h>
 
int main(void)
{
    hashtable * table;
    const char * result;
    unsigned int e;
    char * elements[] = {"A", "B", "C", "D", "E", "F"};
    const unsigned int n = sizeof(elements) / sizeof(const char*);
 
    table = hashtable_create(7, (hashtable_cmpfn)strcmp);
    for (e = 0; e < n; e++) {
        hashtable_add(table, elements[e]);
    }
    hashtable_for_each(table, (hashtable_forfn)puts);
    for (e = 0; e < n; e++) {
        result = hashtable_find(table, elements[e]);
        if (result) {
            printf("Found: %s\n", result);
        }
        else {
            printf("Couldn't find %s\n", elements[e]);
        }
    }
    printf("The load factor is %.2f\n", hashtable_get_load_factor(table));
    for (e = 0; e < n; e++) {
        result = hashtable_remove(table, elements[e]);
        if (result) {
            printf("Removed: %s\n", result);
        }
        else {
            printf("Couldn't remove %s\n", elements[e]);
        }
    }
    hashtable_delete(table);
 
    return 0;
}
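
This program includes a hashtable.h header that is not reproduced in the post itself. A minimal header consistent with how the types and function pointers are used in the code might look like the following; the exact field names and ordering are inferred, so treat it as a sketch rather than the original file.

#ifndef HASHTABLE_H
#define HASHTABLE_H
 
#include <stddef.h>
 
typedef struct hashnode {
    void *data;
    struct hashnode *next;
} hashnode;
 
typedef unsigned long (*hashtable_hashfn)(const void *);
typedef int (*hashtable_cmpfn)(const void *, const void *);
typedef void (*hashtable_forfn)(void *);
typedef void (*hashtable_forfn2)(void *, void *);
 
typedef struct {
    hashnode **buckets;         /* array of singly-linked overflow chains */
    size_t size;                /* number of buckets */
    unsigned int count;         /* number of stored entries */
    hashtable_hashfn hash;
    hashtable_cmpfn compare;
} hashtable;
 
hashnode *hashnode_create(void *data);
void hashnode_delete(hashnode *node);
hashtable *hashtable_create(size_t size, hashtable_cmpfn compare);
void hashtable_empty(hashtable *table);
void hashtable_delete(hashtable *table);
void *hashtable_add(hashtable *table, void *data);
void *hashtable_find(const hashtable *table, const void *data);
void *hashtable_remove(hashtable *table, const void *data);
float hashtable_get_load_factor(const hashtable *table);
unsigned int hashtable_get_count(const hashtable *table);
unsigned int hashtable_find_count(const hashtable *table);
void hashtable_for_each(const hashtable *table, hashtable_forfn fun);
void hashtable_for_each2(const hashtable *table, hashtable_forfn2 fun, void *data);
void hashtable_set_hashfn(hashtable *table, hashtable_hashfn hash);
 
#endif /* HASHTABLE_H */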

The implementation file:

#include <stdlib.h>
 
#include <hashtable.h>
 
hashnode * hashnode_create(void * data)
{
    hashnode * node = malloc(sizeof(hashnode));
    if (node) {
        node->data = data;
        node->next = NULL;
    }
    return node;
}
 
void hashnode_delete(hashnode * node)
{
    free(node);
}
 
static unsigned long sdbm(const char *str)
{
    unsigned long hash = 0;
    int c;
 
    while ((c = *str++))
        hash = c + (hash << 6) + (hash << 16) - hash;
 
    return hash;
}
 
hashtable * hashtable_create(size_t size, hashtable_cmpfn compare)
{
    hashtable * table = malloc(sizeof (hashtable));
    if (table) {
        table->size = size;
        table->hash = (hashtable_hashfn)sdbm;
        table->compare = compare;
        table->count = 0;
        table->buckets = malloc(size * sizeof(hashnode *));
        if (table->buckets) {
            unsigned int b;
            for (b = 0; b < size; b++) {
                table->buckets[b] = NULL;
            }
        }
        else {
            free(table);
            table = NULL;
        }
    }
    return table;
}
 
void hashtable_empty(hashtable * table)
{
    unsigned int i;
    hashnode * temp;
    for (i = 0; i < table->size; i++) {
        hashnode * current = table->buckets[i];
        while (current != NULL) {
            temp = current->next;
            hashnode_delete(current);
            current = temp;
        }
        table->buckets[i] = NULL;
    }
    table->count = 0;
}
 
void hashtable_delete(hashtable * table)
{
    if (table) {
        hashtable_empty(table);
        free(table->buckets);
        free(table);
    }
}
 
void * hashtable_add(hashtable * table, void * data)
{
    const unsigned int bucket = table->hash(data) % table->size;
    void * found = NULL;
 
    if (table->buckets[bucket] == NULL) {
        /* An empty bucket */
        table->buckets[bucket] = hashnode_create(data);
    }
    else {
        unsigned int added = 0;
        hashnode * current, * previous = NULL;
        for (current = table->buckets[bucket]; current != NULL && !found && !added; current = current->next) {
            const int result = table->compare(current->data, data);
            if (result == 0) {
                /* Changing an existing entry */
                found = current->data;
                current->data = data;
            }
            else if (result > 0) {
                /* Add before current */
                hashnode * node = hashnode_create(data);
                node->next = current;
                if (previous == NULL) {
                    /* Adding at the front */
                    table->buckets[bucket] = node;
                }
                else {
                    previous->next = node;
                }
                added = 1;
            }
            previous = current;
        }
        if (!found && !added && current == NULL) {
            /* Adding at the end */
            previous->next = hashnode_create(data);
        }
    }
    if (found == NULL) {
        table->count++;
    }
 
    return found;
}
 
void * hashtable_find(const hashtable * table, const void * data)
{
    hashnode * current;
    const unsigned int bucket = table->hash(data) % table->size;
    void * found = NULL;
    unsigned int passed = 0;
    for (current = table->buckets[bucket]; current != NULL && !found && !passed; current = current->next) {
        const int result = table->compare(current->data, data);
        if (result == 0) {
            found = current->data;
        }
        else if (result > 0) {
            passed = 1;
        }
    }
    return found;
}
 
void * hashtable_remove(hashtable * table, const void * data)
{
    hashnode * current, * previous = NULL;
    const unsigned int bucket = table->hash(data) % table->size;
    void * found = NULL;
    unsigned int passed = 0;
 
    current = table->buckets[bucket];
    while (current != NULL && !found && !passed) {
        const int result = table->compare(current->data, data);
        if (result == 0) {
            found = current->data;
            if (previous == NULL) {
                /* Removing the first entry */
                table->buckets[bucket] = current->next;
            }
            else {
                previous->next = current->next;
            }
            hashnode_delete(current);
            table->count--;
        }
        else if (result > 0) {
            passed = 1;
        }
        else {
            previous = current;
            current = current->next;
        }
    }
    return found;
}
 
 
float hashtable_get_load_factor(const hashtable * table)
{
    unsigned int touched = 0;
    float loadfactor;
    unsigned int b;
    for (b = 0; b < table->size; b++) {
        if (table->buckets[b] != NULL) {
            touched++;
        }
    }
    loadfactor = (float)touched / (float)table->size;
    return loadfactor;
}
 
unsigned int hashtable_get_count(const hashtable * table)
{
    return table->count;
}
 
unsigned int hashtable_find_count(const hashtable *table)
{
    unsigned int b;
    const hashnode *node;
    unsigned int count = 0;
    for (b = 0; b < table->size; b++) {
        for (node = table->buckets[b]; node != NULL; node = node->next) {
            count++;
        }
    }
    return count;
}
 
void hashtable_for_each(const hashtable * table, hashtable_forfn fun)
{
    unsigned int b;
 
    for (b = 0; b < table->size; b++) {
        const hashnode *node;
        for (node = table->buckets[b]; node != NULL; node = node->next) {
            fun(node->data);
        }
    }
}
 
 
void hashtable_for_each2(const hashtable * table, hashtable_forfn2 fun, void *data)
{
    unsigned int b;
 
    for (b = 0; b < table->size; b++) {
        const hashnode *node;
        for (node = table->buckets[b]; node != NULL; node = node->next) {
            fun(node->data, data);
        }
    }
}
 
void hashtable_set_hashfn(hashtable * table, hashtable_hashfn hash)
{
    table->hash = hash;
}

Conclusion

Hash tables are an indispensable data structure with a wide variety of applications, from database indexing to caching. This implementation, using singly-linked lists for overflow chains, provides an effective method to handle collisions and offers flexibility with custom hash functions. The functions provided cater to most of the essential operations one might need to perform on a hash table. For those who wish to further explore the intricacies of data structures, our article on the graph data structure in C provides an in-depth look into another fundamental area of computational design and its applications.

An In-Depth Look at Graphs in C Programming
https://www.martinbroadhurst.com/graph-data-structures/

In this article, we delve into the foundational concept of graph data structures, exploring its various facets and nuances. We will also discuss diverse methods for representing graphs, with a specific focus on their implementation in the C programming language. This is an integral part of understanding the broader realm of data structures within C.

Conceptual Overview

Elements of Graphs: Nodes and Connections

A graph is fundamentally composed of elements termed nodes (also called vertices), and the links that join them, known as connections (or edges).

Orientation in Graphs: Directed vs. Undirected

Graphs can exhibit a specific orientation, known as directionality. If the connections within a graph have a defined direction, it is termed a directed graph, or digraph, and its connections are called directed edges or arcs. In the context of this discussion, our primary focus will be on directed graphs, meaning whenever we discuss a connection, we are referencing a directed one. However, it’s essential to note that undirected graphs can effortlessly be portrayed as directed graphs by establishing connections between linked nodes in both directions.

It’s worth noting that representations can often be streamlined if tailored exclusively for undirected graphs, and we’ll touch upon this perspective intermittently.

Adjacency and Neighbours in Graphs

When a node is the end-point of a connection, it is called a neighbour of, or adjacent to, the node at the connection’s starting-point.

Illustrative Case Study

Consider a graph comprising 5 nodes (A through E) interconnected by 7 links. The connections between A and D and between B and C run in both directions, forming reciprocal pairs of links.

Theoretical Framework

Delving into a more structured delineation, a graph can be characterized as an ordered pair, G = <V, L>, where ‘V’ is the set of nodes and ‘L’, the collection of linkages, is a set of ordered pairs of nodes.

To elucidate, the following formulations represent the graph just described in set notation:

V = {A, B, C, D, E}
L = {<A, B>, <A, D>, <B, C>, <C, B>, <D, A>, <D, C>, <D, E>}

Graph Operations and Methods Overview

For a comprehensive implementation of a graph structure, it’s essential to have a foundational suite of operations to construct, alter, and traverse through the graph, listing nodal points, linkages, and adjacent nodes.

Below are the operations offered by each representation. Specifically, the details presented pertain to the first representation, termed ‘graph1’:

graph1 *graph1_create(void);
Create an empty graph

void graph1_delete(graph1 *graph);
Delete a graph

vertex *graph1_add(graph1 *graph, const char *name, void *data);
Add a vertex to the graph with a name, and optionally some data

vertex *graph1_get_vertex(const graph1 *graph, const char *name);
Retrieve a vertex by name

void *graph1_remove(graph1 *graph, vertex *vertex);
Remove a vertex

void graph1_add_edge(graph1 *graph, vertex *vertex1, vertex *vertex2);
Create a directed edge between vertex1 and vertex2

void graph1_remove_edge(graph1 *graph, vertex *vertex1, vertex *vertex2);
Remove the directed edge from vertex1 to vertex2

unsigned int graph1_get_adjacent(const graph1 *graph, const vertex *vertex1, const vertex *vertex2);
Determine if there is an edge from vertex1 to vertex2

iterator *graph1_get_neighbours(const graph1 *graph, const vertex *vertex);
Get the neighbours of a vertex

iterator *graph1_get_edges(const graph1 *graph);
Get all of the edges in the graph

iterator *graph1_get_vertices(const graph1 *graph);
Get all of the vertices in the graph

unsigned int graph1_get_neighbour_count(const graph1 *graph, const vertex *vertex);
Get the count of neighbours of a vertex

unsigned int graph1_get_edge_count(const graph1 *graph);
Get the count of edges in the graph

unsigned int graph1_get_vertex_count(const graph1 *graph);
Get the count of vertices in the graph

Vertex and Edge Representation

Vertices in Graph Representations

In all graph representations, a vertex is defined as follows:

typedef struct {
    char *name;
    void *data;
    void *body;
    deletefn del;
} vertex;

Note the “body” field, which is used by some representations (such as the Adjacency List and Incidence List) to attach per-vertex bookkeeping of their own.

The following functions are available for vertex manipulation:

const char *vertex_get_name(const vertex *v);
Get the vertex’s name

void *vertex_get_data(const vertex *v);
Get the data associated with a vertex

Edges in Graph Representations

The internal implementation of edges differs across representations. In fact, in three representations—Adjacency List, Adjacency Matrix, and Incidence Matrix—edges do not exist as internal objects at all. Nevertheless, from the client’s perspective, edges, as enumerated by the iterator returned from the function for retrieving edges, appear as this structure:

typedef struct {
    vertex *from;
    vertex *to;
} edge;

Here are the functions available for handling edges:

const vertex *edge_get_from(const edge *e);
Get the vertex that is the starting-point of an edge

const vertex *edge_get_to(const edge *e);
Get the vertex that is the end-point of an edge

Sample Program

The sample program, shown below in the section on the intuitive representation, creates the graph introduced earlier using the representation known as “graph1”. It then lists the vertices, their neighbours, and the edges.

Diverse Approaches to Graph Representation

Graphs can be represented in various ways, each method offering unique advantages and use cases. Here are five fundamental approaches to graph representation:

  • The Intuitive Representation: This approach involves describing a graph in a natural language or visual manner, making it easy for humans to understand and conceptualize. While it’s intuitive, it may not be the most efficient for computational tasks;
  • Adjacency List: In this representation, each vertex of the graph is associated with a list of its neighboring vertices. This approach is particularly useful for sparse graphs and can help save memory;
  • Adjacency Matrix: Here, a matrix is used to represent the connections between vertices. It provides a quick way to determine if there is an edge between two vertices but can be memory-intensive for large graphs;
  • Incidence Matrix: This representation uses a matrix to indicate which vertices are incident to each edge. It’s especially useful for directed graphs and can help solve various graph-related problems;
  • Incidence List: In this approach, each edge is associated with a list of its incident vertices. It’s a more compact representation than the incidence matrix and is often preferred for graphs with a low edge-to-vertex ratio.

Choosing the right graph representation depends on the specific requirements of your application, such as memory constraints, computational efficiency, and the type of graph you are working with. Each of these methods has its own strengths and weaknesses, making them valuable tools in the field of graph theory and data analysis.

The intuitive representation: graph1

The representation I refer to as the “intuitive” or sometimes the “object-oriented” representation involves directly translating the mathematical definition of a graph into a data type: the graph simply holds the set of vertices and the set of edges.
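A minimal sketch of what such a type might look like, modelled on the structs shown later for the other representations (the set type and the field names here are assumptions rather than the library’s actual declaration):

typedef struct {
    set *vertices;   /* the vertex set V */
    set *edges;      /* the edge set E of <from, to> pairs */
} graph1;

The sample program mentioned earlier builds the example graph with this representation and then lists the vertices, their neighbours, and the edges: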

#include <stdio.h>
 
#include <graph1.h>
 
int main(void)
{
    graph1 *graph;
    vertex *v;
    vertex *A, *B, *C, *D, *E;
    iterator *vertices, *edges;
    edge *e;
 
    /* Create a graph */
    graph = graph1_create();
 
    /* Add vertices */
    A = graph1_add(graph, "A", NULL);
    B = graph1_add(graph, "B", NULL);
    C = graph1_add(graph, "C", NULL);
    D = graph1_add(graph, "D", NULL);
    E = graph1_add(graph, "E", NULL);
 
    /* Add edges */
    graph1_add_edge(graph, A, B);
    graph1_add_edge(graph, A, D);
    graph1_add_edge(graph, B, C);
    graph1_add_edge(graph, C, B);
    graph1_add_edge(graph, D, A);
    graph1_add_edge(graph, D, C);
    graph1_add_edge(graph, D, E);
 
    /* Display */
    printf("Vertices (%d) and their neighbours:\n\n", graph1_get_vertex_count(graph));
    vertices = graph1_get_vertices(graph);
    while ((v = iterator_get(vertices))) {
        iterator *neighbours;
        vertex *neighbour;
        unsigned int n = 0;
        printf("%s (%d): ", vertex_get_name(v), graph1_get_neighbour_count(graph, v));
        neighbours = graph1_get_neighbours(graph, v);
        while ((neighbour = iterator_get(neighbours))) {
            printf("%s", vertex_get_name(neighbour));
            if (n < graph1_get_neighbour_count(graph, v) - 1) {
                fputs(", ", stdout);
            }
            n++;
        }
        putchar('\n');
        iterator_delete(neighbours);
    }
    putchar('\n');
    iterator_delete(vertices);
    printf("Edges (%d):\n\n", graph1_get_edge_count(graph));
    edges = graph1_get_edges(graph);
    while ((e = iterator_get(edges))) {
        printf("<%s, %s>\n", vertex_get_name(edge_get_from(e)), vertex_get_name(edge_get_to(e)));
    }
    putchar('\n');
    iterator_delete(edges);
 
    /* Delete */
    graph1_delete(graph);
 
    return 0;
}
  • To add a vertex, it’s a matter of including it within the vertex set;
  • Adding an edge involves simply including it within the edge set;
  • Removing vertices and edges entails their removal from their respective sets;
  • When searching for a vertex’s neighbors, examine the edge set for edges where the vertex appears in the “from” field;
  • To determine adjacency between two vertices, inspect the edge set for an edge with the first vertex in the “from” field and the second vertex in the “to” field (a sketch of this scan appears below);
  • Obtaining all edges is straightforward; just retrieve an iterator over the edge set;
  • In the case of undirected graphs, each edge is stored only once, and neighbor identification and adjacency testing consider both vertices.

For undirected graphs, the edge object is described not with “from” and “to” fields but with “first” and “second”, since it represents an unordered pair.

  • In this representation, edges are treated as internal objects, similar to the Incidence List method;
  • It closely resembles a sparse Adjacency Matrix, where the edge set contains adjacent pairs, and non-adjacent pairs are absent.
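Returning to directed edges, the adjacency test in this representation amounts to scanning the edge set. Here is a sketch using only the public functions listed earlier; graph1_get_adjacent itself may well be implemented differently internally:

/* Sketch: test for an edge from v1 to v2 by scanning the edge set. */
static unsigned int has_edge(const graph1 *graph, const vertex *v1, const vertex *v2)
{
    iterator *edges = graph1_get_edges(graph);
    edge *e;
    unsigned int found = 0;
    while ((e = iterator_get(edges))) {
        if (edge_get_from(e) == v1 && edge_get_to(e) == v2) {
            found = 1;
            break;
        }
    }
    iterator_delete(edges);
    return found;
}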

Adjacency List: graph2

  • The graph consists of a collection of vertices;
  • Each vertex includes a set of neighboring vertices.
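Concretely, this suggests a layout along the following lines, with the neighbour set stored through the vertex’s “body” field; the struct and field names below are assumptions modelled on the graph5 declaration shown later, not the library’s actual definitions:

typedef struct {
    set *vertices;        /* the vertex set */
} graph2;

typedef struct {
    set *neighbours;      /* attached to each vertex via its "body" field */
} vertex_body;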

For the graph introduced earlier, the neighbor sets would appear as follows:

A: {B, D}
B: {C}
C: {B}
D: {A, C, E}
E: {}
  • Including a vertex involves simply adding it to the vertex set;
  • Adding an edge entails adding the endpoint of that edge to the neighbor set of the starting vertex;
  • Accessing a vertex’s neighbors is straightforward since the vertex retains all neighbor information.

To expose them, simply return an iterator over the vertex’s neighbour set. In this implementation, the graph argument to the neighbour-retrieval function is not actually needed.

  • Checking for adjacency is straightforward; search the neighbors of the first vertex for the second vertex;
  • Retrieving all edges is more challenging to implement in this representation since edges are not treated as distinct objects.

Instead, iterate through each vertex’s neighbours in turn and construct an edge from the vertex and the respective neighbour.
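A sketch of that enumeration, assuming graph2 mirrors the graph1 interface listed earlier under a graph2_ prefix (those function names are an assumption, as only the graph1 variants are documented above):

/* Sketch: print every edge of an adjacency-list graph by walking each
   vertex's neighbour set; assumes <stdio.h> and the graph2 header. */
static void graph2_print_edges(const graph2 *graph)
{
    iterator *vertices = graph2_get_vertices(graph);    /* assumed API */
    vertex *v;
    while ((v = iterator_get(vertices))) {
        iterator *neighbours = graph2_get_neighbours(graph, v);
        vertex *neighbour;
        while ((neighbour = iterator_get(neighbours))) {
            /* each (vertex, neighbour) pair is one directed edge */
            printf("<%s, %s>\n", vertex_get_name(v), vertex_get_name(neighbour));
        }
        iterator_delete(neighbours);
    }
    iterator_delete(vertices);
}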

Adjacency Matrix: graph3

The graph comprises a collection of vertices and a matrix indexed by vertices. This matrix contains a “1” entry when the vertices are connected.

typedef struct {
    set *vertices;
    matrix *edges;
} graph3;

The adjacency matrix for the graph introduced earlier would appear as follows:

     A  B  C  D  E
A    0  1  0  1  0
B    0  0  1  0  0
C    0  1  0  0  0
D    1  0  1  0  1
E    0  0  0  0  0
  • To add a vertex, insert a row and column into the matrix;
  • When removing a vertex, eliminate its corresponding row and column.

Because the addition and removal of rows and columns are resource-intensive operations, the adjacency matrix is not well-suited for graphs where vertices are frequently added and removed.

  • Adding and removing edges is straightforward, involving no memory allocation or deallocation, just setting matrix elements;
  • To find neighbors, inspect the vertex’s row for “1” entries;
  • To establish adjacency, search for a “1” at the intersection of the first vertex’s row and the second vertex’s column;
  • To retrieve the edge set, locate all “1” entries in the matrix and create edges using the corresponding vertices;
  • In undirected graphs, the matrix exhibits symmetry around the main diagonal.

This allows for the removal of half of it, resulting in a triangular matrix.

  • For efficient vertex lookup, the vertex set should be organized with index numbers, or the matrix should function as a 2-dimensional map with vertices as keys;
  • The memory consumption for edges remains a constant |V|^2.

This is most effective for a graph that is nearly complete, meaning it has a high density of edges.

The matrix can be sparse, aligning memory usage more closely with the number of edges.

Sparse matrices simplify the addition and removal of columns (without block shifts), but necessitate renumbering.

Employing booleans or bits within the matrix can optimize memory usage.
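To make these row-and-column operations concrete, here is a small self-contained sketch using a plain 2-dimensional array for the example graph; the real graph3 uses an opaque matrix type, so this is only an illustration of the idea:

#include <stdio.h>

/* Vertex indices; the enum order matches the names A-E used above */
enum { A, B, C, D, E, NUM_VERTICES };

/* Adjacency matrix of the example graph: adj[i][j] is 1 if there is an edge i -> j */
static const unsigned char adj[NUM_VERTICES][NUM_VERTICES] = {
    /*        A  B  C  D  E */
    /* A */ { 0, 1, 0, 1, 0 },
    /* B */ { 0, 0, 1, 0, 0 },
    /* C */ { 0, 1, 0, 0, 0 },
    /* D */ { 1, 0, 1, 0, 1 },
    /* E */ { 0, 0, 0, 0, 0 }
};

int main(void)
{
    /* Adjacency test: a constant-time row/column lookup */
    printf("D -> C adjacent? %s\n", adj[D][C] ? "yes" : "no");

    /* Neighbour enumeration: scan the vertex's row for 1 entries */
    printf("Neighbours of A:");
    for (int j = 0; j < NUM_VERTICES; j++) {
        if (adj[A][j]) {
            printf(" %c", 'A' + j);   /* relies on the enum matching the letter order */
        }
    }
    putchar('\n');
    return 0;
}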


Incidence Matrix: graph4

The graph comprises vertices and a matrix, much like the Adjacency Matrix. In this representation, however, the matrix has dimensions vertices × edges, and each column holds two non-zero entries designating the starting and ending vertices of one edge. This is more compact than an adjacency matrix only when the graph is sparse, that is, when it has fewer edges than vertices; for denser graphs the vertices × edges matrix is the larger of the two.

typedef struct {
    set *vertices;
    matrix *edges;
} graph4;

The incidence matrix for the graph introduced earlier appears as follows (where “1” denotes “from” and “2” denotes “to”):

     AB  AD  BC  CB  DA  DC  DE
A     1   1   0   0   2   0   0
B     2   0   1   2   0   0   0
C     0   0   2   1   0   2   0
D     0   2   0   0   1   1   1
E     0   0   0   0   0   0   2
  • When adding a vertex, introduce a new row to the matrix;
  • For adding an edge, insert a new column into the matrix;
  • When removing a vertex, its row must be removed, along with every column corresponding to an edge incident to that vertex;
  • To retrieve edges, iterate through the columns and construct edges from the paired values;
  • To identify neighbours, search for “1” entries in the vertex’s row, and within each such column locate the “2” entry, which marks the neighbour (sketched below);
  • To establish adjacency, locate a column with “1” in the starting-point vertex’s row and a “2” in the end-point’s row;
  • In the case of an undirected graph, each edge corresponds to one column with “1” denoting “connected,” resulting in two “1s” per column.
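As with the adjacency matrix, a plain-array sketch makes the column scans concrete. The entries follow the “1 for from, 2 for to” convention above; this is purely an illustration, not the graph4 implementation:

#include <stdio.h>

/* Vertex indices; the enum order matches the names A-E used above */
enum { A, B, C, D, E, NUM_VERTICES };
#define NUM_EDGES 7

/* Incidence matrix of the example graph: one column per edge,
   1 marks the edge's starting vertex and 2 marks its end vertex */
static const unsigned char inc[NUM_VERTICES][NUM_EDGES] = {
    /*        AB AD BC CB DA DC DE */
    /* A */ {  1, 1, 0, 0, 2, 0, 0 },
    /* B */ {  2, 0, 1, 2, 0, 0, 0 },
    /* C */ {  0, 0, 2, 1, 0, 2, 0 },
    /* D */ {  0, 2, 0, 0, 1, 1, 1 },
    /* E */ {  0, 0, 0, 0, 0, 0, 2 }
};

int main(void)
{
    /* Neighbours of D: find columns with a 1 in D's row, then the row holding the 2 */
    printf("Neighbours of D:");
    for (int e = 0; e < NUM_EDGES; e++) {
        if (inc[D][e] == 1) {
            for (int v = 0; v < NUM_VERTICES; v++) {
                if (inc[v][e] == 2) {
                    printf(" %c", 'A' + v);   /* relies on the enum matching the letter order */
                }
            }
        }
    }
    putchar('\n');
    return 0;
}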

Incidence List: graph5

Similar to the concept of an Adjacency List, this representation involves a set of vertices. However, in contrast to the Adjacency List, each vertex stores a list of the edges for which it serves as the starting point, rather than merely listing neighbors. This approach is particularly well-suited for certain applications and facilitates efficient access to information regarding the edges originating from each vertex.

typedef struct {
    set *vertices;
} graph5;
 
typedef struct {
    set *edges;
} vertex_body;

For the graph introduced earlier, the sets of edges would be represented as follows:

A: {<A, B>, <A, D>}
B: {<B, C>}
C: {<C, B>}
D: {<D, A>, <D, C>, <D, E>}
E: {}
  • When incorporating a vertex, it entails adding it to the vertex set;
  • To add an edge, it is included in the edge set of its starting vertex;
  • To determine if two vertices are adjacent, one needs to inspect the edge set of the first vertex for an edge that includes the second vertex as its “to” field;
  • Obtaining neighbors involves extracting them from the pairs in the set of edges associated with the vertex;
  • Accessing the edge set necessitates iterating through each vertex’s edge sets in succession;
  • It is possible to store the edges both within the graph object and in each individual vertex for efficient data access.

Conclusion

This exploration of graph data structures sheds light on their fundamental elements, orientation, adjacency, and formal definition. With a focus on the C programming language, the piece offers a suite of practical operations for creating, altering, and traversing graphs. By elucidating the nuances of vertices, edges, and their interrelationships, readers are equipped with a solid understanding of graph implementation, serving as a useful resource for anyone striving to deepen their grasp of data structures.
