CuriouSTEM

Computing Systems: Sequential, Parallel, and Distributed Computing

Just like computers, we solve problems and complete tasks every day, and there is usually more than one way to solve a given problem, with some approaches better than others. The same is true when a computer solves a problem. Sequential computing, the standard method, executes each step of a program in order, one at a time. In programs that contain thousands of steps, sequential computing can take an enormous amount of time, and that time has financial costs. Fortunately, there are two important alternatives to time-consuming sequential computing: parallel and distributed computing. The two terms are often used interchangeably, but in reality they are two different approaches that help us run our algorithms faster.

But what is parallel computing? Parallel computing uses multiple processors to work on a task simultaneously. We break the job into smaller sequential operations and run those pieces concurrently. One way to measure the benefit of parallel computing over sequential computing is speedup: the ratio of the time taken to run the program sequentially to the time taken to run the parallelized version. The speedup may not look substantial at first, but as the input size grows into the thousands or millions, the difference becomes meaningful.
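To make the idea concrete, here is a minimal sketch in Python (a hypothetical example, not code from the original article) that runs the same forty steps once sequentially and once on a pool of four processes, then prints the speedup, that is, the sequential time divided by the parallel time. The function name heavy_task and the choice of four processes are assumptions made for illustration.

```python
# speedup_demo.py -- a minimal sketch comparing a sequential run with a
# parallel run, then printing the speedup described above.
import time
from multiprocessing import Pool

def heavy_task(n):
    # A deliberately CPU-heavy step, so the work itself dominates the
    # overhead of coordinating multiple processes.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 40                         # 40 identical steps to perform

    start = time.perf_counter()
    sequential = [heavy_task(n) for n in jobs]    # one step at a time, in order
    t_sequential = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=4) as pool:               # assumes 4 cores are available
        parallel = pool.map(heavy_task, jobs)     # steps run concurrently
    t_parallel = time.perf_counter() - start

    assert sequential == parallel                 # same answers, different timing
    print(f"sequential: {t_sequential:.2f}s  parallel: {t_parallel:.2f}s")
    print(f"speedup: {t_sequential / t_parallel:.2f}x")
```

On a machine with at least four cores, the parallel run typically finishes in a fraction of the sequential time; on very small inputs, the overhead of starting processes can erase the gain, which matches the point above about speedup growing with the input size.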

Distributed computing spreads a problem across multiple computing devices, which connect over a network to communicate. For example, in a simple distributed computing system, a managing computer sends the appropriate data to each of the working computers, and the working computers send their results back to the managing computer, all across a shared network. As with parallel computing, we use speedup to compare the outcome to sequential computing: here it is the ratio of the time taken to run the program sequentially to the time taken to run the distributed version.
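The manager/worker arrangement described above can be sketched in Python as follows (a hypothetical example, not code from the original article). The managing computer listens on a network address, sends each working computer that connects a share of the data, and adds up the partial results that come back. The file name, port, and authkey are assumptions made for illustration.

```python
# manager_worker_demo.py -- a minimal sketch of the manager/worker idea above.
import sys
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6000)   # assumed free port; use the manager's IP on a real network
AUTHKEY = b"curioustem"         # shared secret so only our workers can connect

def manager(num_workers):
    data = list(range(1_000_000))
    total = 0
    with Listener(ADDRESS, backlog=4, authkey=AUTHKEY) as listener:
        conns = []
        # Hand the i-th share of the data to the i-th worker that connects.
        for i in range(num_workers):
            conn = listener.accept()
            conn.send(data[i::num_workers])
            conns.append(conn)
        # Collect one partial result from each worker and combine them.
        for conn in conns:
            total += conn.recv()
            conn.close()
    print("sum of squares:", total)

def worker():
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        numbers = conn.recv()                    # receive this worker's share of the data
        conn.send(sum(n * n for n in numbers))   # send the partial result back

if __name__ == "__main__":
    if sys.argv[1] == "manager":
        manager(int(sys.argv[2]))   # e.g. python manager_worker_demo.py manager 2
    else:
        worker()                    # e.g. python manager_worker_demo.py worker
```

Running `python manager_worker_demo.py manager 2` in one terminal and `python manager_worker_demo.py worker` in two others plays out the exchange above; replacing localhost with the manager's network address would let the workers run on separate computers.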

The main difference between the two methods is that parallel computing uses one computer with shared memory, while distributed computing uses multiple computing devices, each with its own processors and memory. What they share is that both appear in our lives every day. Parallel computing is used in many industries that handle enormous quantities of data, including astronomy, meteorology, medicine, and agriculture. Distributed computing is everywhere as well: the World Wide Web is an example of a massive distributed computing network, and the Internet allows distributed computing on a global scale. Smaller-scale examples include smart homes and cell phone networks. Overall, even though parallel and distributed computing may sound similar, they execute processes in different ways, and both have an extensive effect on our everyday lives.

Picture Source: electricalvoice.com