What is a computer?

28 Aug 2021
Table of Contents
I. Introduction
II. What is a Computer?
- A. History of Modern Computers
- B. Hardware
- C. Software
- D. Modern Computers and Security
III. Conclusion
Appendices
Footnotes
Literature Review
References
 

Abstract— Everyone living in the modern world has utilized a computer at some point in time, but have you ever stopped to wonder, “what is a computer?” This paper covers the concept of what a computer is and how one functions. Computers are everywhere, but many people will be puzzled if you ask them this seemingly simple question. They will probably show you their phone, or another gadget or tool, and say, “well, this is a computer,” but what classifies these items as computers? In this paper, a proper definition of a computer, as given by the well-known Merriam-Webster dictionary, is analyzed, and the paper continues by deconstructing this definition to find what a computer is. Following the analysis of the definition, the paper covers a brief history of modern computers and, along the way, finds what would be defined as a computer in modern terminology. After describing the history in terms of pioneers, we analyze what components a modern computer system comprises by the standards of 2020. Next, the topic of computers in everyday life, and the importance of cybersecurity, are examined. The paper ends in a philosophical, open-ended way meant to evoke ethical and existential ideas in the reader. The theme across most paragraphs is based on human centrism and the problems that follow from this fallacy. By the end, the reader will hopefully obtain a new perspective on computational systems and the future of computer science.

Keywords— computer science, discrete math, theory of computation, cybersecurity, what’s a computer.

I. Introduction

This paper was originally written for the Fall 2020 ICS 222 course at the University of Hawai`i at Manoa. The prompt for the original assignment was the same as the title. This was my first formal writing on a computer science topic, and some modifications were made one year later to add more depth and correct some issues around flow, grammar, and concepts. The SARS-CoV-2 pandemic has given me monumental amounts of time to dive beneath the surface of computer science. Thus, this paper contains content from a period when I knew very little about computers, and content from a time when I knew a little more about computer science (I’d like to think I know more than I actually do). The majority of modifications made were to formatting, flow inconsistencies, and footnotes, along with the addition of the Appendices. This paper is both a brief summary of modern computer history and the anatomy of a computer, while also poking at some philosophical and societal concepts and dilemmas. The only recommended prerequisites for this paper are high school level mathematics and English. I want this to be accessible to anyone interested in computer science, and I hope people already in the field can read this and still obtain valuable knowledge. Some topics are expanded upon in the Appendices, which cover many fundamental concepts of computer science (some knowledge of calculus and linear algebra will help in understanding these). I have also included links to great resources under Literature Review. Most of these resources are difficult to digest from a beginner’s perspective but will expand the mind. The footnotes provide insight into why I wrote something a certain way, concepts that are difficult to contextualize, and definitions. I appreciate critiques and comments, which can be sent to my e-mail or Discord. I hope this paper is engaging and that the reader becomes inspired to delve deeper into the field of computer science.

II. What is a Computer?

A computer is defined by Merriam-Webster as “a programmable, usually electronic, device that can store, retrieve, and process data” [1, References]. This provides an idea of what a computer is, but the definition is deceptively definitive¹. The abacus is the oldest tool found across ancient civilizations to aid people in computational tasks, dating back as early as 2700 BC, but it does not fit the description above. Determining whether an abacus is a computer could constitute its own paper, but it can be described simply as a tool for arithmetic calculations that requires a user to produce functionality. The abacus is capable of storing and processing data, but it is not implicitly programmable [Appendix A]. It is left to the user to execute instructions and interpret the output, which shows that it does not fit the definition of a computer. However, a modern computer can also be described as a tool that is unable to do any computation until a user performs actions. Thus, the definition above is fragmentary, and the abacus should be regarded as the most primitive computer. The definition contains a time-flow fallacy², which is band-aided by the generalized injection, “usually electronic.” This fallacy is built on human centricity and cannot be fixed by adding a statement; the definition should be restructured entirely to solve the problem³. Human centricity, whether in terms of species or something else such as time, is unfortunately inevitable, but it should be recognized, as it can create false imagery and ideologies. A new definition can thus be derived: a machine that can process data through storage and manipulation, based on any formalized task.

A. History of Modern Computers

This new definition sufficiently describes a computer as defined by Merriam-Webster without logical fallacy. This new definition also includes the abacus and all mechanical calculators. The origin of modern computers can be found in mathematical computational systems such as the Turing Machine and the lambda calculus. The Turing Machine was described by Alan Turing in the 1936 paper, On Computable Numbers, with an Application to the Entscheidungsproblem. Turing discussed an automatic machine that was capable of data manipulation based on its configuration [2, References]. Turing then described a computing machine that combines an automatic machine with an infinite tape, performing computations automatically according to the machine’s instructions [Appendix B]. The lambda calculus model was described by Alonzo Church, under whom Turing later studied; their work underlies what is now called the Church-Turing thesis⁴. Both Church and Turing worked independently on their models and soon discovered they had created two different systems that accomplish the same task. Around the same time that the papers were produced, the Z1 was created by Konrad Zuse [3, References]. The Z1 was a physical machine and can be considered the first computer by the original definition. However, this is heavily disputed, as the programmability of the machine was very limited and the machine was highly unreliable. By the 1940s, many machines being built were fully electronic, and eventually computers became more advanced and commercialized.

One of the first large-scale automatic computers, the Harvard Mark I, was mammoth in size and functioned as a calculator that could do basic computations in a few seconds and logarithmic problems in the range of minutes, depending on the problem difficulty [4, References] [Appendix C]. It was electromechanical rather than fully electronic, relying on relays and other mechanisms to perform procedures; however, it was fully automated, requiring no manual intervention where humans aid the computer. The Harvard Mark II, the successor to the Mark I, also marks a pioneering event, as it was the first computer to contain a bug [5, References]. It was a very real moth that had gotten smashed on a relay panel of the computer. This is the origin of ‘debugging,’ which refers to checking programs for flaws and verifying their functionality. Both of these were general-purpose computational systems and meet both definitions. However, the first computer is most often considered to be the Electronic Numerical Integrator and Computer, ENIAC, first introduced in 1946 [6, References]. ENIAC was the first of its kind to remove the mechanical parts and replace them with vacuum tubes and diodes for computation and storage. ENIAC was also much larger than the Harvard Mark I and II, requiring approximately 1,800 square feet of floor space. This computer was only used by the government and large businesses, and computers for the general public were still decades away.

The first versions of computers intended for personal use were created in the early to mid-1970s; however, only computer enthusiasts were able to get full functionality out of the machines [7, References]. This is most apparent with the release of the Altair and the Apple I. The release of the Apple II is believed to be the first successful mass-produced personal computer system; however, it still required a good understanding of computing to use. In present times, computers are everywhere, and they are consistently becoming more powerful. Two aspects make up a computer, the hardware and the software, which in tandem allow modern computers to function and access resources like the World Wide Web or other information services.

B. Hardware

For hardware, a computer generally consists of a motherboard, storage drive, random access memory (RAM), a power supply unit, and expansion cards that specialize in a specific category [8, References]. These expansion cards are not necessary to run a computer but can significantly enhance areas such as graphics, sound, and network connection. There is also a central processing unit, CPU, located on the motherboard, which receives instructions and manipulates data [Appendix D]. There may also be other components like a fan to cool the units which get hot from high amounts of computation. These are hardware components making up the computer itself, but there are also input and output devices necessary to facilitate user interaction and observe the output after computation.

The motherboard is the main circuit board of the computer and contains connections to all of its components. These connections transfer streams of data between the different components based on the instructions the CPU receives. RAM is a set amount of storage space that is readily available for the computer to access. When a file is opened, the CPU copies it from the storage drive into RAM. From then on, the CPU can work solely from RAM rather than going through the slower process of accessing the file from the storage drive. All of these components are used to automate computation, but in order to compute something, a formal set of instructions must be created.
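
To make the storage-versus-RAM idea concrete, here is a minimal Python sketch that simulates the behavior: the first access to a file pays a simulated storage-drive delay, while every later access is served from a dictionary standing in for RAM. The function names, the 0.05-second delay, and the file name are all hypothetical, purely for illustration.

```python
import time

def read_from_disk(name):
    """Stand-in for a slow storage-drive access (delay is simulated)."""
    time.sleep(0.05)              # pretend seek + transfer time
    return f"contents of {name}"

ram = {}                          # stand-in for files already loaded into RAM

def open_file(name):
    if name not in ram:           # not in RAM yet: fetch it from the drive
        ram[name] = read_from_disk(name)
    return ram[name]              # every later access is a fast RAM lookup

start = time.perf_counter(); open_file("report.txt")
print(f"first access (disk): {time.perf_counter() - start:.3f}s")

start = time.perf_counter(); open_file("report.txt")
print(f"second access (RAM): {time.perf_counter() - start:.6f}s")
```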

C. Software

Software is the platform that allows the user access to the components of the computer. All computers are navigated through an operating system that bridges the gap between files and the user. There are many operating systems, with the most popular being Windows, macOS, and Linux for standard computers, and Android and iOS for mobile devices [9, References]. Some examples of what operating systems do include booting the computer, file architecture and manipulation, data security, disk and device management, printing and network control, the user interface, logging activity, and loading and executing programs. Each operating system, and even each programming language, has its own unique benefits and pitfalls and was created with specific tasks in mind.

Programming languages are based on a lexicon that a developer understands; however, a computer is unable to comprehend such an abstract language. The CPU requires machine language, encoded in binary, to manipulate data, and the translation is accomplished with a compiler, assembler, and linker [10, References]. This process is conceptualized in compiler theory. Modern compilers have many options to check for errors in code, although these are most often syntactic errors, and they translate the source code into machine language. Syntax errors are errors in the written form of the programming language itself; errors of logic, which a compiler generally cannot catch, are called semantic errors. This was also examined in footnote 3, which is very important, as code that compiles may still have many errors.
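
As a small illustration of that difference, consider the following Python sketch: the commented-out line would be rejected by the translator before the program ever runs (a syntax error), while the `average` function runs without complaint but computes the wrong answer (a semantic, or logic, error). The function and values are made up for the example.

```python
# Syntax error: the translator rejects this before the program ever runs.
# print("hello world"   <- missing closing parenthesis

# Semantic (logic) error: this translates and runs, but the logic is wrong.
def average(values):
    # Bug: dividing by a hard-coded 2 instead of len(values).
    return sum(values) / 2

print(average([2, 4, 6]))  # prints 6.0, but the true average is 4.0
```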

In a compiled language, the compiler, with the assembler translating its low-level output into machine code, produces object files, which contain the translated code. The object files can then be passed to the linker, which combines multiple object files into an executable file. Computer science has numerous fundamental concepts, and computer scientists should have a good understanding of most, if not every, topic. At each point where information is processed, there are potential areas that can be vulnerable to exploitation, and these must be thoroughly examined to maintain the security of data and the system. This is the fundamental idea behind cybersecurity, which has many sub-divisions within it, all aiming to provide a safe platform where users’ privacy and data are protected. There is also the other end of the spectrum, where attackers try to find and exploit weaknesses in the system to obtain sensitive data or breach privacy [Appendix E]. It is essential that all programmers are familiar with security, as this will help to provide a safe environment for the users and systems involved.
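
Returning to the translation pipeline: Python is interpreted to bytecode rather than linked into a native executable, so the sketch below is only an analogy to the compile-assemble-link process described above. Here `compile()` plays the role of the compiler producing a translated artifact, the `dis` module lets us inspect the low-level instructions, and `exec()` stands in for running the final executable.

```python
import dis

source = "x = 2 + 3\nprint(x)"

# compile() translates source text into a code object, loosely analogous
# to a compiler producing an object file.
code_obj = compile(source, "<example>", "exec")

# dis shows the low-level instructions the interpreter will execute,
# loosely analogous to inspecting assembly or machine code.
dis.dis(code_obj)

# exec() runs the translated code, analogous to launching the executable.
exec(code_obj)
```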

D. Modern Computers and Security

Computers have reached a point where they are incorporated into every aspect of life, and many of these systems can be accessed by unauthorized users. One example is cars, as on-board computers process driver actions and regulate internal processes. There are also autonomous vehicles that use numerous sensors to drive with little user interaction. There have already been numerous incidents of cars being hijacked by attackers, and the same can be done with many other machines. Printers, televisions, and even light bulbs are all being connected to networks for autonomy, but these connections can also be used to inflict devastating damage. People buy technology because they see its usefulness in enhancing their lifestyles; however, they put their faith in the engineers and developers to create safe products. This is all the more reason to learn about information security, even if only at a basic user level. The more information the general public learns and applies to their lives, the more secure society can become.

Every decade, the world evolves into an almost unrecognizable environment as everything is digitized. This evolution only happens because technology is consistently getting more powerful. The security of the future of society is a daunting problem and can create existential crises, as everything is wildly unpredictable. This raises many philosophical questions, such as: when will computers become advanced enough to perform their own actions, or write their own programs? Will the events of Skynet become a reality rather than just a work of fiction? Would it be possible to extract the conscious mind of a human and upload it to a network? If an android were to use an abacus independent of human instruction, would this imply that humans are computers too?

III. Conclusion

I hope this ending raises many questions about existence and omnipotent beings. While the idea of an android using an abacus may seem redundant, it is meant to serve as a connection both to an already established idea and to computers conducting human tasks without being directly programmed to do so. The ever-evolving world of computer science brings forth many new problems and still leaves many problems unsolved. We often try to make sense of these problems, and potentially work for centuries to attain a better understanding of these dilemmas. Philosophers have described the existential affliction we all face, and the theoretical threat of artificial intelligence has been a part of society for decades. Accordingly, if a computer claims to be conscious and self-aware, it would be naïve to state that it is not. After all, I can claim I am self-aware and exist, but I will never be able to prove that you are; I can only take your word for it. However, computational systems are not human-exclusive; we simply have not yet found another intelligent life-form that has built them. What would then happen if our creation becomes more advanced than us? Would we appreciate the work done to unlock the hidden truths behind the veil of reality, or would we collapse under our perceived imperfection?

Appendices

The Appendices are meant to provide a general overview of a concept or concepts, and to spark interest in the reader to research and learn more than the basic ideas described. I have kept formalizations and lemmas out of these appendices to provide a welcoming environment for readers new to computer science, or who are intimidated by advanced mathematics. The concepts described are meant to be further researched by the reader to gain a critical and comprehensive understanding of discrete mathematics and algorithms. I suggest Discrete Mathematics and Its Applications by Kenneth Rosen, and Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (CLRS). These teach fundamental concepts thoroughly and were used as resources for Appendices A–D.

Appendix A

A program, in terms of computer science, is a formalized sequence of computations that receives input data and manipulates that data to produce an output. Programs are based on algorithms, and an algorithm follows the same description, except that it does not need to be formalized in a programming language. Algorithms are analogous to mathematical functions, as they can manipulate numbers to produce an output given an input. However, the inputs and outputs of algorithms need not be directly observed by the user. This can also be described as functions being abstractions of algorithms, since algorithms manipulate data step by step while functions simply relate inputs to outputs. Mathematical functions should not be confused with program functions, which are algorithms written in a coding language. Thus, we can form a hierarchy where a mathematical function is an abstraction of an algorithm, which in turn is an abstraction of a program.
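
To illustrate the hierarchy with a concrete (and admittedly simple) example: the greatest common divisor is a mathematical function, Euclid's algorithm is one procedure for computing it, and the Python function below is a program realizing that algorithm.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```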

There are many programming paradigms that describe a programming language’s purpose, and each language can fall under multiple categories. However, each paradigm serves a different purpose and manipulates data differently from the others. One central concept across all paradigms is the use of variables to store data. With this ability, we can start to combine variables, along with arithmetic and logical operations, to build more complex computations. For example, object-oriented programming, OOP, is based on the idea of manipulating bundles of data and behavior, called objects, which the developer assigns to variables. OOP also utilizes data structures to organize and store the data manipulated by the program. These objects can be compounded to create more advanced data structures than the ones OOP languages provide innately. Data structures and program hierarchy are essential for all programmers to understand, and resources on these topics are easy to find. OOP languages such as Python and Java are usually a programmer’s first language, because they allow the person to learn how to program without having to understand complex tasks such as memory allocation and evaluation strategies. Developers should be able to recognize a language’s paradigm, and the differences among paradigms, to ensure a language is suitable for solving the problem in mind. This will also help to diversify one’s abilities and allocate time spent learning effectively.
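
As a minimal sketch of the OOP idea, the following hypothetical Python class bundles a variable holding data together with the behavior that manipulates it, forming a simple stack data structure.

```python
class Stack:
    """A simple data structure built from an object holding a list."""
    def __init__(self):
        self._items = []          # state stored in a variable on the object

    def push(self, item):
        self._items.append(item)  # behavior that manipulates that state

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 — last in, first out
```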

Appendix B

If Turing Machines and mathematical models of computation are unfamiliar, I highly recommend researching formal language theory, automata theory, and program theory. These three concepts (along with coinduction and corecursion, which are very interesting but not a necessity) were the concepts covered in ICS 222 at UHM. It was intensely difficult but extremely rewarding and fun to learn. However, there is a relatively high level of discrete math knowledge needed to tackle these topics confidently, particularly graph and set theory. Turing Machines are described in a very basic form in this paper, but many intricate parts make up the machine. Each machine can be described by a matrix, or table, where the rows of the matrix represent states of the machine and the columns represent symbols that are recognized by the Turing Machine. Each entry of the matrix specifies the actions and transition to take upon reading a given symbol. The actions performed are write, move left, and move right. The write action may or may not be performed, but there is always a movement across the tape. The lambda calculus is similar to Turing Machines but takes a more abstract, function-based view of computation. Both are equal in computational ability, but Turing Machines show bit-by-bit manipulation across each state of computation, while the lambda calculus models the flow of data through functions to derive an output.
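
The table-of-rules description above can be turned into a tiny simulator. The sketch below is a simplified, hypothetical Turing Machine in Python: its rule table maps (state, symbol) pairs to a write action, a head movement, and a next state, and this particular machine simply flips every bit on its tape before halting.

```python
# Transition table: (state, symbol) -> (symbol_to_write, move, next_state)
# This machine flips every bit on the tape, then halts at the first blank.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", +1, "halt"),   # "_" is the blank symbol
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        # Extend the (conceptually infinite) tape with blanks as needed.
        while head >= len(tape):
            tape.append("_")
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write      # write action
        head += move            # move left (-1) or right (+1)
    return "".join(tape).rstrip("_")

print(run("01101"))  # 10010
```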

Appendix C

The concept of runtime is a fundamental tool for proving efficiency. This is known as time complexity and requires rigorous proof techniques to establish, given an arbitrarily large number of inputs, how long a program will run. This time is based on a concept known as asymptotic runtime, which bounds the running time as a function of the input size n. There are five notations used in relation to the program being analyzed: O, Θ, Ω, o, and ω, named Big-O, Theta, Big-Omega, little-o, and little-omega respectively. Big-O and Theta notation are the most used, and each notation corresponds to an inequality symbol: ≤, =, ≥, <, and > respectively. These describe how a program’s runtime is bounded for some arbitrarily large value of n. For example, Big-O describes an upper bound on the program’s runtime over n input elements. We use the term arbitrarily large to recognize that some functions have variations in runtime for small values of n; the bound is only required to hold for all n beyond some threshold n₀ (read “n-naught”).

This concept can best be described by a graphical comparison of n and √n. When √n carries a large constant factor, the root function is larger than the linear function for small n, but after n₀, n is the upper bound of √n (see footnote 5). In this case, for T(n) = √n we could write T(n) = o(n), meaning the runtime √n is strictly bounded above by n. This is because after n₀, √n and n will never again be equivalent; the bounding function grows strictly faster than the function being evaluated. We could also use Big-O for the evaluation, since any little-o bound is also a Big-O bound; however, a Big-O bound cannot be treated as a little-o bound unless the bounding function is proven to grow strictly faster than the runtime being evaluated. We can combine Big-O and Big-Ω to establish Θ if and only if the same function bounds the runtime both above and below, up to constant multipliers. There is a lot that is not covered here, but time complexity is critical to understand when analyzing a program’s runtime. One must also have a good understanding of summations (and their integral approximations, such as Riemann sums) to analyze programs that contain loops and recursion. There is another topic closely related to time complexity known as space complexity. Space complexity utilizes the same notations but describes how much memory a program takes up.
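
Tying this to footnote 5, the following sketch compares a hypothetical runtime of 1000·√n against n and searches for the threshold n₀ beyond which the linear function dominates; past that point the constant 1000 no longer matters, which is exactly why constants are dropped in asymptotic notation.

```python
import math

# f(n) = 1000 * sqrt(n) grows more slowly than g(n) = n,
# but for small n the constant 1000 makes f the larger of the two.
f = lambda n: 1000 * math.sqrt(n)
g = lambda n: n

# Find the smallest n0 where g overtakes f (and stays above it,
# since n grows strictly faster than sqrt(n) from that point on).
n0 = next(n for n in range(1, 2_000_000) if g(n) >= f(n))
print(n0)  # 1000000: beyond this threshold, 1000*sqrt(n) = o(n)
```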

Appendix D

A graphics card, also known as a Graphics Processing Unit, GPU, traces back to the specialized graphics chips of 1970s arcade machines, which originally processed the colors of images to be displayed on screen. Over time, the GPU has been transformed into something extremely powerful, as it is capable of a process called parallel processing. In modern times, the GPU is crucial for things like rendering high-resolution images and videos, video games, machine learning, and cryptocurrency mining. The CPU is also capable of parallel processing, and the time reduction can be dramatic for intense computations. This is based on the unit’s processor cores, as each core can read instructions and manipulate data individually. The difference between a CPU and a GPU is that the CPU manipulates memory in RAM or a storage disk, while the GPU has dedicated memory that it can access much faster than the CPU can access either of those storage units.

There are two techniques used in parallel programming which have distinctly different implications, although this can be unclear and confusing at first. These techniques are called multithreading and multiprocessing. Multithreading creates multiple threads within a single process that share the same memory; their interleaved execution is known as concurrency. Multiprocessing creates separate processes, each with its own memory, which the processor cores run in parallel. Thus, multithreading is faster in terms of starting the procedure, since there is more setup and breakdown for multiprocessing computations. Furthermore, multiprocessing only provides a speedup when more than one core is available for the procedure. Parallel processing is most useful when a program contains repeated, independent work, such as loops or recursion over large data sets. However, many problems can arise with parallel programming, and it requires a good understanding of how memory is manipulated by the CPU as well as other concepts from discrete mathematics. For example, given an arbitrary program that repeats a process for an unspecified amount of time, no general procedure can determine whether the program will always reach completion. This is one of many problems in computer science that are considered unsolvable. It is known as the Halting Problem, and Alan Turing used it to prove that certain questions about formal mathematical systems are undecidable. This study of unsolvable problems belongs to a branch of discrete math known as computability and complexity theory, which is complex but necessary to understand. It can help to prove whether a problem has characteristics that match known unsolvable problems.
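
The difference can be felt with a small experiment. The sketch below, a hedged illustration with made-up workload sizes, runs the same CPU-bound function under Python's thread pool and process pool; on a standard CPython interpreter with multiple cores, the thread version stays roughly serial because the threads share one interpreter, while the process version spreads the work across cores at the cost of extra startup.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    """A CPU-bound loop: pure computation, no waiting on input/output."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, label):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(burn, [2_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    # Threads share one memory space; in CPython the global interpreter
    # lock keeps this CPU-bound work effectively serial (concurrency).
    timed(ThreadPoolExecutor, "multithreading")
    # Processes each get their own interpreter and memory, so the work
    # runs in parallel on separate cores, after a larger setup cost.
    timed(ProcessPoolExecutor, "multiprocessing")
```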

Appendix E

Security is arguably the most demanding area of computer science. Becoming a security expert is often described as requiring a breadth of knowledge over every field in computer science, while also requiring considerable depth in each field. I like to think of security as a never-ending race to a non-existent finish line. This breadth and depth of knowledge are required because data flows through everything within modern computational systems. As expressed in the reading, cybersecurity has many sub-fields within it, and the most overarching split is the idea of attackers versus defenders. However, even within these fields, there are vast sub-branches covering every aspect of modern systems. Resource security, network security, data forensics, and telecommunication are all fields that accomplish vastly different tasks. Most of these are commonly associated with the defense of a system, except for telecommunication, but defending a system is only effective if you understand how attackers attack a system. This is also true for the attackers, as they have to understand what they can exploit, what the defenders create as defense mechanisms, and how to find or create exploitable paths and waypoints to conduct an attack. The most popular form of cybersecurity, as portrayed in media, is hacking, but hacking can be done by anyone. In the first footnote, I introduced a basic concept of what hacking is: deconstructing a system of processes and reconstructing it to complete a task that was not originally intended in its design. Hacking as shown in media is flashy and juvenile, and it can carry many legal ramifications for the attacker. Thus, security requires a considerable amount of knowledge of laws and should be conducted with sound ethics. Furthermore, becoming an expert requires innovation through creativity, patience, and perseverance.

A relatively new branch of mathematics, known as category theory, is becoming fundamental to security, as it provides a platform for formal proofs about mathematical abstractions. Category theory utilizes concepts from discrete mathematics and abstract algebra to build a directed network, called a category. The primary concepts used are graph theory, type theory, ring theory, and group theory, and each topic has its own foundational concepts. Categories obey two fundamental axioms: composition of arrows between objects is associative, and every object has an identity arrow. This topic is possibly the most advanced concept mentioned in this paper, and my knowledge leans more toward fundamentals than explicit usage. However, even with the limited knowledge I have, I can see its use in effectively proving, or disproving, that a system maintains security. Knowledge of category theory can also help in the study of quantum mechanics and quantum computing, which are complex and counter-intuitive but essential for the future of computer science, especially cybersecurity.
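
As a small, informal check of those two axioms, the Python sketch below treats functions as arrows and function composition as the categorical operation, then verifies on a handful of sample inputs that composition is associative and that the identity function behaves as an identity; this is an illustration on sample data, not a proof.

```python
# In the category of Python functions, objects are types and arrows are
# functions; composition must be associative, and each object needs an
# identity arrow.

def compose(g, f):
    return lambda x: g(f(x))

identity = lambda x: x

f = lambda x: x + 1      # arrow: int -> int
g = lambda x: x * 2      # arrow: int -> int
h = lambda x: x - 3      # arrow: int -> int

for x in range(-5, 6):
    # Associativity: h . (g . f) == (h . g) . f
    assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
    # Identity: id . f == f == f . id
    assert compose(identity, f)(x) == f(x) == compose(f, identity)(x)

print("composition is associative and identities behave as expected")
```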

Footnotes

1 While dictionaries are essential to language, natural language evolves over time. The definition seems to make sense while also creating confusion when applied. The following ideas may seem overly argumentative but are meant to showcase the fact that everything can, and should, be questioned. If you follow descriptions and dictation precisely, you can start to deconstruct things within the guidelines established, and use this to build a new perspective on the topic. My goal is to help break barriers the reader may unknowingly hold to advance learning abilities, think critically, and ask questions.

2 A time-flow fallacy is a type of logical fallacy that describes a bias toward the flow of time being contextualized around the perspective of the observer. This fallacy then requires all subsequent observers to make the same assumption regardless of events occurring prior to the fallacious argument. For the definition, the fallacy lies in describing what a computer of modern times is capable of while failing to sufficiently describe computational devices used before modern technology. Logical fallacies are vital to understand, as they can appear in formal proofs, which would render the proof inconsistent or completely false.

3 We can draw a parallel here to programming: when a function has a logical problem, simply adding or removing syntax does not remove the problem. Even if the problem disappears after such a change, one must structurally redesign the logic to truly solve it. If code is simply band-aided, these flaws have the potential to be exploited.

4 The Church-Turing thesis grew out of the same foundational program in logic as Kurt Gödel’s incompleteness theorems. Gödel’s incompleteness theorems are complex but very interesting; they conclude that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proven within the system, which implies the system is incomplete. The Church-Turing thesis is the claim that a function can be calculated by an effective method if and only if it is computable by a Turing Machine. These concepts are fundamental to computability theory, which is touched upon in Appendix D.

5 This may raise some confusion, as n should only take integer values, but if there is some defined constant that modifies the runtime of the function of n, this constant simplifies to O(1). For example, O(1000 · √n) simplifies to O(√n), as 1000 is a defined constant value. Similarly, given a function with a code block that runs n times, plus 1000 constant-time procedures before or after the code block, the runtime will be O(n), since O(n) + O(1000) = O(n) + O(1) = O(n).

Literature Review

I have provided links to readings on specific topics, and links to YouTube channels that teach the mathematics required for the readings. Many of the readings cover advanced concepts in discrete mathematics, so one should already have learned the fundamentals before attempting these. All of these resources were used for learning, and many served as references for this paper. They are ordered as they appear across the text.

Formal Language Theory

Automata Theory

Turing Machines

Compiler Theory

Computability and Complexity Theory

Overview of Cybersecurity

How to Become a Hacker

Category Theory

Category Theory for Computer Scientists

Learn Quantum Mechanics for Quantum Computing

Categorical Quantum Mechanics

YouTube Channels:

3Blue1Brown

TrevTutor

EasyTheory

References

[1] Merriam-Webster Dictionary, “Computer,” Merriam-Webster. [Online]. Available Here. [Accessed Aug. 30, 2020].

[2] A. M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Reading, 1936. [E-book] Available Here.

[3] F. Olito, “The evolution of computers since the 1930s,” Insider, Sept. 13, 2019. [Online]. Available Here. [Accessed Aug. 30, 2020].

[4] M. R. Swaine, “Harvard Mark I,” Britannica. [Online]. Available Here. [Accessed Aug. 30, 2020].

[5] National Geographic, “Sep 9, 1947 ce: World’s first computer bug,” National Geographic. [Online]. Available Here. [Accessed Aug. 30, 2020].

[6] The University of Rhode Island, “History of computers,” University of Rhode Island. [Online]. Available Here. [Accessed Aug. 30, 2020].

[7] Britannica, “Personal computer: Faster, smaller, and more-powerful pcs,” Britannica. [Online]. Available Here. [Accessed Aug. 31, 2020].

[8] GCF Global, “Inside a computer,” Goodwill Community Foundation, Inc. [Online]. Available Here. [Accessed Aug. 31, 2020].

[9] WGU, “5 most popular operating systems,” Western Governors University, October 2, 2019. [Online]. Available Here. [Accessed Aug. 31, 2020].

[10] S. Ghosh, “The compilation process,” Medium, May 7, 2018. [Online]. Available Here. [Accessed Aug. 27, 2021].