Introduction to Software History

 

First Steps

This part will be different from the History of the Computer: no chronological travel through software-land, but a collection of articles and essays on software.

 

Software has a long history, and as far as the facts are known to us we will give them to you. When missing stories, data, or other information are shared with us they will be put on this site. If you have any comments or suggestions regarding this page or any other page, please do not hesitate to contact us.

 

A simple question: “What is software?” A very simple answer: hardware you can touch, software you can’t. But that is really too simple.

 

When talking about software you talk about programming and programming languages, but also about producing and selling the products made with those programming languages.

 

There are over 300 different (“common”) computer languages in existence, apart from the various dialects stemming from them. Most of them can be classified into definable groups, but others don’t belong to any group. Some because they are rather new, or because their use never became widespread and they are only used by small groups of specialized professionals or scientists who require them. This is often the case with a language that was designed for just one purpose, e.g. telecommunication or supercomputing.
Some languages are even dead languages, some others are revived and expanded upon again, and there are ones that constantly rejuvenate. In the latter case a programmer sometimes wonders whether he or she is not just upgrading to a newer version but instead learning a completely new language.

 

How It All Started

It shouldn’t be a big surprise that the creation of software also went in large but distinguishable steps. Compared with hardware there were fewer developments that ran parallel or overlapped. In rare cases developments were reinvented, sometimes because the development or invention was not published or was even prohibited from being made public (war, secrecy acts etc.), or because it became known at the same time elsewhere and, after (legal) discussions, the “other” party won the honors.

 

The earliest practical form of programming was probably done by Joseph Marie Jacquard (1804, France). He designed a loom that performed predefined tasks through punched cards fed into a reading contraption. This new technology allowed carpets and fabrics to be manufactured with less skill and even with fewer people. The little kid sitting under the loom changing rods and other things vanished; one single person could now handle a loom. That this met resistance from the weavers leaves no question. The same thing happened in England during the industrial revolution there. A movement even came up, the Luddites (anti-technology, or just concerned workers fighting for their bread?).

 

 

This picture shows the manufacturing of punched cards for looms

 

The technology of punched cards would later be adopted by Herman Hollerith’s Tabulating Machine Company, a forerunner of IBM, to process data.

 

The situation was still a one-on-one game: a problem needed to be solved, thus a machine was built (Pascal, Babbage, Scheutz & Son). And when some sort of instruction was needed, a sequence was designed or written and transferred to either cards or mechanical aids such as wires, gears, shafts, actuators etc. To call that programming? Well, according to our definition, yes it was.

 

First there was Ada Lovelace, writing a rudimentary program (1843) for the Analytical Engine designed by Charles Babbage, but the machine never came into operation.

 

Then there was George Boole (1815-1864), a British mathematician, who proved the relation between mathematics and logic with his algebra of logic (BOOLEAN algebra or binary logic) in 1847.

 

This meant a breakthrough for mathematics. Boole was the first to prove that logic is part of mathematics and not of philosophy.

 

A big step in thinking too.

 

But it would take one hundred years before this algebra of logic was put to work for computing.

 

John von Neumann, working at the Institute for Advanced Study, developed in 1945 two important concepts that directly affected the path of computer programming languages:

 

The first concept became known as “shared-program technique” (7). This technique states that the actual computer hardware should be simple and not need to be hand-wired for each program. Instead, complex instructions should be used to control the simple hardware, allowing it to be reprogrammed much faster.(8)
The second concept was also extremely important to the development of programming languages. Von Neumann called it “conditional control transfer”(7) (www.softlord.com). This idea gave rise to the notion of subroutines, or small blocks of code that could be jumped to in any order, instead of a single set of chronologically ordered steps for the computer to take. The second part of the idea stated that computer code should be able to branch based on logical statements such as IF (expression) THEN, and to loop, as with a FOR statement. “Conditional control transfer” gave rise to the idea of “libraries,” which are blocks of code that can be reused over and over.(8)

The “First Draft of a Report on the EDVAC”
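Both ideas are easier to see in a modern notation. As a hedged illustration only (C did not exist in 1945, and the helper name square is invented for this sketch), the fragment below shows a reusable subroutine, a conditional branch and a loop:

/* vonneumann_ideas.c - branching, looping and a reusable subroutine */
#include <stdio.h>

/* a small reusable block of code: a "subroutine" */
static int square(int n)
{
    return n * n;
}

int main(void)
{
    int i;

    for (i = 0; i < 5; i++) {              /* looping, as with a FOR statement */
        if (square(i) > 10) {              /* conditional branch: IF ... THEN  */
            printf("%d squared is larger than 10\n", i);
        } else {
            printf("%d squared is %d\n", i, square(i));
        }
    }
    return 0;
}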

 

What was first: software or hardware?

Frankly this is a matter of philosophy, or simpler: how you look at it.

Nonetheless this question is difficult to answer:

With the early computers “the idea” did not express itself in software alone, nor in hardware alone, but was broadly interpreted: computing was problem-based.
Remember that the first computers were designed for just one single task. One problem – one computer or machine or contraption.

The idea of autonomous software – something that could theoretically “run” by itself on any machine – came only after these single-task computers were already history.

That is why it can be said:

The first computers that were built represented at the same time the software as well as the hardware.

More precisely: one could build a machine to have it solve a problem automatically. By doing that you translated an idea into a mechanical expression that ran the built-in problem-solving mechanism by itself.

So this question cannot be answered when you regard software as a non-integrated, standalone idea or resource. In that case: hardware was first.

But for the hardware to run you needed an idea or problem to be translated into a mechanical expression; in other words, software solidified into a machine. In this case software was first.

 
 

 

(the following paragraphs are cited from http://www.princeton.edu/~ferguson/adw/programming_languages.shtml)

 

It took Claude Shannon (1916-2001), who wrote a thesis on how binary logic could be used in computing (and later the paper A Mathematical Theory of Communication, Bell System Technical Journal, 1948), to complete the software concept of modern computing.

 

In 1949, a few years after Von Neumann’s work, the language Short Code appeared. It was the first computer language for electronic devices and it required the programmer to change its statements into 0's and 1's by hand. Still, it was the first step towards the complex languages of today. In 1951, Grace Hopper wrote the first compiler, A-0. A compiler is a program that turns the language’s statements into 0's and 1's for the computer to understand. This led to faster programming, as the programmer no longer had to do the work by hand.(9)

 

In 1957, the first of the major languages appeared in the form of FORTRAN. Its name stands for FORmula TRANslating system. The language was designed at IBM for scientific computing. The components were very simple, and provided the programmer with low-level access to the computer’s innards. Today, this language would be considered restrictive as it only included IF, DO, and GOTO statements, but at the time, these commands were a big step forward. The basic types of data in use today got their start in FORTRAN; these included logical variables (TRUE or FALSE), and integer, real, and double-precision numbers.

Though FORTRAN was good at handling numbers, it was not so good at handling input and output, which mattered most to business computing. Business computing started to take off in 1959, and because of this, COBOL was developed. It was designed from the ground up as the language for businessmen. Its only data types were numbers and strings of text. It also allowed for these to be grouped into arrays and records, so that data could be tracked and organized better. It is interesting to note that a COBOL program is built in a way similar to an essay, with four or five major sections that build into an elegant whole. COBOL statements also have a very English-like grammar, making it quite easy to learn. All of these features were designed to make it easier for the average business to learn and adopt it.

In 1958, John McCarthy of MIT created the LISt Processing (or LISP) language. It was designed for Artificial Intelligence (AI) research. Because it was designed for such a highly specialized field, its syntax has rarely been seen before or since. The most obvious difference between this language and other languages is that the basic and only type of data is the list, denoted by a sequence of items enclosed by parentheses. LISP programs themselves are written as a set of lists, so that LISP has the unique ability to modify itself, and hence grow on its own. The LISP syntax was known as “Cambridge Polish,” as it was very different from standard Boolean logic (Wexelblat, 177) :

x V y – Cambridge Polish, what was used to describe the LISP program
OR(x,y) – parenthesized prefix notation, what was used in the LISP program
x OR y – standard Boolean logic

 

LISP remains in use today because of its highly specialized and abstract nature.

The Algol language was created by a committee for scientific use in 1958. Its major contribution is being the root of the tree that has led to such languages as Pascal, C, C++, and Java. It was also the first language with a formal grammar, known as Backus-Naur Form or BNF (McGraw-Hill Encyclopedia of Science and Technology, 454). Though Algol implemented some novel concepts, such as recursive calling of functions, the next version of the language, Algol 68, became bloated and difficult to use (www.byte.com). This led to the adoption of smaller and more compact languages, such as Pascal.

Pascal was begun in 1968 by Niklaus Wirth. Its development was mainly out of necessity for a good teaching tool. In the beginning, the language designers had no hopes for it to enjoy widespread adoption. Instead, they concentrated on developing good tools for teaching such as a debugger and editing system and support for common early microprocessor machines which were in use in teaching institutions.

Pascal was designed with a very orderly approach: it combined many of the best features of the languages in use at the time, COBOL, FORTRAN, and ALGOL. While doing so, many of the irregularities and oddball statements of these languages were cleaned up, which helped it gain users (Bergin, 100-101). The combination of features, input/output and solid mathematical features, made it a highly successful language. Pascal also improved the “pointer” data type, a very powerful feature of any language that implements it. It also added a CASE statement, that allowed instructions to branch like a tree in such a manner:

CASE expression OF
  possible-expression-value-1:
    statements to execute
  possible-expression-value-2:
    statements to execute
END

 

Pascal also helped the development of dynamic variables, which could be created while a program was being run, through the NEW and DISPOSE commands. However, Pascal did not implement dynamic arrays, or groups of variables, which proved to be needed and led to its downfall (Bergin, 101-102). Wirth later created a successor to Pascal, Modula-2, but by the time it appeared, C was gaining popularity and users at a rapid pace.

C was developed in 1972 by Dennis Ritchie while working at Bell Labs in New Jersey. The transition in usage from the first major languages to the major languages of today occurred with the transition between Pascal and C. Its direct ancestors are B and BCPL, but its similarities to Pascal are quite obvious. All of the features of Pascal, including the new ones such as the CASE statement are available in C. C uses pointers extensively and was built to be fast and powerful at the expense of being hard to read. But because it fixed most of the mistakes Pascal had, it won over former-Pascal users quite rapidly.
Ritchie developed C for the new Unix system being created at the same time. Because of this, C and Unix go hand in hand. Unix gives C such advanced features as dynamic variables, multitasking, interrupt handling, forking, and strong, low-level, input-output. Because of this, C is very commonly used to program operating systems such as Unix, Windows, the MacOS, and Linux.

In the late 1970′s and early 1980′s, a new programming method was being developed. It was known as Object Oriented Programming, or OOP. Objects are pieces of data that can be packaged and manipulated by the programmer. Bjarne Stroustrup liked this method and developed extensions to C known as “C With Classes.” This set of extensions developed into the full-featured language C++, which was released in 1983.

C++ was designed to organize the raw power of C using OOP, but maintain the speed of C and be able to run on many different types of computers. C++ is most often used in simulations, such as games. C++ provides an elegant way to track and manipulate hundreds of instances of people in elevators, or armies filled with different types of soldiers. It is the language of choice in today’s AP Computer Science courses.

In the early 1990′s, interactive TV was the technology of the future. Sun Microsystems decided that interactive TV needed a special, portable (can run on many types of machines), language. This language eventually became Java. In 1994, the Java project team changed their focus to the web, which was becoming “the cool thing” after interactive TV failed. The next year, Netscape licensed Java for use in their internet browser, Navigator. At this point, Java became the language of the future and several companies announced applications which would be written in Java, none of which came into use.

Though Java has very lofty goals and is a text-book example of a good language, it may be the “language that wasn’t”. It has serious optimization problems, meaning that programs written in it run very slowly. And Sun has hurt Java’s acceptance by engaging in political battles over it with Microsoft. But Java may wind up as the instructional language of tomorrow as it is truly object-oriented and implements advanced techniques such as true portability of code and garbage collection.

Visual Basic is often taught as a first programming language today as it is based on the BASIC language developed in 1964 by John Kemeny and Thomas Kurtz. BASIC is a very limited language and was designed for non-computer science people. Statements are chiefly run sequentially, but program control can change based on IF..THEN, and GOSUB statements which execute a certain block of code and then return to the original point in the program’s flow.

Microsoft has extended BASIC in its Visual Basic (VB) product. The heart of VB is the form, or blank window on which you drag and drop components such as menus, pictures, and slider bars. These items are known as “widgets.” Widgets have properties (such as its color) and events (such as clicks and double-clicks) and are central to building any user interface today in any language. VB is most often used today to create quick and simple interfaces to other Microsoft products such as Excel and Access without needing a lot of code, though it is possible to create full applications with it.

Perl has often been described as the “duct tape of the Internet,” because it is most often used as the engine for a web interface or in scripts that modify configuration files. It has very strong text matching functions which make it ideal for these tasks. Perl was developed by Larry Wall in 1987 because the Unix sed and awk tools (used for text manipulation) were no longer strong enough to support his needs. Depending on whom you ask, Perl stands for Practical Extraction and Reporting Language or Pathologically Eclectic Rubbish Lister.

Programming languages have been under development for years and will remain so for many years to come. They got their start with a list of steps to wire a computer to perform a task. These steps eventually found their way into software and began to acquire newer and better features. The first major languages were characterized by the simple fact that they were intended for one purpose and one purpose only, while the languages of today are differentiated by the way they are programmed in, as they can be used for almost any purpose. And perhaps the languages of tomorrow will be more natural with the invention of quantum and biological computers.
(end of quoted text)

Also the hardware needed to make jumps ahead, like the following:

  • It took Konrad Zuse to create the first binary programmable computer, relay based.
  • The Bomba, originally built by Polish cryptologists to crack the Enigma code, pushed the envelope again.
  • The Colossus, built by the people at Bletchley Park (near London, UK) for codebreaking as well.
  • Atanasoff and Berry designed the ABC computer, a binary computer like Zuse’s, but now 100% electronic.
  • And not to forget the ENIAC by Eckert and Mauchly and a team made up of many others.

Now things were in place to start off with the information age.

 

Enter the Information Age

At the beginning of the so-called “Information Age” computers were programmed by “programming” direct instructions into them. This was done by setting switches or making connections to different logical units by wires (circuitry).

(1)
Two women wiring the right side of the ENIAC with a new program
(US Army photo, from archives of the ARL Technical library, courtesy of Mike Muuss)

 

Programming like this was nothing else but rewiring these huge machines in order to use all the options, possibilities and calculations. Reprogramming always meant rewiring.

 

In that way calculations needed days of preparation: handling thousands of wires, resetting switches, plugs etc. (in the most extreme case, that is), while the programmed calculation itself took just a few minutes – provided the wiring did not contain a wrong connection (the word “bug” was not yet in use for programming errors).
The coding panels looked very much like a few telephone switchboards pushed together, and in fact many parts actually came from switchboards.

 

With the invention of the vacuum tube, along with many other inventions, much of the rewiring belonged to the past. Machines based on tubes replaced the slow machines based on relays.

 

The pictures above show an early electronic switch, the triode vacuum tube, and the first point-contact transistor.

 

When the transistor was invented this again replaced a technology: vacuum tubes.

 

When Shannon reinvented, or rather rediscovered, binary calculus in 1948 and indicated how it could be used for computing, a revolution started. The race was on!

 

Programming the new tools (or toys?)

Programming the first binary computers was still not an easy task and very prone to mistakes. At first, programming was done by entering 1's and 0's that were stored on different information carriers: paper tapes, punched cards, mercury delay lines (acoustic or electric pulses), and later magnetic drums and, much later, magnetic and optical discs.

 

By storing these 0's and 1's on a carrier (first used for Konrad Zuse’s Z1 in 1938) it was possible to have the computer read the data back at any later time. But mistyping a single zero or one meant disaster, because all coding (instructions) had to be in absolutely the right place and in the right order in memory. This technique was called absolute addressing.

 

An example:

1010010101110101011

If this is a sequence for switches it means switch one on, switch two off etc. etc.

In simple language:

Panel 1 – function: enter house
Switch 0 = 1   open the door
Switch 1 = 1   put the lights on
Switch 2 = 0   close the door (please)

 

In fact the protocols for programming the machines in this way looked very much like that.

In the early 50's programmers started to let the machines do a part of the job. This was called automatic coding and made life a lot easier for the early programmers.

 

Absolute addressing:

the programmer instructs the machine at what location of the memory (valve, relay, transistor) to store a value

Soon the next step was to have the program select the proper memory address instead of using absolute addressing.

 

The next development was to combine groups of instructions into so-called words, and abbreviations for them were thought up, called opcodes (Hopper 1948).

 

Machine Language

An opcode works like shorthand and represents, as said, a group of machine instructions. The opcodes are translated by another program into zeros and ones, something a machine can translate into instructions.

 

But the relation is still one-to-one: one code for one single instruction. However basic, this is already a programming language. It was called assembly language.

An example:

Label   Opcode  Operands
CALC:   STO     R1, HELP1      ; save the first number
        STO     R2, HELP2      ; save the second number
        LD      R3, HELP1
        ADD     R3, HELP2      ; R3 = first + second (the sum)
        LD      R4, HELP1
        SUB     R4, HELP2      ; R4 = first - second (the difference)
        RSR     SP, 0          ; return from subroutine
HELP1:  DS      2              ; reserved storage
HELP2:  DS      2

This piece of assembly code calculates the sum and the difference of two numbers.

 

Subroutines

Soon after machine languages and the first crude programming languages began to appear, the danger of inextricable and thus unreadable coding became apparent. Later this messy programming was called “spaghetti code”.
One important step in unraveling or preventing spaghetti code was the development of subroutines. It took Maurice Wilkes, realizing that “a good part of the remainder of his life was going to be spent in finding errors in … programs”, to develop the concept of subroutines in programs and so create reusable modules. Together with Stanley Gill and David Wheeler he produced the first textbook on “The Preparation of Programs for an Electronic Digital Computer”.(6)
The formalized concept of software development (not named so for another decade) had its beginning in 1951.

 

Below is an example, in pseudocode, of how subroutines would work. The program starts in Main (the main “menu”), which calls the subroutine Printf twice, with a parameter (the contents of what to print), and the subroutine DoSomethingElse once; each subroutine returns to the main program when it is done.

Begin program;

Main;

Printf (“Hello World”);
DoSomethingElse()
Printf (“Hello World”);

(end of program)

Function DoSomethingElse;

Add two numbers;

Return OK

Function Printf(what_to_print)

Open channel to printer interface;
Initialize printer;
Send “what_to_print” to printer;
Send page feed to printer;
Close printer interface;

Return OK

 

This program would print “Hello World” twice on two different pages.

By re-using the Printf subroutine, a possible error in this routine has to be found and fixed in only one place – an enormous advantage when looking for errors. Of course the Open, Initialize, Send, and Close “commands” in this Printf function are also subroutines.
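As a modern, hedged illustration only (the names print_line and add_two_numbers are invented for this sketch, and it writes to the screen instead of a printer), the same structure could be written as a small runnable C program:

/* subroutines.c - "Hello World" printed twice via one reusable subroutine */
#include <stdio.h>

/* the reusable subroutine, with a parameter: what to print */
static void print_line(const char *what_to_print)
{
    printf("%s\n", what_to_print);
}

/* another subroutine: adds two numbers and returns the result */
static int add_two_numbers(int a, int b)
{
    return a + b;
}

int main(void)
{
    print_line("Hello World");        /* first call of the subroutine        */
    (void)add_two_numbers(2, 2);      /* "do something else"                 */
    print_line("Hello World");        /* re-use: a fix in print_line         */
                                      /* automatically benefits both calls   */
    return 0;
}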

 

Fortran

The next big step in programming began when an IBM team under John W. Backus created FORTRAN (FORmula TRANslator), starting in the early 1950's. At first it could only be used on their own machine, the IBM 704, but versions for other machines and platforms were sold soon after. Until long past 1960 different CPUs required a different instruction set to add a number, so for each different machine a different compiler was needed. Typically, the manual only came years later, in 1957!
Rewiring of machines to reprogram them now definitely belonged to the past!

 

Programming language

FORTRAN soon came to be called a programming language. So why call a set of predefined instructions a programming language?

 

Because some characteristics of a language are met:

  • It must have a vocabulary – list of words
  • It must have a grammar – or syntax
  • It must be unique, both in words and in context

 

All the above criteria were easily met for this – strictly defined- set of computer instructions.

 

An example:

Let’s presume communication with a computer can be accomplished. Then how would you tell it to add two numbers in simple terms?

human                    computer
Add 2 and 2
Show me the answer       print 2+2

Depending on what (dialect of) computer language you use it could look different:

human                    computer
Add 2 and 2              answer = 2 + 2;
Show me the answer       printf("%d\n", answer);

The above example is standard ANSI C.
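For completeness, a minimal sketch of how that fragment fits into a whole compilable C program (the declaration of answer and the surrounding main function are additions for the sake of the example):

/* add_two.c - the "add 2 and 2" dialogue as a complete program */
#include <stdio.h>

int main(void)
{
    int answer;

    answer = 2 + 2;             /* "Add 2 and 2"        */
    printf("%d\n", answer);     /* "Show me the answer" */
    return 0;
}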

 

And by the time COBOL, the Common Business Oriented Language, was published in 1960 by the CODASYL committee (Hopper was a member), the term “computer language” was a fact.

 

In the meantime hardware developments raced literally ahead.

 

Computers were already being connected to teletype machines to expedite the entry of programs. In the late 1960's the first video interfaces were connected. The network was invented. And floppy disks, hard drives etc. made life for programmers a lot easier.

 

As mentioned, you could not simply run FORTRAN on any machine. It had to be rewritten for each particular machine if the type of processor was different, and in those early days ALL types were different. This did not promote the development of programming at all!

 

Enter “C”

C came into being in the years 1969-1973 and was developed by Dennis Ritchie, together with Brian Kernighan, both working at Bell Laboratories.(3)

 

And the magic word was portability.

 

Parallel to the development of computer languages, Operating Systems (OS) were developed. These were needed because having to write all machine-specific instructions over and over again for every program was a waste of time.
By introducing the OS ‘principle’, almost all input and output tasks were taken over by the OS.

 

Such as:

  • writing data to and from memory,
  • saving data on disks,
  • writing data to screens and printers,
  • starting tapes,
  • refreshing memory,
  • scheduling specific tasks
  • etcetera, etcetera.

 

(More on operating systems in another chapter)

As the common computer languages had trouble being translated from one machine to another, the operating systems had to take the same hurdle every time a new machine was developed.

 

The need and pressure for a common portable language was enormous.

 

There were some unsuccessful projects aimed at solving this problem, like Multics, a joint venture of MIT, General Electric, and Bell Labs, and other developments at departments of defense in different countries. But they all either came too late or became too complex to succeed.

 

But the demise of Multics inspired Dennis Ritchie and Brian Kernighan to develop C in 1972. This was, and still is, a very strict language that stays close to the machine’s internal logic and structure, if you are allowed to put it that way, yet is reasonably easy for humans to read and understand. Because of this combination the language is fast and compact, and it became very popular amongst system programmers and commercial software manufacturers.

 

With that language they also developed UNIX, a generic operating system.

 

The power of C was that the language had a small base (vocabulary) but leaned heavily on what were called libraries. Libraries contain machine-specific instructions to perform tasks, like the OS does. These libraries were the only parts that had to be redesigned for different machines or processor families. But, and that was C’s strength, the programming interface/language remained the same. Portability was born: source code could be reused and only had to be recompiled when it had to run on other machines.

 

A classic example of C, printing “Hello World” on your screen:

 

/* helloworld.c */

#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

In another chapter this routine will be shown in various other languages to make the differences between them visible.

Soon other language manufacturers jumped on the bandwagon and the software industry leaped ahead. The industry took off like a rocket!

 

Artificial Intelligence

Back in the dark days of history many clerics and scientists designed, or tried to design, some kind of artificially intelligent being, or whatever mimicked it closely, mainly to fool their followers – and in a way that purpose was fulfilled. But the keyword here is Artificial Intelligence (AI). This is, in short, the Holy Grail of all that has to do with computing. Even Leonardo da Vinci, who probably created the first human-like automaton, pursued this “stone of wisdom”.
Asimov’s favorite subject in his books was humanoids (robots) too. Robots have remained remarkably popular in films and TV series even to this day (e.g. Mr. Data of the Star Trek series). And who does not remember good old Frankenstein with his recreation of a human ‘being’.
What drives the development of AI at present is the demand that all new gadgets have to be smart, adaptive, responsive, intuitive, and so on. It puts a serious strain on the developers to come up with a solution.

 

The secret to all this is AI, but to make it a reality new programming languages were needed. Strangely enough these languages – languages that could mimic intelligence – were developed in parallel with the other languages.
One of the first and best known is LISP, developed in 1958/9 by the Artificial Intelligence Group at MIT under John McCarthy (McCarthy also coined the term AI). Lisp is used in so-called expert systems: you ask, the program answers. It is also used in situations where lots of data have to be interpreted, like chess programs.

example of lisp:

;;; HWorld.lsp

;;; ========================== ;;;
;;; = hello world simulation = ;;;
;;; ========================== ;;;

(DEFUN HELLO ()
  "HELLO WORLD")

 

Other examples of languages that are used in the AI field are: Prolog (PROgramming in LOGic, 1970, Alain Colmerauer)

example of prolog:

% the main program

hello :-
    nl,
    write('Hello world!').

 

Smalltalk (1979), Algol (1960) and, to a lesser extent, Simula (1967).

 

 

Enter “OOP”

This is about rewriting routines and programming functionality over and over again for each different part of a program, and for each new program again and again. There was a need for common shared parts that acted on instructions like a black box. So, after formal software development had been under way for some years, it was strongly felt that portability was one thing but reusability was another. And history repeats itself: what was written about subroutines above is also true for this idea of black boxes. OOP was introduced with the development of Smalltalk (in 1979) and became known as the Object Oriented Programming method, another significant step in the right direction.
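Full OOP needs classes, inheritance and more, which are outside the scope of this page; still, as a hedged sketch of the “black box” idea only (the Counter type and its routines are invented for this illustration), bundling data with the routines that act on it can even be mimicked in plain C:

/* counter.c - a reusable "black box": callers use the routines, not the insides */
#include <stdio.h>

typedef struct {
    int value;          /* the hidden state of the black box */
} Counter;

static void counter_reset(Counter *c)     { c->value = 0; }
static void counter_increment(Counter *c) { c->value = c->value + 1; }
static int  counter_get(const Counter *c) { return c->value; }

int main(void)
{
    Counter clicks;

    counter_reset(&clicks);
    counter_increment(&clicks);
    counter_increment(&clicks);
    printf("clicks: %d\n", counter_get(&clicks));   /* prints: clicks: 2 */
    return 0;
}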

There is a formal definition of OOP, but that goes beyond the scope of this paper.

To read more about this subject go to the next chapter: object oriented programming a primer

 

New Methods required

Programming became an 80-hour-a-week job. Debugging took as long as creating the software. Too much, some must have thought: let’s do something about it and make some more business.

 

In the late 1980's Graphical User Interfaces were created by the same manufacturers that made languages and tools like C compilers, Delphi, Clipper/VO and others, to expedite the creation of software. Though this kind of interface stemmed from as early as the 1960's, the idea never took off until the early 1990's.

 

Having a graphical interface is marvelous, but there must also be a way to put it to use and thus interactively build software. The answer was RAD: Rapid Application Development.

 

A drag-and-drop interface takes a lot of the handwork out of our hands: connecting to a database, getting data from it and putting data into it, interfacing with one or more users, sharing devices and resources, load balancing between multiple processors, networks and so on. Point-and-click interfaces were created, and with a few button clicks and some property forms filled in by the programmer a complete new program could be created.

 

Thus far the results are not very satisfactory, mainly due to project definitions and organizational problems, but improvements are on their way.

 

OOP was and is very much part of the RAD method.

 

And the future? Who dares to tell. Everything is changing so fast.

But some glimpses are there:

Objects containing Artificial Intelligence, self propagating objects, interfaces allowing anybody to build applications intuitively, and so forth.

Now let’s go back to earth and explain the insides of programming a bit.

 

What is a program

Any computer only works from a set of instructions in order to execute a specific task.

A set of instructions combined into a file or object is often referred to as a program or class.

The contents of such a file is also called: source code or logic.

Depending on the programming language and the syntax the source code is written in, the sources are optimized and translated. The product of this translation is what is often called machine code.

 

After the source code is compiled on a certain machine, the resulting code can most certainly not be read by a normal person – or rather, by almost nobody.

 

This process we call compiling: in other words, the source code is translated into something that can be understood by a computer’s CPU.

 

With most modern languages there is a single compilation run resulting in machine language, and the CPU does not have to interpret an intermediate (also called opcode) result:

the source code (readable) is translated into machine code that can be executed directly by a computer’s processor.

 

Interpretation can be seen as if your own mother tongue were translated into Esperanto. This language will not be understood by the computer, but the computer knows how to translate Esperanto into its own language: machine code.

 

However, translation can happen at different levels: from a single pass that produces an intermediate result not directly usable by the CPU (also called opcode), through various transitional stages of compilation, up to source code that is translated into 100% “computer language”.

 

This means: a self contained running program called an “executable”.

 

This translation or compilation can either be done by software – as explained above – or “hardware”.

 

An example of the first method, interpretation, is when the computer reads the intermediate code and translates it while running the program; the early forms of BASIC were such an example.
The latter method is mostly an EPROM mounted on a computer board containing instructions (microcode).
As an example, look at how Occam – a programming language for parallel processors – was implemented: it was integrated in a chip mounted on a board.

 

Again: what is a program?

Some explain it like this:

A program is a sequence of instructions that can be executed by a computer (CPU) that performs a certain task intentioned by the programmer

Another one:

Within a program actions are defined that should be executed by a computer. These actions are put in a predefined order: a logical path. And, if done correctly, in such a way that a fast, optimal and efficient execution is assured.

Sounding more scientific, surely more words, but with the same message.

 

Creation of a program

In the beginning days of software development the situation was:

There was no client

Or rather, the end user, analyst, designer and programmer were all one and the same: being one’s own client. There was no communication other than with coworkers, let alone that people outside your group understood what you were talking about – if you were allowed to talk about it at all.

Traditionally programs are made in a series of production phases starting with Analysis:

This sequence can also be seen as the life cycle of a program. As each production phase is a cycle in itself, each cycle also has its own end product.

  • The Analysis delivers a report and Design creates both Functional and Technical reports, the specifications of a program.
  • The Creation cycle produces the program (finally)
  • The Testing cycle tells the programmers what was either not according to specifications or resulted in errors.
  • After implementation (introduction and installation of the program at the user’s workplace) software starts a production cycle.
  • If the company or department is doing a good job (after sales) an evaluation cycle follows after the program has gone into production. From that evaluation complaints, suggestions, errors and new user requirements are used for a new Analysis cycle: a so called MoSCoW list is created and used in the Analysis cycle.

 

MoSCoW means: Must have, Should have, Could have and Want to have (wish list).

Though testing is historically underestimated it is almost as important as creating the software. When not tested a lot of bugs (errors) will show up during the implementation phase and later production. The customer will not be amused.

 

More detail in another chapter: Stages of program development  (under construction)

 

To make a program suitable for a particular CPU (processor), the source code, as explained above, is compiled. The result of that process is called an executable. This executable cannot run on another machine with a different type of CPU (e.g. Motorola, Intel, PA-RISC, Sun SPARC …).

 

Why?

Because the instructions to add two numbers can be as different for different processors (the real computer) as Chinese and English
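As a simplified, hedged illustration (the instruction forms below are typical examples, not actual output from any particular compiler), the same C statement can end up as quite different machine instructions on different processors:

/* The same statement in C:            c = a + b;
 *
 * might roughly become:
 *   on an x86 processor:              addl  %esi, %edi
 *   on an ARM (AArch64) processor:    add   w0, w0, w1
 *
 * Different instruction sets, different encodings - hence the need for a
 * separate compilation per processor family.
 */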

 

But for some programming languages, a recompilation on a different machine with a different CPU (target) will do the job. Simple as that. The new executable will be able to run on that particular type of machine. The computer language C is an excellent example of this technique. The how and why will be explained on a separate page dealing with C, and in context with other languages below.

 

This process to carry over source code and recompile it on different machines (CPU’s) is called: “Porting”, to bring from one platform to another.

 

There are many forms of compilers and interpreters; over 3000 have been identified so far. See the languages index for a list of the most common ones.
For a user the outer differences between the various languages and their dialects or variants are sometimes vague, to say the least.

 

The development of languages is still changing day by day, but at the same time they are becoming more and more accessible to people other than hard-core programmers.
In the late nineties visual interfaces were designed to take away the chores of setting up projects, linked lists, memory management, and the like.
At the beginning of the third millennium interfaces are made in such a way that you “almost” do not need knowledge of a programming language to be able to make a program.

 

Now and again scientists claim that in fifteen years or so we will no longer need programmers.
But then who writes the programs that make the programs that make programming superfluous? The classic princess-in-the-tower parable.
Nonetheless, in the far future you may expect only to have to define your needs and interfaces to create a program.
In the meantime, creating such interfaces is beneficial to both manufacturers (more clients) and users (ease of creating projects), and that in turn stimulates the creation of easier-to-understand computer languages, or so-called interfaces.

Interpreters and Compilers

As said before, there are two methods to “translate” source code into computer language.

The first one is called INTERPRETER and is also the oldest type.

The second kind is called COMPILER, meaning something like ‘composer’.

And it would not be human if there were no exceptions: there are some other methods, sometimes combining interpreters and compilers. Many old hands still know GFA BASIC from their early hacking days; GFA BASIC was, and I believe still is, available for the Amiga, Atari and IBM compatibles.

These hybrid languages turn the source code into a kind of Esperanto (intermediate code) that is then translated for the CPU.

 

Interpreters

An interpreter is a program that executes the instructions of another program, written in a computer language, directly from its source.

 

The trick of an interpreter is that it loads the source code and translates the instructions into executable machine language line by line, and it does this over and over again every time the program is run.
In order to run a program you first have to load the interpreter (e.g. “load BASIC”), then load the source code into the computer (“read spaceinv”) and then type “run” to execute the program.

 

Because of all this an interpreter is relatively slow.

 

There are interpreters that generate machine code for the entire program before it is executed – one of the hybrid forms mentioned above. The processing time is shorter and the program runs faster.

 

This method is only useful if the original source code is large enough to compensate for the time lost translating source code into object code and then into machine code, because this process too is repeated every time the program is loaded.

 

Another disadvantage is that the user needs to have a (legal?) copy of that specific interpreter. That can be a setback for the user, because such interpreters are mostly expensive. For the designer of the program this can also be a disadvantage: he or she simply gives away the trade secrets and no longer has control over the program, its distribution, or the prevention of unwanted adaptations.

 

At the end of the second millennium companies started to give away the source code of their operating systems (Sun’s Solaris), or an operating system was freely available from the beginning, like Linux, or the operating system is simply given away (BeOS).

 

Compilers

In short a COMPILER is: the translator of the source code into computer language.

  • First the source code is read into the computer’s memory
  • Then it will be translated into a kind of in-between code: OBJECT CODE
  • A program will have many different objects and libraries that need to be linked together into one (large) executable

 

The object code (modules) created from the source code files by a compiler is not yet usable by the computer – though that depends on what kind of system you use.

 

The object code contains information not only about the instructions given by the programmer, but also instructions for the computer about memory allocation and references to external locations and subroutines (libraries).

 

Object modules cannot be used by a computer as they are, because of missing pieces and unresolved dependencies between the modules.

 

To get everything straight a so-called LINKER is used. The linker accepts one or more object modules, resolves external references and binds the various necessary routines from libraries into the program. Finally it relocates memory blocks inside the program so that one piece will not overwrite another part of the program in memory.

 

Finally everything is neatly compiled and will be written to a disk or other form of permanent memory.

 

The result is an executable file or program.

The now ready-made program can be loaded and executed independently by the operating system.
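As a hedged sketch of these steps with a typical Unix C compiler (the file names are invented and the exact commands differ per system), two source files are first compiled into object modules and then linked into one executable:

/* calc.c - one module */
int add(int a, int b)
{
    return a + b;
}

/* main.c - another module; the reference to add() stays unresolved in
   main.o and is filled in later by the linker */
#include <stdio.h>

int add(int a, int b);    /* declaration only; the definition lives in calc.c */

int main(void)
{
    printf("%d\n", add(2, 2));
    return 0;
}

/* typical build steps:
 *   cc -c calc.c              produces the object module calc.o
 *   cc -c main.c              produces the object module main.o
 *   cc main.o calc.o -o calc  the linker resolves the reference to add(),
 *                             binds in the needed library routines and
 *                             writes the executable "calc"
 */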

 

In the case of a DOS program (yes, there are many still around) the end product will usually be stored in EXE format (see the chapter on DOS). There is also a program called EXE2BIN that can convert relatively small EXEs into COM executables; these are somewhat more compact and run, to some extent, faster.

 

Standardization

A common question is:

Is there a standard version of any higher level programming language?

The answer is NO

 

However

An important problem with higher programming languages is that they are seldom portable.
This means that a program written in FORTRAN or BASIC for a specific computer will not always run on another type of computer from a different brand. Even within a certain brand this could be a problem.
The cause of all this is that most manufacturers create their own standards for programming languages. The basis is often the same, but the instructions for input and output, and the extensions manufacturers think they need, differ most of the time.
But there is hope: manufacturers of programming languages are slowly coming to their senses at the beginning of the third millennium.

 

Still, most of the sources have to be partly rewritten to make them suitable for other machines. This rewriting is often called “porting” (from ‘portable’, to carry over).

BUT

 

This is where the language C becomes important. This language has high portability because all input and output commands have the same syntax; the translation happens via code that is incorporated in so-called libraries. A library is a kind of repository of program routines, ranging from listening to the keyboard to managing memory, that are specific to the computer the program has to run on.

 

When you move the source of a C program to a different machine, the only thing that has to be done is to recompile and relink it against the I/O routines and other machine-specific functions, but now with a different library written for that specific machine. In this way (almost) no code has to be rewritten, only recompiled. This is one of the reasons C became a very important, though because of its rather technical inclination not a very popular, language.
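As a hedged example of that division of labour (the file name is invented), the source below only uses the standard I/O library interface; on each machine the compiler and linker combine it with that machine's own implementation of the library, so the source itself stays unchanged:

/* portable.c - recompiles unchanged on different machines; only the
   library code behind <stdio.h> differs per platform */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("note.txt", "w");        /* library call: open a file */

    if (f != NULL) {
        fprintf(f, "written portably\n");    /* library call: write       */
        fclose(f);                           /* library call: close       */
    }
    return 0;
}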

 

Computer Languages

One of the first instruction sets that could be seen as an easy-to-understand computer language was BASIC, an acronym (letter word) for Beginners All-purpose Symbolic Instruction Code. Until the end of the second millennium this language was almost always bundled with a new computer.

 

In the eighties a new breed of promising languages showed up: the so-called 4GLs (Fourth Generation Languages). A 4GL allows the user to define his wishes or intentions in a certain form of expression; the 4GL program or compiler then generates the final compiled program for the user.
A programmer can add a few bells and whistles but has to keep that to a minimum in order not to endanger later updating of the program.

Environments like that are called CASE (Computer Aided Software Engineering) or Workbench environments

 

True: a program developed that way does not always have user friendliness in its list of top priorities. But when managed well, a project with these tools is mostly finished within time and budget, and that cannot often be said of projects built without such tools.

 

Software Generations

First Generation

During the 1950's the first computers were programmed by changing the wires and setting tens of dials and switches, one for every bit. Sometimes these settings could be stored on paper tapes that looked like ticker tape from the telegraph (a punch tape) or on punched cards. With these tapes and/or cards the machine was told what, how and when to do something.

To have a flawless program a programmer needed to have a very detailed knowledge of the computer he or she worked on. A small mistake caused the computer to crash.

 

Second Generation

Because the first generation “languages” were regarded as very user-unfriendly, people set out to look for something else: faster and easier to understand.
The result was the birth of the second generation languages (2GL) in the mid-1950's.

This generation made use of symbols, and its translators are called assemblers.

 

An assembler is a program that translates symbolic instructions into processor instructions (see above for an example). But deep in the 1950's there was still not a single processor, but a whole assembly rack with umpteen tubes and/or relays.

 

A programmer no longer had to work with ones and zeros when using an assembly language; he or she could use symbols instead. These symbols are called mnemonics because of their mnemonic character (STO = store). Each mnemonic stands for one single machine instruction.
But assembly still works at a very low level with the machine, and for each processor a different assembler had to be written.

 

Third Generation

At the end of the 1950′s the ‘natural language’ interpreters and compilers were made. But it took some time before the new languages were accepted by enterprises.

 

About the oldest 3GL is FORTRAN (FORmula TRANslation), which was developed around 1953 by IBM. This is a language primarily intended for technical and scientific purposes. Standardization of FORTRAN started ten years later, and a recommendation was finally published by the International Organization for Standardization (ISO) in 1968.

FORTRAN 77 is now standardized

COBOL (Common Business Oriented Language) was developed around 1959 and is, as its name says, primarily used, up till now, in the business world.

With a 3GL there was no longer a need to work in symbolic machine instructions. Instead a programmer could use a programming language that more closely resembled natural language, be it a stripped version with some two or three hundred ‘reserved’ words. This is the period (roughly the late 1950's through the 1970's) in which the now well-known so-called ‘high level’ languages like BASIC, PASCAL, ALGOL, FORTRAN, PL/I and C were born.

 

Fourth Generation

A 4GL is an aid with which the end user or programmer can build an application without using a third generation programming language. Strictly speaking, knowledge of a programming language is therefore not needed.

 

The primary feature is that you do not indicate HOW a computer must perform a task but WHAT it must do. In other words the assignments can be given on a higher functional level.

 

A few instructions in a 4GL will do the same as hundreds of instructions in a lower generation language like COBOL or BASIC. Applications of 4GLs concentrate on daily tasks such as screen forms, requesting data, changing data, and making hard copies. In most of these cases one deals with Database Management Systems (DBMS).

 

The main advantage of this kind of language is that a trained user can create an application with a much shorter development and debugging time than would be possible with an older generation programming language. A customer can also be involved earlier in the project and can actively take part in the development of a system, by means of simulation runs, long before the application is actually finished.

 

Today the disadvantage of a 4GL lies more in the technical capacities of hardware. Since programs written in a 4GL are quite a bit larger, they need more disk space and demand a larger part of the computer’s memory than 3GL programs. But technologically advanced hardware becomes more available every day, though not necessarily cheaper, so in the long run these restrictions will disappear.

 

Considering the arguments one can say that the costs saved in development could now be invested in hardware of higher performance and stimulate the development of the 4GL’s.

In the 1990's the expectations of 4GL languages were too high, and their use was mainly picked up by companies like Oracle and Sun that had enough power to pull it through. In most cases, however, the 4GL environment is misused as a documentation tool and a version-control implement; in very few cases does the use of such programs increase productivity. Mostly they are only used to lay the basis for information systems, and programmers use all kinds of libraries and toolkits to give the product its final form.

 

Fifth Generation

This term is often misused by software companies that build programming environments. Till today one can only see vague contours. When one sees a nice graphical interface it is tempting to call that a fifth generation. But alas changing the makeup does not make a butterfly into an eagle.

 

Yes, some of the impressions communicated from the professional circles that are building these environments do sound promising.

 

But again: the fifth generation exists, as yet, only in the brains of those trying to design it!

Many attempts are being made, but they strand on the limitations of hardware and, strangely enough, on our views of and insight into the use of natural language. We need a different way of speaking for this!

 

But that is the direction these languages will take: no longer prohibiting the use of natural language and allowing an intuitive approach towards the program (language) to be developed.

 

The basis for this was laid in the 1990's by using sound, moving images and agents – a kind of advanced version of the macros of the 1980's.

And it is only natural that neural networks will play an important role.

Software for the end user will (maybe) be based on the principle of knowbot agents: an autonomous, self-changing piece of software that creates new agents based on the interaction between the end user and the interface. A living piece of software, you might say, in which human-like DNA/RNA (intelligent?) algorithms can play a big role.

 

Programming languages

Which high-level languages are in popular use today?

There are more than 200 other high-level languages such as PASCAL, FORTH, PL/I, LISP, SMALLTALK, APL, C/C++ and PROLOG, to name but a few. Many of these were developed for particular applications, while others are developments or improvements of existing languages. As an example, COBOL is a widely used language for business applications and has an almost English-language type of structure, whereas APL is a mathematically oriented language that uses symbols and equations rather than written words.

 

One of the first languages for general usage to be developed was BASIC, an acronym (letter word) for Beginners All-purpose Symbolic Instruction Code. This language was so widespread that, until the end of the 90's, almost no machine was sold without some kind of dialect of BASIC, be it as a stand-alone installation or incorporated into an office application like Word or Excel.

For scientific applications, though BASIC is often used, the language of choice is FORTRAN (FORmula TRANslation), which was initially developed by IBM for their large computers. Another similar language called ALGOL (ALGOrithmic Language) has also been used for scientific work but has not achieved the popularity of FORTRAN. Both FORTRAN and ALGOL use statements similar in form to those of BASIC, although their syntax rules are usually rather more complex and programs written in these languages tend to be a little more difficult to write than in BASIC.

 

The ALGOL language was favored in academic circles because it tends to be more flexible than FORTRAN and more readily allows the construction of ‘structured’ programs. A development of the ALGOL idea is the language called Pascal.

 

Pascal was developed by Niklaus Wirth (published in 1973) and gained on BASIC in importance, mainly at educational institutions. Pascal was standardized by ISO in 1983.

An improvement on PASCAL is MODULA, by the same author, which stresses the structural architecture of programming much more.

 

Languages such as FORTRAN and BASIC are primarily designed for performing calculations and evaluating equations. They are generally less effective when dealing with text or the relationships between sets of data. A number of high-level languages, such as LISP, have been developed to deal with these applications. LISP (LISt Processing language) itself has become very popular for work in the study of Artificial Intelligence (AI), where the emphasis tends to be on data relationships rather than formula evaluation.

 

LOGO is a simplified variation of the LISP language and has recently become quite popular in education. One interesting part of this language is its ‘turtle graphics’ in which instructions are given to a robot (turtle) carrying a pen which may be moved around a sheet of paper or screen to draw pictures.

 

The attraction of FORTH is that it can easily be extended as one needs. This language is highly standardized, thus there are almost no different dialects in circulation.

C was originally designed for and implemented on the UNIX operating system on the DEC PDP-11 by Dennis Ritchie. The great advantage of C lies in the fact that C is not tied to any particular hardware or system, and it is relatively easy to write programs that will run without change on any machine that supports C. C is now one of the most important languages in use on 680xx-CPU-based machines.

Java, developed by engineers at Sun, was originally intended for embedded computing and is largely based on C++.
Sun decided to give it to the world as a freely available language, though Sun is very strict in maintaining compatibility and standardization.
Since its publication Java has taken off like a rocket in popularity and installed base. Many web applications are built in Java. Front-end applications in banking, insurance, and other large companies based on centralized data processing with distributed access have their interfaces built in Java, or are in the process of getting them.

 

Most influential Computer Languages(4)

Konrad Zuse in Nazi Germany may have developed the first real computer programming language, "Plankalkül", ca. 1945 (5).

According to Sammet, over 200 programming languages were developed between 1952 and 1972, but she considered only about 13 of them to be significant.

1957 FORTRAN
1958 ALGOL
1960 LISP
1960 COBOL
1962 APL
1962 SIMULA
1964 BASIC
1964 PL/I
1966 ISWIM
1970 Prolog
1972 C
1975 Pascal
1975 Scheme
1977 OPS5
1978 CSP
1978 FP
1980 dBASE II
1983 Smalltalk-80
1983 Ada
1983 Parlog
1984 Standard ML
1986 C++
1986 CLP(R)
1986 Eiffel
1988 CLOS
1988 Mathematica
1988 Oberon
1990 Haskell
1994 Java

 

For the big list of computer languages see lang-list.txt maintained outside tHoCF.

This concludes the introduction to the history of software. See also the languages index.

For further reading: Impressions and thoughts on creating software, an essay

 

 

 

 

 

 
