Until the mid-sixties, the computer was too expensive a machine: it was used exclusively for special tasks, and it performed only one task at a time.

The programming languages of that time, like the machines on which they ran, were designed only for a particular kind of task, for example scientific computing. Because the machines, as noted above, were quite expensive and executed only one task at a time, machine time was considered precious, so the speed with which programs ran came first.

In every language, a program is a sequence of instructions, similar to – but more rigid than – a recipe in a cookbook. Typical elementary instructions are reading from an external device, writing to an external device, and assigning a value to a variable. The computer executes the instructions after the word “then” only if the condition written after the word “if” is true; it repeatedly executes the instructions after the word “repeat” until the condition written after the word “until” becomes true.
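To make this concrete, here is a minimal sketch in Python (Python has no literal “repeat … until”, so the loop below checks its condition after the body; the variable names and values are purely illustrative):

balance = 100              # assigning a value to a variable
amount = int(input())      # reading from an external device (the keyboard)

if amount <= balance:      # "if" the condition is true ...
    balance = balance - amount   # ... "then" execute these instructions
    print(balance)               # writing to an external device (the monitor)

countdown = 3
while True:                # "repeat" these instructions ...
    countdown = countdown - 1
    if countdown == 0:     # ... "until" this condition is true
        break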

The computer is a repetitive machine that makes up for its limited computational skills with speed. In fact, to multiply a positive number by 3, it must add that number to the previous result 3 times. This can be achieved with a program that has the following structure:

Repeat the addition of the number to the previous result.
Until this has been done 3 times.
Write the result on the monitor.
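As a sketch of that structure in Python (the function name multiply_by_addition and the example value 7 are mine, purely for illustration):

def multiply_by_addition(number, times=3):
    # Multiply a positive number by repeated addition,
    # mirroring the repeat/until structure above.
    result = 0
    count = 0
    while True:              # repeat the addition to the previous result ...
        result = result + number
        count = count + 1
        if count == times:   # ... until it has been done 3 times
            break
    return result

print(multiply_by_addition(7))   # writes 21 on the monitor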

During the sixties, however, costs began to fall, and the time came when even small firms could afford this pleasure. Speeds also increased, so machines often sat idle for long stretches, performing no tasks at all. To put an end to this, time-sharing systems were introduced.

In these systems, processor time was, so to speak, “sliced”, and users received short slices of it in turn. The machine worked much faster than any one person, which allowed each user at a terminal to feel as if they were working with the system alone. The machine, in turn, sat idle far less, since it was handling not one task but several at once. Time sharing dramatically reduced the cost of machine time, simply because one machine could be shared not by one user, nor even two, but by hundreds.
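A toy round-robin sketch of this slicing in Python (the user names, amounts of work and the two-unit quantum are invented; real time-sharing systems were far more elaborate):

from collections import deque

# Each "user" is a job with some remaining work, measured in time units.
jobs = deque([("user_a", 5), ("user_b", 3), ("user_c", 4)])
QUANTUM = 2   # the short slice of processor time each user receives in turn

while jobs:
    name, remaining = jobs.popleft()
    used = min(QUANTUM, remaining)
    print(name, "runs for", used, "unit(s)")
    remaining -= used
    if remaining > 0:
        jobs.append((name, remaining))   # back to the end of the queue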

Natural languages, by contrast, are ambiguous. Consider a newspaper headline: “Worker accepts his wife”. If the fact described in the article were simply a worker accepting his wife, it would hardly appear in a newspaper; in fact, the article resolved the ambiguous headline: a worker had killed his wife during a strike. The next sentence is ambiguous as well: “Sleeping mother leaves her daughter”. Punctuated in two different ways, it takes on opposite meanings: “The mother sleeps, her daughter goes out” and “The mother, while her daughter sleeps, goes out”.

So, as computing power became cheaper and more accessible, the creators of programming languages began to think more and more about how convenient programs were to write, and less about how fast they executed. “Small” operations, that is, atomic operations performed directly by the hardware, were combined into more “voluminous” high-level operations and unified structures that made it much more convenient and easier for users to get their work done.
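One way to picture this, as a contrived Python sketch (the list of prices is invented): the same computation first spelled out in “small” steps close to what the hardware does, then as a single high-level operation.

prices = [19.99, 5.50, 3.25]

# "Small" operations: stepping through the values one assignment at a time.
total = 0.0
index = 0
while index < len(prices):
    total = total + prices[index]
    index = index + 1

# The same work expressed as one "voluminous" high-level operation.
total = sum(prices)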

A constructed language is an artificial language with its own syntactic rules and its own vocabulary. Artificial languages have been invented for philosophical, literary, scientific, linguistic, religious and gaming purposes. The invention of artificial languages is not a single, uniform phenomenon; depending on the case, it shows very different characteristics, motivations and results. For such a language to be fully and unambiguously distinguished from invention within an existing language, at least one of the following conditions must hold: an explicit formalization of its syntactic rules, or the naming of the language itself.