Plan 9 from ?

I have been saying for years that I am seeing the same phenomenon today that I saw thirty years ago when the microprocessor was first invented. At that time I argued a lot with all the computer scientists. They insisted that microprocessors were not really computers because they did not run the same software that these people had on their large timesharing computers.

I argued that microprocessors would revolutionize computing because they were cheap. Within a few years certain models would become cheaper than mechanical switches or a bundle of wires and would replace mechanical control circuits in many appliances. I also argued that they would make personal computers possible, machines that you wouldn't have to timeshare.

Microprocessors did revolutionize computing. The embedded computer market grew to billions of units a year, and high end micros go into tens of millions of personal computers each year. But in that same time high end microprocessors and personal computers have become bigger and faster than the large timesharing systems of the past. Many even run the software that became the most popular software in the world on large timesharing computers.

I find it strange that, after all the work done to make personal computers faster and more user friendly, their power is for the most part wasted by running the most inefficient software available. Personal computer operating system software comes in three popular flavors: Unix, Windows, and Mac. Microsoft Windows has the largest market share, the most appalling performance, and is the most unstable and buggy of the three. It is a common joke in the industry and is looked down upon by the people who use what they consider industrial strength software, Unix.

The place that brought us 'C' and Unix, Bell Labs, has developed a new system called Plan 9 to replace Unix and to work more effectively with newer computing paradigms such as distributed networks of PCs, servers, and the internet. In a document first published in 1995 they explain the reasons for the development of Plan 9. The document is available online at Plan 9 from Bell Labs.

From the first few paragraphs of that document:

"By the mid 1980's, the trend in computing was away from large centralized time-shared computers towards networks of smaller, personal machines, typically UNIX `workstations'. People had grown weary of overloaded, bureaucratic timesharing machines and were eager to move to small, self-maintained systems, even if that meant a net loss in computing power. As microcomputers became faster, even that loss was recovered, and this style of computing remains popular today.

In the rush to personal workstations, though, some of their weaknesses were overlooked. First, the operating system they run, UNIX, is itself an old timesharing system and has had trouble adapting to ideas born after it. Graphics and networking were added to UNIX well into its lifetime and remain poorly integrated and difficult to administer. More important, the early focus on having private machines made it difficult for networks of machines to serve as seamlessly as the old monolithic timesharing systems. Timesharing centralized the management and amortization of costs and resources; personal computing fractured, democratized, and ultimately amplified administrative problems. The choice of an old timesharing operating system to run those personal machines made it difficult to bind things together smoothly."

After the thirty years that it took to build microprocessors up beyond the power of the old mainframes, people in the mainstream of computing are finally beginning to realize that microprocessor based computers are not large multiuser timesharing systems. They are beginning to think about using something other than the old software for large multiuser timesharing computers, where work was done in batch mode and real-time response for the machine's one user was not the idea.

A very different approach to software, Forth, is about a year older than Unix. It was designed around a virtual machine concept and to get high performance by doing things as simply and quickly as possible. Forth became most popular on small, resource starved microprocessor based computers, where it could provide high performance. On small machines that simply were not big enough or complicated enough to run 'C' and Unix effectively, very high performance could be achieved in Forth. Small machines that were not big enough to run Unix for even one user could, under Forth, support hundreds of users like the large timesharing systems. This was possible because the idea of the design was to remove as much overhead as possible to get the fastest context switches and multitasking possible. Forth's performance on this class of hardware is so much higher than that of popular mainstream software that many people have dismissed it as too good to be true.
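As a trivial sketch of that style, in standard ANS Forth: definitions are short, compile directly to code for the virtual machine, and can be tested interactively the moment they are entered.

   \ Two small words defined and then tested interactively.
   : SQUARED  ( n -- n*n )  DUP * ;
   : SUM-OF-SQUARES  ( a b -- a*a+b*b )  SQUARED SWAP SQUARED + ;
   3 4 SUM-OF-SQUARES .   \ prints 25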

Forth became somewhat of a niche language used by people in engineering to get some combination of the fastest development, smallest code footprint, and fastest realtime response possible. Most languages are specialized in what they do well and don't directly compete with one another too much for a specific niche. But Forth and 'C' have some things in common and are both often viable language choices when ease of development, portability, and performance are the goals. Forth can also act as its own operating system, and so it also provides an alternative to Unix for some things where Unix is too big or too slow to be practical.

Forth has been widely used in a lot of embedded applications, and it has been particularly successful in high performance and mission critical embedded applications where there is no operator to fix runtime errors and where the old multiuser software models just cannot provide the needed performance or reliability. The Forth virtual machine and the common multitasking implementation strategies provide the fastest context switches, interrupt handling, and threading of any environment. Typically only a few registers need be swapped or copied to and from memory in Forth. This is in strong contrast to 'C' and Unix environments, where these operations are typically far more involved.
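A sketch of the common cooperative round-robin approach may make this concrete. The word names here, such as TASK-SP, NEXT-TASK, and TASK-ACTIVATE, are illustrative rather than taken from any particular system, and the user-area bookkeeping is simplified:

   \ Cooperative task switch.  The interpreter pointer was saved
   \ on the return stack by the call to PAUSE itself, so only the
   \ two stack pointers need to be stored in the current task's
   \ area before the next task in the round-robin is resumed.
   : PAUSE  ( -- )
      SP@ TASK-SP !      \ save the data stack pointer
      RP@ TASK-RP !      \ save the return stack pointer
      NEXT-TASK @        \ find the next task in the round-robin
      TASK-ACTIVATE ;    \ restore its pointers and resume it

Because that is all a task switch has to do, it amounts to a handful of machine instructions rather than a trip through a kernel scheduler, which is where the nanosecond figures below come from.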

In discussions between Unix and Forth programmers, the former will be talking about milliseconds while the latter will be talking about microseconds. In other contexts the former will be talking about microseconds and the latter about nanoseconds. The Plan 9 document from Bell Labs explains that they are seeking a higher level of performance in Plan 9 than that delivered by Unix on the same machine. I have added the timings available in Forth to their table. One should note the microsecond versus nanosecond units.

Performance (us = microseconds, ns = nanoseconds)

Test            IRIX    Plan 9   Forth
Context switch  150us   39us     120ns
System call     36us    6us      60ns
Light fork      2200us  1300us   100ns
Pipe latency    200us   100us    100ns

Of course this dramatic one hundred to one, or one thousand to one, ratio is only part of the full picture. The full picture involves other issues, like the use of standard "canned" code solutions. When programming is difficult and performance is not important, those are important issues. But the reduction in overall complexity in the Forth approach can make the programming much easier and faster while still delivering performance. Our experience is that when done properly writing code is not only easier than finding and pasting other people's code from a library, but also faster, cheaper, higher in performance, and easier to debug and maintain.

Forth has a long history of competing with 'C' and Unix on many projects. We have many examples of Forth coming to the rescue of projects where 'C' just could not do what was needed. We have many examples where a smaller team of Forth programmers finished the project much faster than a larger team of 'C' programmers; the typical ratio in required man-months is about ten to one. There are also many examples where the code generated in 'C' was simply too slow, too big, or too power hungry to meet the application requirements and where Forth came to the rescue.

When people are only exposed to large timesharing computer methods, those are the only methods that they are aware of or consider using. If they hear about a different approach that can deliver programs in one tenth the time or cost, and deliver much higher performance, they will tend to be very skeptical. They may have a very hard time understanding these other methods if they have never been exposed to anything outside mainstream popular software environments like the one that even the folks at Bell Labs who invented it describe as "an old timesharing system." In that same document on Plan 9 they go on to say, "The problems with UNIX were too deep to fix, ..."

Besides software, another factor in the computing equation is hardware. People facing a given computer platform often think of the hardware as something that is fixed and not one of the variables in the computing equation. This is often the case in the desktop environment, where a programmer may be facing a particular machine. But 98% of the microprocessors made do not go into desktop machines; they go into embedded computer applications. In these embedded computers the cost of the hardware is far more important than the cost of the software, because the cost of the software is spread across many units. In these machines it is vital that the software be fast and efficient so that the device can meet its required specifications with a minimal amount of hardware.

The attitude in the desktop environment is often that program size and speed are not important because more memory and a faster processor are a cheap investment. In embedded applications the goal is zero cost or zero power consumption. Minimizing the cost and size of the hardware and maximizing the efficiency of the software is most of the game. This is where Forth has traditionally been able to compete most effectively. Instead of upgrading from a $100 processor to a $500 processor to make up for inefficient software, the goal in Forth is often to make the software so efficient that the same job can be done with a $10 processor. If you want to sell a lot of widgets, making them work with hardware 1/10th or 1/50th as expensive will be a big advantage.
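To put numbers on that, take a hypothetical production run of one million widgets:

   1,000,000 units x $100 processor = $100,000,000 in hardware
   1,000,000 units x  $10 processor =  $10,000,000 in hardware

The $90 million difference pays for a great deal of careful software.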

In this area Forth has a special advantage. For the last fifteen years people have been developing Forth hardware based on the same concepts that made Forth work so well in software. These machines can be incredibly small, cheap, and low power. For the most part the manufacturing cost and power requirement of a microprocessor will be proportional to the number of transistors in the design. Traditional small designs had low performance, and to increase performance and deliver the features that the 'C' compilers wanted to exploit, the machines got bigger and bigger until they dwarfed the mainframes of the old days.

Intel first introduced the 4004 microprocessor in 1971; it had only 2300 transistors and performed only 60 thousand instructions per second. By the 8086 they were up to 29 thousand transistors, and performance was up to 330 thousand instructions per second. Today Intel is up to 28 million transistors and approaching a billion instructions per second. They have increased the transistor count by a factor of more than ten thousand and the performance by even more. But one must remember that for 98% of the computers made the goal is zero cost or zero power consumption, and that cost and power consumption are proportional to the transistor count.
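The arithmetic behind those factors, from the numbers above:

   transistors:      28,000,000 / 2,300  = roughly 12,000 to 1
   speed:       1,000,000,000 / 60,000   = roughly 17,000 to 1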

The first machine to use the Forth language as its instruction set was introduced in 1985. The Novix microprocessor used only 4000 transistors, yet it was considerably faster than the fastest Intel chip of the time, the 80386, on many realtime problems. Today we have machines that run at speeds competitive with the current Intel machines on our problems, but which have fewer transistors than the antique 8086. The idea is to reduce the hardware cost and power consumption for executing Forth programs by a factor of 1000 compared to conventional designs.

While many other people insist that efficiency and cost are no longer issues with personal computers, the people working in embedded computers know that all computers have limited memory and limited computing power. Our goal is to minimize cost and power consumption and to maximize performance, efficiency, and the effectiveness of the development effort.

When we try to explain what we are doing, reducing the cost of hardware by a factor of 1000 and increasing the efficiency of software by a similar factor by using Forth instead of Unix and 'C', we are usually immediately dismissed with some comment like "I don't want to get into a language war." It seems that sticking with what they know, and not even considering an alternative, is more important to them than anything else. They will not even consider the possibility that there could be a way to improve the performance/price factor by about one million.

I haven't talked about Microsoft's software on Intel's hardware. The Unix folks think efficiency and reliability are important when they compare Unix and Microsoft software, and they enjoy a much more stable and higher performance environment than those using the Microsoft alternative. Bringing Microsoft's approach to software into the equation means more zeros, since their software is much slower and less efficient than Unix. Microsoft's software makes using an old timesharing operating system on a PC look really great in comparison.

In conclusion, the world of computing is much larger than Microsoft, Unix, and Forth combined. There are embedded systems ranging from tiny four bit processors that resemble the first microprocessor ever made to multiprocessors built from the largest processors available. In addition to embedded computing there are handheld computers, wearable computers, laptop computers, desktop computers, network servers, minicomputers, mainframe computers, and supercomputers. Each has its own characteristics, and no single operating system or programming language is ideal over the whole range of machines and application domains.

Jeff Fox 7/24/00