Is UNIX an open source OS?

Hi everyone,
I know the following questions are noobish, but I am asking them because I am confused about the basics of the history behind UNIX and Linux.

OK, onto business. My questions are:

  1. Was/Is UNIX ever an open source operating system?
  2. If UNIX was closed source (Wikipedia describes it as "historically closed source"), then is Linux a reverse-engineered version of UNIX?
    I mean, if UNIX was closed source, how did Linus Torvalds create a "UNIX-like" OS called Linux without having access to the source code of a closed source OS? The only possible explanation seems to be reverse engineering UNIX, just like ReactOS is a reverse-engineered, binary-compatible version of Windows.
  3. Now this seems a little odd to ask. Is Linux actually an OS? Or is Linux just a kernel? I am asking this because the Debian OS which I use can run on the Linux kernel, the FreeBSD kernel and, I think, the GNU Hurd kernel (I have no idea what that is, by the way). Wikipedia defines Linux as an OS, while I have never used an OS called Linux; I have used distros like "Fedora", "Debian", etc.

Okay, that's about it for now. Please don't flame me, I am really confused about the basics here.

Thanks in advance.

OK, a few definitions up front:

AT&T had an experimental lab (Bell Labs), where some determined guys (Ken Thompson, Brian Kernighan, Dennis Ritchie, ...) programmed an OS - the first UNIX. Over time there were several revisions, and at one point in time AT&T gave the code to a university (Berkeley) and let them play with it. They developed (today you'd say "forked") their own version. (This is called "BSD" - short for Berkeley Software Distribution - while AT&T's main line is called "System V"; you will surely find more history about this when you search for it.) Initially, this was "Unix".

All these systems were closed source, but it was possible to buy a license from AT&T. You got the sources and were allowed to rebuild them, even change them to some extent. Companies like IBM (but also Sun, HP, DEC, ...) did this and developed their own flavour of UNIX (in IBM's case "AIX"). These were "Unix" too.

But the success of Unix was not because of the (btw. very, very good) implementation of the OS, but because of the stunning simplicity yet elegance of its design. Therefore, after some legal hassles, what "Unix" meant changed. Before, it was a code base, and the design principles were inherently inscribed in it. Not any longer. Today, "Unix" is a set of things an operating system must do under defined circumstances: "If system call X() is issued, the system must return "A" in case of ..., "B" in case of ...", and so forth. See it like this: before, "Unix" was a blueprint for a certain car. Now there is just a standard which says "if I turn the steering wheel clockwise, the car is supposed to change direction to the right side". How the connection between steering wheel and tires is made doesn't matter at all, as long as the system reacts in the expected, standardized manner. (Search for "POSIX" and "Single UNIX Specification" to get details about how this standard is designed. Or ask our revered local Master of Standards, Don Cragun, and be prepared for the highest quality information on this issue you will ever get.)
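
To make that concrete, here is a minimal C sketch of one such rule: POSIX requires close() to fail and set errno to EBADF when handed a file descriptor that is not open. How the kernel detects that internally is its own business; only the observable reaction is standardized.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* POSIX: close() shall return -1 and set errno to EBADF
       when given a file descriptor that is not open. */
    if (close(-1) == -1 && errno == EBADF)
        puts("this system reacts exactly as the standard demands");
    return 0;
}

Every conforming system should pass this tiny test the same way, each with a completely different kernel underneath.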

To answer your question about Linux: Linus Torvalds did not "reverse engineer". He started from a very small and basic implementation of a Unix-like kernel ("Minix"), which was written for educational purposes by an American professor teaching at a Dutch university: Mr. Andrew Tanenbaum. Its purpose was to show students how to write operating systems (kernels), and as his example Mr. Tanenbaum used Unix, because it is so awfully well documented. Mr. Torvalds used Minix as his development platform and developed his own Unix-like kernel. This, see above, means the kernel is programmed all anew, but if a Unix kernel reacts to some circumstance/does something in a certain way, it will react in the same certain way/do the same.

Linux has (probably because of political considerations) never sought the official certification of being a real Unix. It is, as far as we know, as "Unix" as it gets and would in all likelihood pass the certification, but this has not happened yet.

The difference is: Windows was built with a certain platform (the IBM-compatible PC) in mind. It runs there and nowhere else. The upside is that software is binary compatible: once you get a clean compile, you can expect the compiled binary to run on every such system (well - that is the theory!). Unix is not binary but source-code compatible. It was intended to make few or no assumptions about the system it runs on, but this in turn means that a binary will only run on the target machine it was compiled for.

Windows: once you get it compiled, the compiled binary will run on every system and do the same.

Unix: once you get it to compile, you can compile the same source on every Unix system, and each produced binary will do the same on its own system.

Therefore "reverse engineering" was not necessary. A software compiled for, say, SCO Unix on PC, will not run on Linux for PCs, but the same source can be compiled for any UNIX and each resulting binary is expected to do the same. (Again: this is the theory. I spare you the gory details so that your nights sleep is undisturbed.)

Yes, Linux is an OS. And yes, "Linux" is strictly just the kernel. The kernel of an OS does not do everything itself, but it sets every design decision as a given. Therefore, in fact, the kernel IS the OS. "Kernel" here also means the driver layer, process accounting, process environment, resource scheduling, filesystems, system calls, libraries ...

On top of the Linux kernel one usually uses the GNU toolset. GNU developed their own OS kernel ("the Hurd"), which is intended to surpass Unix design-wise, but (they might not like to hear that, but this is my opinion) it is doomed to fail the same way as Plan 9 (another attempt to obsolete Unix) because of the sheer simplicity and straightforwardness of UNIX's design. This is similar to the programming language "Oberon", which Niklaus Wirth thinks is his best creation, but still: if someone uses any Wirth language at all, he uses the first fruit of Mr. Wirth's muse, Pascal. Like the quote often (but wrongly) attributed to Mr. Gorbachev:

He who comes too late is punished by life.

Why distributions? Because it is quite difficult to obtain all the sources, compile them, arrange them in bundles which make sense, etc. "Distributions" do basically exactly this, plus many have developed their own mechanism to bind software together into meaningful bundles. Fedora/Red Hat/CentOS/... have "rpm" for that, Debian/Ubuntu/... have "apt", SUSE has "zypper", etc. Basically they all take the same software, bundle it together, write some installation routines, such things. Debian will only incorporate what is thoroughly tested, while Fedora is more like "as soon as the developer hits 'save' I want the compiled version installed", but these are just two sides of the same coin.

Basically it is always the same: the kernel, plus the set of utilities from GNU (like "GNU-ls", "GNU-mount", "GNU-<any-conceivable-unix-command>"), plus a set of additional software, like a desktop environment (KDE, GNOME, ...), a mail program, a web browser and so on. At this level there is little difference between the distros. Fedora may use version 4.8.1.3.5 of some software while Debian still installs 4.7 or 4.5; the one may install KDE while the other has GNOME, but these are details.

I hope this helps.

bakunin


1) UNIX used to be a closed-source operating system, yes, made by AT&T's Bell Labs.

UNIX is no longer one particular operating system, however. That particular product is no longer sold, and the name is now controlled by a different group (The Open Group) which maintains a paper standard -- defining what features and utilities a UNIX operating system is supposed to have without getting too specific about how it works internally. If you follow these papers, and certify with them, you can call your operating system UNIX.

These days, there are open UNIXes (the many kinds of BSD, some kinds of Solaris), closed UNIXes (AIX), and everything in between.

2) Linux is not UNIX. Linux and the GNU utilities were actually made in a spirit of competition with UNIX. Same with the Hurd -- the Hurd was actually the "official" GNU kernel; Linux was an upstart project which appeared out of nowhere and overtook it. :-)

It's nothing like ReactOS either, which can run Windows programs natively -- you couldn't run HP-UX executables, AIX executables, or SunOS executables on Linux natively. It wouldn't make much sense to even try; these proprietary UNIXes are designed to run on their own proprietary machines. For that matter, Linux on ARM is not binary compatible with Linux on x86!

What different UNIXes have in common is source compatibility -- you can't haul a program over from an alien architecture and expect it to run, but you can hope to build it from source with some UNIX's own compiler and get the same effect. This is the sense in which different UNIXes and UNIX-likes are supposed to be compatible. They have the same kernel features and programming-language "construction kits". This is also how Linux has managed to spread to such a bewildering variety of architectures, from supercomputers to set-top boxes.

3) Linux is a kernel. Linux plus the GNU utilities makes a complete UNIX-like operating system. The Hurd or Mach plus the GNU utilities also makes a complete UNIX-like operating system. Mach plus the BSD utilities could make a genuine UNIX-certified operating system. Different Linux distributions are the Linux kernel plus different userland utilities.

The thing is, Linux and GNU weren't made to be a UNIX -- they were made in direct competition with it. GNU even stands for "GNU's Not UNIX". This stems right from the bad old days when a license for the UNIX source could set you back a cool hundred grand in 1980 dollars... It eschews the UNIX name for legal reasons but is very similar. It matters less these days, now that AT&T doesn't control the brand and there are many open alternatives. I hope GNU will forget the old feud and get things UNIX-certified someday.


Okay, first of all, a great many thanks for taking the time out to give me such a detailed explanation. I couldn't have asked for better. But I have a couple of questions.

Are you saying that if, for example, I have a piece of source code like:

#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

and I compile it on an SCO UNIX machine and take the executable to a Solaris machine, the executable won't run? Are you saying that I would need the source of the hello world program and would have to build it again on the Solaris machine?

I am sorry, but could you elaborate on what you mean by this? The kernel is in the end responsible for how software interacts with hardware, so it kinda does everything.

One last question; it may be off-topic. You said:

How do I contact someone like Don Cragun? I am not saying that your answers were wrong or insufficient in any way, but if I wanted to contact him, how would I do it? By private message?

Yes, it did, immensely.

Hi sreyan32...

Don't worry, do something amiss and he will contact you...

;-)


Yes. Exactly.

The features of the kernel define and constrain what your programs can do. Compare any UNIX-like to the Windows kernel for example.

Do you get disk devices? Yes -- as drive letters, C:\, D:\, etc., not as direct files.

Do you get terminal devices? Not really, unless you use a COM port, and the emulation is still limited.

Do you get partitions? Yes, each mounted on their own root, not (usually) nested.

Do you get folders? Yes -- separated by \, rather than /. (Oddly, some calls in Windows actually can separate by /, some can't.)

Do you get files? Yes -- with case-insensitive names.

Which means that Cygwin, which does as much as it can to act like UNIX within the Windows framework, can't avoid these facts. Some things it can translate between -- the / vs \ -- but some things are just unavoidable, like case-insensitive filenames. No matter what it does, if you create 'a' in the same folder as 'A', you are just overwriting 'A' again.

This has made some parts of Cygwin more difficult, slow, and complicated than they need to be, just because Windows really isn't meant to do what's being asked of it here. fork(), for example -- efficient and fundamental in UNIX, but slow and nightmarish to emulate in Windows, because Windows uses a very different process model. Also terminal devices, which are a bit of a nightmare to build from scratch anywhere you go -- Linus' project began as a terminal emulator, and from there it wasn't too far to make it a complete kernel ;-)
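
For illustration, the fork()-and-wait() pattern is only a few lines of C on any UNIX; it is precisely these semantics that are painful to emulate on Windows. A minimal sketch:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();        /* cheap on UNIX: duplicate the caller */

    if (pid == -1)
        return 1;              /* fork failed */
    if (pid == 0) {            /* child: do a tiny job and exit */
        printf("child %d doing its work\n", (int)getpid());
        _exit(0);
    }
    wait(NULL);                /* parent: reap the child completely */
    return 0;
}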

And in the end, Windows' kernel just isn't suited to running UNIX-like things. UNIX can run thousands of tiny, short-lived processes in a few moments without a hiccup, that being one of the things it is designed for. Try to do that on Windows and it lags, hiccups, and kills random ones here and there for no apparent reason other than "if you don't make so many processes, it might do that less". Windows prefers fewer, larger, longer-lived processes.


Like Corona688 already said: yes, precisely. The compatibility of UNIX is defined as the guarantee that you can compile the same source code on different systems with the same outcome. For instance, this means that regardless of how your terminal is constructed (in terms of real hardware), you can expect printf() to do the same/analogous thing on every one of them. You can look printf() up in the POSIX standard and will find a detailed "printf() is required to do X in case of Y, return A in case of B, ...".
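
As a small sketch of the kind of guarantee this gives: the standard pins down even printf()'s return value (the number of bytes transmitted, or a negative value on error), so a program may rely on it on every conforming system, no matter who wrote the C library.

#include <stdio.h>

int main(void)
{
    int n = printf("Hello World\n");   /* specified: byte count, or negative on error */

    if (n < 0)
        perror("printf");
    else
        fprintf(stderr, "printf reported %d bytes, as the standard requires\n", n);
    return 0;
}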

Notice that this standard just describes what has to come out, not how this outcome is realized! This is why UNIX (and Unix-like systems) run on everything from small embedded systems in your washing machine, over cell phones (Android is just a customized Linux kernel), most WLAN routers and NAS appliances, to real big irons like the IBM p795. We have about a dozen p780s in our data centers, most of them with 4TB of memory and 128 processors. They run dozens of LPARs each. Compare this, along with some 50-60 PCI busses in each I/O subsystem (each system can have up to 4 of them), with the typical PC-compatible server Windows runs on. In addition we have some z/Linux systems running on the mainframe, Linux on all sorts of hardware, a few Sun servers running Solaris (another Unix), etc., etc.

Yes, exactly. It is not only a kernel but also a standard library, which executes all the system calls in a standardized way. If you never have to execute interactive commands but only one fixed program, you do not need all the utilities that usually come with an OS and which do things like create users, files, and so on. Most embedded systems are constructed this way: a Linux kernel, the standard library, the one program it is supposed to run, with everything not necessary to run that one program stripped from the kernel and the library. Take apart your home WLAN router, telephone or similar device and you will probably find exactly this, burned into an EPROM.

For instance. He does not bite (well, not when the moon is not full, anyway), as he is a very friendly guy and by far the best expert in UNIX standards issues we have here; you can ask him if you need to know details we can't provide. He won't answer via PM (because that would not contribute to the knowledge base we are building here), but he might write something in this thread.

I hope this helps.

bakunin


I try to read through all posts concerning the shell and the "standard" utilities and will usually comment if I see something that doesn't look right; but I may miss a posting once in a while. If there is something that you think needs my attention and I haven't commented on it, send me a PM with a link to the discussion thread and state what you think needs to be clarified/verified. I won't make any promises about how quickly I'll get to it, but I try to be responsive. ;-)

If you build an application on SCO UNIX and try to run the binary produced on any other operating system (Linux, HP-UX, AIX, Solaris/SunOS, ...) it probably won't work (but some minimal applications like HelloWorld might actually run successfully more often than many of us would expect). You certainly can't run a Solaris/SunOS SPARC or Motorola chipset binary on an SCO UNIX x86 box, but you can rebuild that program from source on all of those systems and get the same results. (Of course, your application source code can't get any guarantees about portability if it uses anything that the standards don't specify, or if it uses what the standards call "implementation-defined", "unspecified", or "undefined" behavior.)

To expand on what bakunin said, the POSIX standards and the Single UNIX Specification (which defines part of what is required to be certified by The Open Group as a UNIX system) are API (Application Programming Interface) standards, not ABI (Application Binary Interface) standards. Some systems also adhere to ABI standards. (For example, the SPARC Compliance Definition standards (SCDs) allowed you to build a binary on certain SPARC-based Sun/Oracle Solaris systems and be guaranteed that it would also run correctly on some Fujitsu SPARC systems without rebuilding, and vice versa.)

Cheers,
Don


It does not really matter these days whether UNIX per se is completely open sourced or not.
Some versions are, some are not; however, the simplicity, coupled with the power at your fingertips, is legendary these days...

As a novice at the *NIX family of OSes, the single beauty to me is that everything is a "file".
These "files" can technically be read from and written to without much of a fuss.

Take the following (this assumes /dev/dsp exists and your system has an internal mic)...

cat < /dev/dsp > /dev/dsp

...records a few seconds of voice, then replays that recording from the same device, continuously.
This could be the basis of a simple baby alarm all with the simplicity of "everything is a file".

So from one line of 25 characters using an open sourced command, cat, you have tremendous power at your fingertips.

This alone is both elegant and beautiful, and 'cat', along with the other *NIX commands, is open source...
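
The same idea carries straight into C; here is a sketch of the cat one-liner above, with the same assumption that an OSS-style /dev/dsp exists and is readable and writable on your system:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* the sound device is just a file: read() records, write() plays */
    int dsp = open("/dev/dsp", O_RDWR);
    char buf[4096];
    ssize_t n;

    if (dsp == -1)
        return 1;
    while ((n = read(dsp, buf, sizeof buf)) > 0)
        write(dsp, buf, (size_t)n);
    return 0;
}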

On Windows, if you use '\\.\' you get access to the device namespace. The file namespace (the default) is accessed via '\\?\'.

WinObj from the Sysinternals suite will allow you to browse the NT device namespace.


Only by certain special means. As usual the user gets none of that.

That's what I see as the big difference between UNIX and Windows... The shell on UNIX is there to help you access what's there; the shell on Windows is there to stop you from accessing what's there.

Okay, I am making this post to clarify one of the most confusing topics that I have had about Unix and Linux.

Why do you say that Linux is not Unix? I mean, they are both POSIX compliant and they use the same commands. Agreed, some options that are found in Solaris are not present in Debian, but the general working is the same for both OSes.

Also, since my college syllabus includes the UNIX OS this semester, I have been doing some amount of reading on the subject. And the book that is recommended to engineering students in India (UNIX Concepts and Applications by Sumitabha Das) goes as far as to say:

Now I just want to know how accurate this really is. And if so, why do all the experts say that Linux is not UNIX? Also, as Corona688 mentioned above, GNU is Not Unix.

Why is it not? Is it just because it does not have an official certification? Or are there actual differences at the kernel level?


Okay, once again, another ultra-noobish question. What is this terminal device that you are referring to above? I know it's not the terminal in Linux or the command prompt in Windows. What exactly are they?

What do you mean by nested partitions? The partitions in Linux/Unix are separate, right? Like, I have the /home partition on my Debian system on a separate drive itself. Are you saying that partitions like /dev, etc. can be nested under the root partition? If so, why would I want to do that? What are the advantages of nested partitions? I mean, then if I lose the root partition I lose all the other partitions nested within it. (Assuming my understanding of partition nesting is right.)

Please don't misunderstand me when I say this: are you sure about the Windows kernel not being able to run numerous processes simultaneously? It may have been true for older Windows like XP, but from Windows 7 onwards, Windows is pretty good at multitasking.

Well, for starters, because they say so. GNU is short for "GNU's not UNIX".

Second, because nobody's bothered to certify it. They say they implement POSIX commands portably, but they haven't actually been through the UNIX group's rigorous battery of tests. Testing isn't free, AFAIK, and Linux changes very fast, so I can understand not bothering.

It hasn't undergone certification. Just implementing the right system calls isn't enough; an OS has to go through a rigorous battery of tests.

How the kernel works inside matters less, to a degree. There's only one obvious way to do some things, but for others there are lots of choices.

The terminal device is a serial port -- either a real, physical serial port, or an emulated one (i.e., a vterm). Any proper terminal in UNIX, even a graphical one, will use one.

This is because UNIX serial devices come with lots and lots and lots of built-in software features. Have you ever hit ctrl-C to kill a stuck program? The serial port, even an emulated one in a GUI, does that directly. It has a "send SIGINT when you hit this key" setting.
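
You can watch the driver do that with a little C sketch: install a handler for SIGINT and press ctrl-C. The program never reads the keystroke; the terminal driver notices it and sends the signal on its own.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_int = 0;

static void on_int(int sig)
{
    (void)sig;
    got_int = 1;
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_int;
    sigaction(SIGINT, &sa, NULL);   /* ask to be told about SIGINT */

    while (!got_int)
        pause();                    /* sleep; the tty driver sends the signal */
    puts("the terminal driver delivered SIGINT");
    return 0;
}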

Imagine you're running out of space on drive C: in Windows and add another hard drive to deal with it. Now you have a drive D: with lots of space -- which is no help at all, since C: is where you need it.

In UNIX, you could attach the new drive's partitions wherever you wanted -- /home/ for example, if that particular folder is very big. The new partition would be empty, though -- you'd have to copy the old contents into it before it could be used.

That's all I mean.

This isn't about multitasking; it's more like a memory leak. Running many tiny processes -- even in a row: load/run/quit, load/run/quit -- fills up some queue or table inside the Windows kernel. Eventually it tells you "can't do that, out of room". It's not quite a leak, since it's temporary.

In UNIX, once you've wait()ed for a program, it's gone -- all its memory returned, its slot in the process table freed, etc., etc. But Windows seems to give the "all clear" the instant the program you're waiting for wants to quit. This might be higher-performance -- you can do things instead of waiting while it cleans up -- but it means that, under certain kinds of loads, resources can be used up faster than Windows cleans them up.

I have seen this happen here on unix.com with many different versions of Windows, when people use things like Cygwin or Busybox to run UNIX shell scripts under Windows. Things which are merely inefficient in UNIX -- running awk 100,000 times to process 100,000 lines -- actually break down randomly in Windows.

I am sorry for being blunt, first of all, but what you said makes absolutely no sense to me. Why would a terminal need a serial port to function? Also, I have used the terminal in both Debian and Fedora distributions, and I have never made any connection to any serial port before using them!
Why would I need to do so? OSes like Windows also offer the ctrl-C combination to kill a program (not that I like comparing Windows and UNIX, I am just trying to get my point across). And what do you mean by "it does that directly"? How can a serial port send signals to the CPU?

I am sorry, but I don't understand what you are saying, probably because I am just starting out with UNIX. Do you think I should post a separate thread about this topic of serial ports and terminal devices? Because it's getting off-topic here.

If I am not mistaken, you mean that you can move partitions around, right? For example, I can unmount the /home partition, put it on a different hard disk altogether and use it from there, right?
But then why would you call it "partition nesting"? Because that sounds like a partition within another partition.

What are you talking about??

In the beginning, terminals and telecommunications equipment were always attached to serial ports; that's what they're there for. Quite a lot of features -- SIGINT on ctrl-C, EOF on ctrl-D, timeouts, line-based reading, simple translation, cleanup when the connection closes, etc etc etc -- were added to the serial port device driver to keep them simple to use. Modems and terminals had a fair amount in common, incidentally.

So, any interactive terminal program in UNIX talks to the terminal like a serial port. When you close an xterm or a PuTTY window, you expect the things it was running to die with it, yes? Like a dialup teletype session would die when the modem hung up. It's the same thing.

But these days, they don't always have a serial port! Instead of rewriting everything imperfectly every time they created a new and special kind of terminal, they added a "virtual terminal" device to the UNIX standard. It's like a special kind of pipe where, if you write ctrl-C into one end, the other end dies... It has all the other usual features UNIX terminal programs have come to expect -- and should, since it's the same device driver.

Run 'stty' in an xterm -- it will tell you your terminal's "baud rate". Which doesn't matter for virtual terminals, but it's there for historical reasons.

The serial port driver handles things like

  • Should keystrokes be instantly delivered to the program, or should it wait until ENTER is pressed?
  • What key is backspace?
  • Should I send SIGINT on ctrl-C? Or some other key? Or not at all?

...and lots of other things. That's all in the device driver itself.
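
All of those knobs live behind one standardized interface, termios. As a sketch, this just reads the driver's current settings back (run it in any terminal; the interrupt-key decoding assumes the classic ctrl-<letter> control-character encoding):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;

    if (tcgetattr(STDIN_FILENO, &t) == -1)
        return 1;                    /* stdin is not a terminal */

    /* the driver's settings, straight from the kernel: */
    printf("wait for ENTER (ICANON): %s\n", (t.c_lflag & ICANON) ? "yes" : "no");
    printf("keys can send signals (ISIG): %s\n", (t.c_lflag & ISIG) ? "yes" : "no");
    printf("interrupt key: ctrl-%c\n", t.c_cc[VINTR] - 1 + 'A');
    printf("erase (backspace) key code: %d\n", t.c_cc[VERASE]);
    return 0;
}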

Using virtual terminals means you can run the exact same interactive program in a local terminal, remote terminal, GUI terminal, serial terminal, or whatever else, and expect it to work the exact same way, right down to the weirdest bits of UNIX terminal history. (Try logging into a Linux console with an all-caps username -- and watch the rest of the text become all-caps when it decides you're running a 6-bit terminal!)

ctrl-C is the only serial-like feature Windows has, and not even for serial ports. Windows actually put that feature inside each program -- but not all programs have it. Which means ctrl-C only works when it wants to.

When you type ctrl-C, the kernel sends SIGINT to whatever's attached to your terminal. It's not a feature of your shell.

When you type a line and hit enter, the kernel delivers the complete line to whatever program's reading. Programs don't have to assemble each individual keystroke themselves (unless they want to.)

These and much more are features of the serial port device driver, which all interactive terminal programs in UNIX depend on.

Serial ports in UNIX are a complicated topic. Sure, if you want.

Depends what you mean by that.

When you mount something atop /home/, you see the contents of that partition in it, and not what was there before.

Sorry for the confusion. Perhaps that was poorly worded.


No current Linux system meets POSIX requirements for OS or utility behavior. On a POSIX conforming system, you never have to give a --posix option to make a utility behave as specified by the POSIX standard. On a POSIX conforming system, the command:

echo -n abc

will write the characters "-n abc" and a trailing <newline> character to standard output.

On a POSIX conforming system, each thread in a process shares a single process ID; on a Linux system each thread gets its own process ID.
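
A quick sketch to check that behaviour on whatever system you have handy (compile with c99 -pthread or your compiler's equivalent): POSIX requires both lines below to print the same process ID.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    printf("thread sees process ID %d\n", (int)getpid());
    return NULL;
}

int main(void)
{
    pthread_t tid;

    printf("main   sees process ID %d\n", (int)getpid());
    pthread_create(&tid, NULL, worker, NULL);  /* POSIX: one PID ... */
    pthread_join(tid, NULL);                   /* ... shared by all threads */
    return 0;
}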

There are hundreds of places where Linux systems do not conform to POSIX standard requirements. Fortunately, for a lot of the stuff you run into in daily run-of-the-mill programming, they are quite similar. But if you try to write portable code that will work on any POSIX system (and all UNIX branded systems are POSIX conforming systems that also have to meet additional requirements), there is no guarantee that it will run on a Linux system. (Of course, no test suite is perfect, so a UNIX or POSIX branded system may have bugs that haven't been caught yet; but vendors of these branded systems, once notified that a conformance bug is present, have a contractual obligation to fix it within 6 months or lose their right to use the brand.)

As the POSIX standards evolve, they are picking up some new features from Linux systems. And, over time, many GNU Utilities are coming closer to meeting POSIX requirements. But, for the foreseeable future, Linux systems are most definitely not POSIX conforming systems and cannot be branded as UNIX systems or POSIX systems even if one of the Linux distro vendors was willing to pay the certification costs and fill out all of the paperwork involved.

There is also a standard for Linux systems (the Linux Standard Base AKA LSB), but the last I heard, no Linux system has ever conformed to any version of the LSB either.


Most of your criticisms are quite true! But this one hasn't been true for 10 years. The old threading model -- which amounted to cloned processes sharing the same memory segments -- was thrown out when someone found a big design flaw; they replaced it with NPTL, the "Native POSIX Thread Library". I expect that's a fair bit closer to compliance.

I expect it would be quite a lot easier for an OS to become POSIX compliant if the tests were easier to get.


Thanks for letting me know. I said: many GNU Utilities are coming closer to meeting POSIX requirements.
I should have said: many GNU Utilities, Linux libraries, and the Linux kernel are coming closer to meeting POSIX requirements.

Surprisingly enough, the vendors who fund the development and maintenance of the UNIX and POSIX conformance test suites so that their systems can be branded or certified haven't accepted the idea that they should give away the test suites so their competitors in the open source community will have an easier time putting them out of business. Nonetheless, I believe some Linux distro vendors do occasionally pay The Open Group to run the tests for them to see what areas still need work to actually become POSIX certified or UNIX branded. Certification or branding will happen some day, but we aren't there yet.


Thanks a lot, first of all, Corona688, for being extremely patient with all my queries and answering them as simply as possible. Now I am thinking of starting a new thread about serial ports, but before that I am asking for some reading material on serial ports and terminal devices, so that when I come back to ask my doubts here they won't be quite such noobish and obvious questions.

Any links to some study materials for beginners on this topic would really help. Also, if anyone can suggest a good book which contains material on this topic, that would also be nice.

I am asking here because googling the topic didn't give me any concrete starting point.

Serial ports may be complex, but I am already interested in how they work :). This is the first time I have come across a design where a terminal communicates with the kernel through a COM port.


Could you elaborate on this point further? I mean, if no one conforms to a standard, how on earth is that standard still surviving? And also, why is it there??