== Asynchronous events ==

DNIX's native system call was the <code>dnix(2)</code> library function, analogous to the standard Unix <code>unix(2)</code> or <code>syscall(2)</code> function. It took multiple arguments, the first of which was a function code. Semantically this single call provided all appropriate Unix functionality, though it was syntactically different from Unix and had, of course, numerous DNIX-only extensions.

DNIX function codes were organized into two classes: Type 1 and Type 2. Type 1 commands were those associated with I/O activity, or anything that could potentially cause the issuing process to block. Major examples were <code>F_OPEN</code>, <code>F_CLOSE</code>, <code>F_READ</code>, <code>F_WRITE</code>, <code>F_IOCR</code>, <code>F_IOCW</code>, <code>F_WAIT</code>, and <code>F_NAP</code>. Type 2 commands were the remainder, such as <code>F_GETPID</code> and <code>F_GETTIME</code>; these could be satisfied by the kernel immediately.

To invoke asynchronicity, a special [[file descriptor]] called a trap queue had to have been created via the Type 2 opcode <code>F_OTQ</code>. A Type 1 call would have the <code>F_NOWAIT</code> bit OR-ed with its function value, and one of the additional parameters to <code>dnix(2)</code> was the trap queue file descriptor. The return value from an asynchronous call was not the normal value but a kernel-assigned identifier. When the asynchronous request completed, a <code>read(2)</code> (or <code>F_READ</code>) of the trap queue file descriptor would return a small kernel-defined structure containing the identifier and result status. The <code>F_CANCEL</code> operation was available to cancel any asynchronous operation that had not yet completed; one of its arguments was the kernel-assigned identifier. (A process could only cancel requests that it currently owned. The exact semantics of cancellation were up to each request's handler; fundamentally, it only meant that any waiting was to be terminated. A partially completed operation could be returned.) In addition to the kernel-assigned identifier, one of the arguments given to any asynchronous operation was a 32-bit user-assigned identifier. This most often referenced a function pointer to the subroutine that would handle the I/O completion, but this was merely a convention; it was the entity that read the trap queue elements that was responsible for interpreting this value.

<syntaxhighlight lang="c">
struct itrq {           /* Structure of data read from trap queue. */
    short   it_stat;    /* Status */
    short   it_rn;      /* Request number */
    long    it_oid;     /* Owner ID given on request */
    long    it_rpar;    /* Returned parameter */
};
</syntaxhighlight>

Of note is that the asynchronous events were gathered via normal file descriptor read operations, and that such reading could itself be done asynchronously. This had implications for semi-autonomous asynchronous event-handling packages that could exist within one process. (DNIX 5.2 did not have [[lightweight process]]es or threads.) Also of note is that ''any'' potentially blocking operation could be issued asynchronously, so DNIX was well equipped to handle many clients with a single server process. A process was not restricted to having only one trap queue, so I/O requests could be coarsely prioritized in this way.
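To make the request/completion flow concrete, the following is a minimal sketch rather than actual DNIX source: the <code>dnix()</code> prototype, the opcode values, and the parameter order for <code>F_OTQ</code> and an asynchronous <code>F_READ</code> are all assumptions. Only <code>struct itrq</code>, the opcode names, and the function-pointer convention come from the description above.

<syntaxhighlight lang="c">
/* Illustrative sketch only: the dnix() prototype, the opcode values,
 * and the parameter order below are assumptions, not the real ABI. */
struct itrq {               /* As defined above. */
    short it_stat;          /* Status */
    short it_rn;            /* Request number */
    long  it_oid;           /* Owner ID given on request */
    long  it_rpar;          /* Returned parameter */
};

extern long dnix(int fcode, ...);   /* assumed varargs entry point */
extern long read(int fd, void *buf, unsigned long n);

enum {                      /* placeholder values, not the real opcodes */
    F_OTQ    = 0x40,
    F_READ   = 0x03,
    F_CANCEL = 0x41,
    F_NOWAIT = 0x8000       /* OR-ed into a Type 1 function code */
};

typedef void (*completion_fn)(struct itrq *);

static char buf[512];

static void read_done(struct itrq *t)
{
    /* t->it_rpar would carry the result, e.g. the byte count. */
}

void example(int data_fd)
{
    int tq = (int)dnix(F_OTQ);      /* create the trap queue (Type 2) */

    /* Issue the read asynchronously: F_NOWAIT is OR-ed into the
     * function code, and the trap queue descriptor and a 32-bit
     * user identifier (here, by convention, the completion routine)
     * are passed along.  The return value is not the byte count but
     * the kernel-assigned request identifier. */
    long req = dnix(F_READ | F_NOWAIT, data_fd, buf, (long)sizeof buf,
                    tq, (long)read_done);

    /* Completions arrive as ordinary reads of the trap queue; it_oid
     * is the user identifier, interpreted here as a function pointer
     * (a long held a full pointer on the 68000). */
    struct itrq t;
    while (read(tq, &t, sizeof t) == (long)sizeof t)
        ((completion_fn)t.it_oid)(&t);

    (void)req;  /* a pending request could be cancelled: dnix(F_CANCEL, req) */
}
</syntaxhighlight>

Because completions were themselves delivered through an ordinary descriptor, the dispatch loop above could in turn be issued asynchronously, which is what made layered event-handling packages within a single process possible.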
=== Compatibility ===

In addition to the native <code>dnix(2)</code> call, a complete set of 'standard' [[libc]] interface calls was available: <code>open(2)</code>, <code>close(2)</code>, <code>read(2)</code>, <code>write(2)</code>, and so on. Besides being useful for backwards compatibility, these were implemented in a manner binary-compatible with the [[NCR Corporation|NCR Tower]] computer, so that binaries compiled for it would run unchanged under DNIX. The DNIX kernel had two trap dispatchers internally, one for the DNIX method and one for the Unix method. Choice of dispatcher was up to the programmer, and using both interchangeably was acceptable. Semantically they were identical wherever functionality overlapped (see the sketch at the end of this section). (On these machines the [[68000]] <code>trap #0</code> instruction was used for the <code>unix(2)</code> calls, and the <code>trap #4</code> instruction for <code>dnix(2)</code>. The two trap handlers were very similar, though the usually hidden <code>unix(2)</code> call held the function code in the processor's D0 register, whereas <code>dnix(2)</code> held it on the stack with the rest of the parameters.)

DNIX 5.2 had no networking protocol stacks internally (except for the thin [[X.25]]-based [[Ethernet]] [[protocol stack]] added by ISC for use by its diskless workstation support package); all networking was conducted by reading from and writing to Handlers. Thus, there was no [[Berkeley sockets|socket]] mechanism, but a <code>libsocket(3)</code> existed that used asynchronous I/O to talk to the TCP/IP handler. The typical Berkeley-derived networking program could be compiled and run unchanged (modulo the usual Unix [[porting]] problems), though it might not be as efficient as an equivalent program that used native asynchronous I/O.
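To illustrate the semantic overlap between the two interfaces (this is not the actual library source), a hypothetical Unix-style <code>read(2)</code> can be written as a thin wrapper over the native call: issued without <code>F_NOWAIT</code>, a Type 1 operation simply blocks, just as the Unix call it emulates would. The <code>dnix()</code> prototype and the opcode value are assumptions, as before.

<syntaxhighlight lang="c">
/* Hypothetical sketch: how a 'standard' read(2) could reduce to the
 * native call.  The real compatibility path went through the trap #0
 * dispatcher, not through a C wrapper like this one. */
extern long dnix(int fcode, ...);   /* assumed varargs entry point */

enum { F_READ = 0x03 };             /* placeholder opcode value */

long compat_read(int fd, void *buf, unsigned long nbytes)
{
    /* No F_NOWAIT bit and no trap queue: the call blocks until the
     * read completes and returns the normal result directly. */
    return dnix(F_READ, fd, buf, (long)nbytes);
}
</syntaxhighlight>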