S) 3.0 IO controllers/interfaces

Q) 3.1 How do IDE/MFM/RLL/ESDI/SCSI interfaces work?

Q) 3.2 How can I tell if I have MFM/RLL/ESDI/IDE/SCSI?
[From: ralf@alum.wpi.edu (Ralph Valentino)]

The most reliable way to tell what type of drive you have is to call the manufacturer with the model number and ask. There is an extensive list of phone numbers in the References section of the FAQ.

That aside, the first thing to check is the number of pins on the drive's connector(s). The second thing to check is the CMOS setup, assuming, of course, that it is in a working system.

SCSI = 1 cable: 50 pins (note 1,2)
usually set up as "not installed" in the CMOS

IDE = 1 cable: 40 pins
no reliable way to tell from the CMOS

RLL = 2 cables: 34 pins & 20 pins
always has 26 sectors per track

MFM = 2 cables: 34 pins & 20 pins
always has 17 sectors per track (note 3)

ESDI = 2 cables: 34 pins & 20 pins (note 4)
usually set up as type #1 in the CMOS and auto-configured at boot time

If you've narrowed it down to RLL/MFM or ESDI but it isn't in a working system, there's no easy way to narrow it down any further just by looking at the drive.

note 1: The QIC-2 tape drive interface also has 50 pins
note 2: To differentiate single ended and differential SCSI, see the scsi-faq
note 3: Some people attempt to set up MFM drives as RLL, with varying success; this method will only tell you what the drive is set up as.
note 4: While ESDI uses the same type of cables as RLL and MFM, the signals are very different - do not connect ESDI to RLL or MFM!
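
The pin-count test above can be sketched as a small lookup table, shown here in Python purely for illustration (the function name is invented):

```python
# Illustrative sketch of the identification table above.  Given the
# pin counts of the drive's cable connector(s), return the candidate
# interface types.  Cabling alone cannot distinguish MFM/RLL/ESDI.
def identify_interface(pin_counts):
    table = {
        (50,): ["SCSI"],                    # note 1: QIC-2 tape also uses 50 pins
        (40,): ["IDE"],
        (20, 34): ["MFM", "RLL", "ESDI"],   # need CMOS/sector info to narrow down
    }
    return table.get(tuple(sorted(pin_counts)), ["unknown"])
```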

Q) 3.3 Do caching controllers really help?
[From: backbone!wayne@tower.tssi.com (Wayne Schlitt)]

The short answer is that if you are using a multi-tasking operating system with a good memory manager, caching controllers can safely be skipped. If you are running DOS or Windows, then *maybe* they will help, but I am not sure that they are a good buy.

There are lots of people who have said "I put a caching controller in my computer, and it runs faster!". This is probably true, but they have never measured the speed increase against putting the same amount of memory into main memory instead. More importantly, caching controllers cost more than non-caching controllers, so for the same money you could add _more_ main memory instead of buying a caching controller.

The following is a shortened version of a much longer article. If you want a copy of the longer article, send me email at wayne@cse.unl.edu.

Why A Multi-Tasking Operating System?
A multi-tasking operating system can allow the application to continue immediately after it does a write, and the actual disk write can happen later. This is known as write behind. The operating system can also read several blocks from the file when the application requests just part of the first block. This is known as read ahead.

When the application requests the block later on, the block will already be there and the OS can then schedule some more reads.

A multitasking operating system is required because these operations cause interrupts and processing after control has been given back to the application.

Basically, operating systems such as DOS, MS-Windows, MacOS and such do not allow true preemptive multitasking and can not do read aheads and write behinds. For these systems, the latency of the disk drive is the most important thing: the application does not regain control until the read or write has finished.

The controller can't speed up the disk.
Remember, the bottleneck is at the disk. Nothing that the controller can do can make the data come off the platters any faster. All but the oldest and slowest controllers can keep up with all but the newest and fastest disks. The SCSI bus is designed to be able to keep *several* disks busy without slowing things down.

Speeding up parts of the system that are not the bottleneck won't help much. The goal has to be to reduce the number of real disk accesses.

First, isn't the caching controller hardware and isn't hardware always faster than software?
Well, yes, there is a piece of physical hardware called the caching controller, but no, the cache is not really "in hardware". Managing a disk is a fairly complicated task, complicated enough that you really can't implement the controller in combinatorial logic.

So, just about all disk controllers and for that matter all disk drives have a general purpose computer on them. They run a little software program that manages the communication between the main CPU and the disk bus, or the disk bus and the disk. Often this CPU is put in with a bunch of other logic as part of a standard cell custom chip, so you might not see a chip that says "Z80" or such.

So, we are really not comparing "hardware" with "software", we are comparing "software on the controller" with "software on the main CPU".

Ok, why can the OS win?
Assume that you have a bunch of memory that you can either put into main memory and have the OS manage the cache, or put on a caching controller. Which one will be better? Let us look at the various cases.

For a cache hit you have:
If the OS does the caching, you just have the OS's cache checking latency.

If the card does the caching, you will have the OS's cache checking latency, plus the I/O setup time, plus the controller's cache checking latency, plus you have to move the data from the card to main memory. If the controller does DMA, it will be taking away from the memory bandwidth that the main CPU needs. If the controller doesn't have DMA, then the main CPU will have to do all the transfers, one word at a time.

For a cache miss, you have:
If the OS does the caching, you have the OS's cache checking latency plus the set up time for the disk I/O, plus the time it takes for the disk to transfer the data (this will be a majority of the time), plus the cost of doing either the DMA or having the CPU move the data into main memory.

The caching controller will have all of the above times, plus its own cache checking latency.

As you can see, the caching controller adds a lot of overhead no matter what. This overhead can only be offset when you get a cache hit, but since you have the same amount of memory whether it sits on the controller or in main memory, you should have the same number of cache hits in either case. Therefore, the caching controller will always give more overhead than an OS managed cache.
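
The argument above can be put in rough numbers. The figures in this sketch are arbitrary placeholders, not measurements; the point is only that the controller path adds terms to both the hit case and the miss case:

```python
# Toy latency model for the argument above.  All numbers are
# arbitrary illustrative units, NOT measurements.
os_check   = 1    # OS cache-check latency (always paid)
io_setup   = 5    # cost of setting up an I/O request to the card
card_check = 2    # controller's own cache-check latency
copy_cost  = 3    # moving data between the card and main memory
disk_xfer  = 100  # actual disk transfer (dominates a miss)

# OS-managed cache:
os_hit  = os_check
os_miss = os_check + io_setup + disk_xfer + copy_cost

# Caching controller:
card_hit  = os_check + io_setup + card_check + copy_cost
card_miss = os_check + io_setup + card_check + disk_xfer + copy_cost

# With equal memory the hit rates are equal, so the controller
# loses on both cases:
assert card_hit > os_hit and card_miss > os_miss
```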

Yeah, but there is this processor on the controller doing the cache checks, so you really have a multi-processor system. Shouldn't this be faster than a single processor? Doesn't this allow the main cpu to do other things while the controller manages the cache?
Yes, this really is a multi-processor system, but multi-processors are not always faster than uni-processors. In particular, multi-processor systems have communication overhead. In this case, you are communicating with the controller using a protocol that is fairly expensive, with OUTB instructions and interrupts and such. The overhead of communicating with this other processor is greater than the overhead of just checking the cache on the main CPU, even if the main CPU is very slow.

The multi-processor aspect just doesn't help out when you are talking about managing a cache. There is just too much communication overhead and too little processing for it to be a win.

OK, but couldn't the caching controller do a better job of managing the cache?
Both the controller and the OS are going to be executing a piece of software, so in theory there isn't anything that the slower CPU on the controller can do that the OS can't do, but the OS can do things that the controller can't do.

Here are some of the things that the OS can do better:
* When you read a block from a file, the OS can read several more blocks ahead of time. Caching controllers often read an entire track to simulate this file read ahead, but the rest of the file isn't always on the same track; only the OS knows where the blocks really are. This can lead to wasted time and cache memory reading data that will never be used.

* In order to improve file system reliability, some writes _must_ complete immediately, and _must_ complete in the order that they are given. Otherwise, the file system structures may not be left in a coherent state if the system crashes.

Other writes can be completed as time is available, and can be done in any order. The operating system knows the difference between these cases and can do the writes appropriately.

Caching controllers, on the other hand, don't know if the write they were just given _must_ be written right away or if it can wait a little bit. If the controller waits when it shouldn't, you are risking your file system and data.

* Sometimes, you want a large disk cache if you are accessing lots of data off the disk. At other times, you want a small disk cache and more memory left to programs. The operating system can balance these needs dynamically and adjust the amount of disk cache automatically.

If you put the memory on a caching controller, then that memory can _only_ be used for disk caching, and you can _never_ use more. Chances are, you will either have too much or too little memory dedicated to the cache at any given time.

* When a process closes a file, the operating system knows that the blocks associated with that file are not as likely to be used again as those blocks associated with files that are still open. Only the operating system is going to know when files are closed; the controller won't. Similar things happen with processes.

* In the area of virtual memory, the OS does a far better job of managing things. When a program accesses a piece of memory, the CPU will do a hardware level check to see if the page is in memory. If the page is in memory, then there is basically no delay. It is only when the page isn't in memory that the OS gets involved.

Even if all of those extra pages are sitting in the caching controller's memory, they still have to be moved to main memory with all the overhead that that involves.

This is why dynamically balancing cache memory against program memory is so important.

What is the "Memory Hierarchy" and how does this relate to caching controllers?
The basic idea of a memory hierarchy is to layer various types of memory, so that the fastest memory is closest to the cpu. Faster memory is more expensive, so you can't use only the fastest type and still be cheap. If a piece of data isn't in the highest (fastest) level of the hierarchy, then you have to check the next level down.

In order for a memory hierarchy to work well, you need to make sure that each level of the hierarchy has much more storage than the level above it, otherwise you won't have a high hit rate.

The hierarchy on a 486 goes something like this:
8 regs << 8k on chip cache << 256k off chip cache << main memory << disk

If you are going to put something between main memory and disk, it needs to be much larger than main memory in order for it to be effective.
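
The sizing rule can be illustrated with a toy average-access-time formula; all the numbers below are made up for illustration:

```python
# Average access time for a two-level hierarchy:
#   t_avg = t_hit + miss_rate * t_miss_penalty
# Numbers are illustrative only (milliseconds).
def avg_access(t_hit, miss_rate, t_miss_penalty):
    return t_hit + miss_rate * t_miss_penalty

disk_ms = 20.0  # raw disk access

# A cache layer that is small relative to the level above it sees
# mostly misses, so its effective time stays close to raw disk time;
# only a high hit rate makes the layer pay off:
small_cache = avg_access(0.1, 0.9, disk_ms)   # 90% misses
big_cache   = avg_access(0.1, 0.2, disk_ms)   # 20% misses
assert small_cache > 18.0   # barely better than no cache at all
assert big_cache < 5.0      # a real improvement
```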

What about all these neat things that a caching controller can do such as elevator seeking, overlapping seeks with reads and writes, scatter/gather, etc...?
These are nice features, but they are all done by either the OS or a good SCSI controller anyway. None of these things are at all related to supporting the cache, so you shouldn't buy a caching controller for just these features.

Ok, you have talked about things like Unix, OS/2 and Windows NT, but what about DOS and MS-Windows?
Well, here things get a lot grayer. First, older versions of DOS have notoriously bad disk cache programs. Since neither DOS nor MS-Windows is a preemptive multi-tasking system, it is much harder to do read ahead. Also, since DOS/MS-Windows users are used to being able to power off their computers at any time, doing write behind is much more dangerous. DOS and MS-Windows also crash much more easily than these other OS's, so people might reboot for many reasons.

Caching controllers usually leave the hard disk light on when they have data that hasn't been written out, and people don't usually power their computer off until that light goes out. This lets the controllers do write behind fairly safely. (But you can still lose power, so this isn't risk free.) They also do crude read aheads by prereading entire tracks.

DOS also runs in real mode, and real mode can only access 640K of memory. This means that a disk cache can be really helpful. Unfortunately, to do a software based disk cache, the CPU has to be switched into protected mode to access memory beyond the 640K boundary, and then switched back into real mode. Intel, however, forgot to make it easy to switch back to real mode. All in all, this switching back and forth ends up being really expensive. This _might_ be more expensive than just using a caching controller; I don't know.

So it is possible that if you configure DOS not to use a cache and get a caching controller, you might come out ahead. I really don't know much about this area; I have not done any real timings of this.

So, when would you ever want to buy a caching controller?
The answer is not too often, but there are a few cases that I can think of:
* You have filled up all your SIMM slots on your motherboard, and in order to add more memory you would have to throw some out. This is a pretty shaky reason. You can always sell your old memory, or move it to another computer. The jump from four 1MB SIMMs to four 4MB SIMMs is large, but you will be much better off in the long run with more main memory.

* You have maxed out your memory and you need it all for programs and data. If you can't put any more memory on the mother board, then you don't have many choices.

* If you have a bunch of slow (100ns-120ns) memory left over from say a 286 or something and you can't use it on your motherboard because it is too slow, then maybe adding it to a caching controller will help. Be careful however, if your hit rates on the caching controller are too low, then you may be just adding overhead without getting any benefits.

* If you are stuck with a bad OS because that's what your applications run on, then you might be better off with a caching controller.

What about those disk drives that come with caches, are they bad too?
Don't confuse caching disk controllers with caches on disk drives. The latter is actually useful. The little CPU on the disk drive has to read every byte that comes off the disk in order to see when the sector that you are interested in has come under the heads and to do any error detection and correction. The disk also has to have buffers in case the bus is busy, and to sync up the speeds of the bus and the heads.

Since all this data is going through the CPU on the disk drive and you have to have a buffer anyway, just making the buffer larger and saving the entire track is an easy win. Saving a couple of the most frequent tracks is also a win.

Most of these caches on the disk drives are fairly small (64k-256k), and a single memory chip will give you about that amount of memory anyway, so you aren't wasting many resources. This also allows the OS to always assume that interleaving is not necessary to get full disk throughput, even if it does a fair amount of processing between disk requests.

Q) 3.4 Do IDE controllers use DMA?
No, they do not. This is a rumor that keeps popping up. This may change on the next revision of the standard.

Q) 3.5 Why won't my two IDE drives work together?
[From: jruchak@mtmis1.mis.semi.harris.com (John Anthony Ruchak)]

Assuming that the drives are attached to the same controller and they work properly when attached one-at-a-time, you probably don't have them configured properly for Master/Slave operation.

When operating 2 IDE drives, one must be designated as "Master" and the other as "Slave." There are jumpers on every IDE drive to configure this. Check your hard drive manuals for the jumper settings for your drives. In general, it doesn't matter which is which - just pick one as master, and make the other slave.

In your CMOS configuration, Drive 1 should have the parameters (heads, cylinders, etc.) that match the drive you set as "Master" and Drive 2's parameters should match those of the "slave" drive. In operation, the Master will appear as drive C: and the slave as drive D:.

Because not all hard drive manufacturers follow the IDE specifications closely enough, drives from 2 different manufacturers may not work well together. In this case, changing master -> slave and slave -> master (along with the appropriate CMOS changes) may help. If it doesn't, then trying two drives from the SAME manufacturer is the only avenue you have left.

Q) 3.6 Which is better, VLB or ISA IDE?
[From: pieterh@sci.kun.nl]

If a simple answer is what you want, then yes, in general VLB IDE controllers are better than ISA ones. If you are purchasing or putting together a computer, the relatively small price difference makes the choice for a VLB controller a sensible one.

However, if you already have an ISA controller and are wondering whether it's worth upgrading to VLB, the answer is not that easy. VLB may be faster in principle; the question is whether you're going to notice it.

The Bottlenecks
Let's take a look at what the limiting factors are in the path the data travels from your drive platter to the CPU.

1. Raw data transfer from the drive platter. To find out what this rate is, you need the spec sheet for your drive. Remember that it is dependent on the cylinder, so a single drive can give different results depending on where on the drive you're testing. Anyway, this transfer rate is 1 to 2 MB/s on most IDE drives, depending on data density and rotational speed.

2. The data has to be digested by the drive's onboard controller, which not only mediates between the drive hardware and the IDE bus, but also manages the buffer cache. Let's hope it's both fast and intelligent (not always the case *sigh*).

3. Data transfer over the IDE/ATA bus (2-3MB/s with standard timing). The actual speed depends on the timing used; some drives and controllers support faster timing. Enhanced IDE (IDE-2) can transfer up to 11 MB/s.

4. Transfer from the interface to the CPU (ISA: max 5 MB/s; VLB: 10-80 MB/s depending on CPU clock, wait states, interface...)

A generic IDE interface is usually not able to get the most out of the ISA and IDE bandwidths (3 and 4); a typical upper limit is about 2 MB/s if you use block transfers (see below), 2.5 MB/s if you're willing to push the ISA bus clock a little (more about that later on).

Still, it's clear that on all but the fastest drives the raw data transfer rate to/from the drive platter (1) will determine the maximum performance you're going to get. If you're getting transfer rates near this limit, you can't significantly improve your throughput whatever you do.
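
In other words, the end-to-end rate is bounded by the slowest stage in the chain. A minimal sketch, using representative rates from the figures above (the exact values vary per drive and system):

```python
# End-to-end throughput is limited by the slowest stage in the
# chain 1-4 above.  Rates in MB/s, representative values only.
stages = {
    "platter":  1.5,   # 1: raw platter transfer (1-2 MB/s typical IDE)
    "ide_bus":  2.5,   # 3: IDE/ATA bus with standard timing
    "isa_bus":  5.0,   # 4: ISA bus maximum
}
bottleneck = min(stages, key=stages.get)
assert bottleneck == "platter"   # the drive itself limits throughput
```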

Tuning Your Hard Disk
Suppose your hard disk throughput is lower than you think is possible with your drive. How can you tune your system to improve hard disk performance? I'll go through points 1-4 again and indicate what you can do to widen the bottlenecks a little.

1. Drive platter to head transfer.
- Sorry, there's nothing you can do short of getting a new drive.

2. The drive controller.
- Many modern drives understand "block transfer", also known as multi-sector I/O or read/write multiple. Although the newest BIOSes have this feature built in, most of us will have to use a driver. More about that at the end of this section.

What is block transfer? Normally, for each sector the computer wants to read from or write to the drive, it has to issue a separate command. When you're transferring 2 MB/s, that means you're sending the drive 4,000 commands each second. Each command has to be issued by the CPU, transferred over the ISA and IDE buses, interpreted and acted upon by the drive's onboard controller. Every such command takes a little time.

By using block transfer mode, it is possible to read or write more than one sector (usually 4 to 32) using a single command. This greatly cuts down command overhead, as you can imagine, and may very well have a dramatic effect on a badly performing system. In most cases, it will improve performance by 5-20%.
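
The command-count arithmetic above is easy to check, assuming 512-byte sectors and, for illustration, a 16-sector block size:

```python
# Command-rate arithmetic for the block transfer discussion above.
SECTOR = 512  # bytes per sector

def commands_per_second(rate_bytes, sectors_per_command=1):
    return rate_bytes // (SECTOR * sectors_per_command)

rate = 2 * 1024 * 1024                        # 2 MB/s
assert commands_per_second(rate) == 4096      # ~4,000 commands/s, as above
assert commands_per_second(rate, 16) == 256   # 16-sector blocks: 16x fewer
```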

Unfortunately, some older drives have inferior support for this feature and actually slow down... or in exceptional cases even hose your data.

3. The IDE bus.
- With most drives you can use faster IDE bus timing, but your interface has to support this. Modern IDE interface chips often have completely programmable timing; these can be exceptionally fast if the software supports this feature and, of course, if the drive can keep up. Some controllers use jumpers to configure timing.

The last word in IDE bus bandwidth is of course the Enhanced IDE bus, which allows exceedingly fast transfers if both drives and interface support it.

If you cannot use improved timing with a new VLB interface, the IDE bus will prove to be as much of a bottleneck as the ISA bus was.

- Not all interfaces are created equal; some are better engineered. With the current VLB hype, there's bound to be a friend who has an old (ISA) interface gathering dust; try that one.

4. The ISA bus.
- The official speed of the ISA bus is about 8 MHz, but most cards, especially modern ones, will work fine at 11 MHz or more (some will even take as much as 20 MHz). If you don't mind experimenting, it may be worthwhile to see if your ISA cards run reliably at a higher bus clock. This is especially interesting if your drive nears the 2MB/s mark.

The BIOS setup has to support this, of course.

*WARNING* Pushing the ISA bus clock beyond spec often works well, but there is always the risk that it DESTROYS YOUR DATA. Make a backup before attempting this and thoroughly verify correct operation before entrusting critical jobs to a "pushed" system.

- Finally, if you need more than the 2.5-3 MB/s or so you can squeeze out of a good ISA controller, VLB is the way to go. Be aware that the controllers on the market are of variable quality; VLB alone isn't going to be enough if you need the highest performance. It has happened that a VLB interface proved to be, all things being equal, slower than the ISA one it replaced!

Take special note of the drivers: they must be stable and support whatever software you intend to use (DOS, Windows 32-bit VxD, OS/2). Without a driver loaded, the VLB interface will perform no better than an ISA controller.

A final word about block transfer drivers. VLB controllers are usually shipped with a TSR that, among other things, enables block transfers (usually designated "Turbo" mode); this is often where most of the performance gain actually comes from. But block mode is equally possible using ISA based interfaces. Popular block transfer drivers are Drive Rocket and DiskQwik. You can get a crippled version of the latter from Simtel.

If you're using Linux, you can use Mark Lord's IDE performance patches to enable block mode. In true multitasking operating systems, block transfers have the additional advantage of greatly reducing CPU load.

Q) 3.7 How do I install a second controller?
[From: strople@ug.cs.dal.ca (PAUL LESLIE STROPLE)]

The following should solve about 95% (9.5?) of second controller problems, if only by telling you it can't be done!

Generic Second Controller Installation:
1) Normally the MFM/IDE/RLL controller is set up as the primary and the ESDI/SCSI as the secondary. One reason for this is that ESDI/SCSI controller cards are usually more flexible in their setup; another is that this method simply seems to work (probably due to reason one).

2) Your primary controller is set up using all the normal defaults:
- Floppy at primary addresses (3F0-3F7).
- Hard disk enabled, at primary addresses (1F0-1F7),
BIOS address C800 and interrupt 14.

3) Your secondary controller is set up as:
- Floppy drives disabled
- Hard disk controller enabled, secondary addresses (170-177) and interrupt 15.
- NOTE: an onboard BIOS address of D400 or D800 can be used if there is a conflict.

4) Computer BIOS Setup:
- Any drive(s) on the primary controller (MFM/IDE), should be entered in the BIOS setup as usual.
- You DO NOT enter the drive types for the hard disks on the secondary controller, even if there are only two drives in the entire system. That is, with one drive on each controller, you only enter the drive type of the hard disk on the primary controller; the 2nd drive type is left as not installed (0).
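
The address assignments from steps 2 and 3 can be summarized in a small sketch (Python is used only as notation; the hex values are the standard AT assignments given above):

```python
# Standard AT resource assignments for two hard disk controllers,
# as given in steps 2 and 3 above.
PRIMARY   = {"ports": range(0x1F0, 0x1F8), "irq": 14}  # 1F0-1F7
SECONDARY = {"ports": range(0x170, 0x178), "irq": 15}  # 170-177

# The port ranges and interrupts are disjoint, which is why the two
# controllers can coexist on the bus:
assert not set(PRIMARY["ports"]) & set(SECONDARY["ports"])
assert PRIMARY["irq"] != SECONDARY["irq"]
```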

Operating System:
If you do the above steps you now have the hardware setup correctly; your only other problem may be with the operating system.

Different OSs handle secondary controllers differently; as well, different controllers handle the same OS differently (scared yet?).

For example: with DOS you may require a device driver (available from the manufacturer or through third party companies, such as Ontrack Computer Systems -- more on Ontrack later). Some flavors of UNIX handle a mixture of controllers better than others (e.g., IA 5.4 had problems mixing ESDI and SCSI controllers under certain conditions).

You should verify that your secondary controller and associated hard drives are working properly (you can try this by installing it as the primary system -- removing the existing system first!). Follow steps 1 to 4 above, pray, and turn on the system! If it still won't work you may need additional drivers. First check with the supplier or manufacturer (I know, for example, that a DTC ESDI controller comes with the DOS drivers included, and it works perfectly).

I am not sure which operating systems are supported by Ontrack. I know that their DOS driver can assist secondary controllers, even allowing two IDEs to co-exist. Likewise, their drivers can also install virtually any drive, regardless of what is supported by the BIOS.

BIG NOTE: The features required in a secondary controller are normally not found on a $30.00 IDE controller. The best thing to do, if possible, is to get a guarantee from the supplier/manufacturer that if it doesn't work (and they can't make it work) they will take it back.

Ontrack supplies a complete range of hard disk products and services, from driver software and data recovery services to media and data conversions (including tape backups). The product I know them from is Disk Manager.

Disk Manager is a utility for hard disk management. It will allow you to set up and install virtually any hard disk, regardless of the disk's layout and the BIOS options available. Disk Manager (version greater than 5.2.X, or thereabouts) includes a driver for co-resident controllers. For the driver to work, the co-resident board must be able to use the above addresses and must be WD1003 AT command set compatible (this includes most IDE and ESDI boards).

DM contains a number of features, including full diagnostics. You may not need to know the disk's geometry, as numerous layouts are stored internally. All you need to do is select the correct model and DM does the rest.

To contact Ontrack: U.S. (800)-872-2599; UK 0800-24 39 96; outside the U.K. (but NOT the U.S.) 44-81-974 5522

Q) 3.8 What is EIDE/Fast-ATA/ATA-2/ATAPI and what advantages do they have?
This topic is posted separately as the "Enhanced IDE/Fast-ATA/ATA-2 FAQ" and archived along side this FAQ. Refer to section one for instructions on retrieving this file.

Newsgroups: comp.sys.ibm.pc.hardware.storage,comp.sys.ibm.pc.hardware.misc, comp.answers,news.answers
Subject: Enhanced IDE/Fast-ATA/ATA-2 FAQ [* of *]
From: pieterh@sci.kun.nl (Maintainer)
Summary: This FAQ addresses issues surrounding Enhanced IDE, ATA-2, ATAPI and Enhanced BIOSes. It includes practical questions, background information and lists of net resources.
Archive-name: pc-hardware-faq/enhanced-IDE

Q) 3.9 Which is better, SCSI or IDE?
[From: ralf@alum.wpi.edu (Ralph Valentino)]

1) SCSI and IDE devices cost approximately the same for the same features (size, speed, access time). Shop around for good prices.

Advantages of IDE:
1) faster response time (low request overhead)
2) hard drive interface is compatible with RLL/MFM/ESDI: any driver for one (including the main system BIOS) will run the other.
3) IDE controllers are considerably cheaper ($15 and up) than SCSI host adapters ($150 and up).
4) Will always be the boot device when mixed with SCSI.

Advantages of SCSI:
1) Supports up to 7 devices per host adapter. This saves slots, IRQ's, DMA channels and, as you add devices, money.

2) Supports different types of devices simultaneously on the same host adapter (hard drives, tape drives, CDROMs, scanners, etc).

3) SCSI devices will work in other systems as well (Mac, Sparc, and countless other workstations and mainframes). If you change platforms in the future, you will still be able to use your SCSI devices.

4) Automatically configures device type, geometry (size), speed and even manufacturer/model number (SCSI-2). No need to look up CMOS settings.

5) Busmastering DMA (available in all but a few cheap SCSI host adapters) decreases amount of CPU time required to do I/O, leaving more time to work on other tasks (in multitasking OS's only).

6) Software portability - drivers are written for the host adapter, not the specific device. That is, if you have a CDROM driver for your host adapter, you can purchase any brand or speed SCSI CDROM drive and it will work in your system.

7) Will coexist with any other type of controller (IDE/RLL/MFM/ESDI) or host adapter (other SCSI cards) without any special tricks. SCSI host adapters do not take up one of the two available hard drive controller port addresses.

8) greater bandwidth utilization (higher throughput) with multiple devices. Supports pending requests, which allows the system to overlap requests to multiple devices so that one device can be seeking while the second is returning data.

9) Ability to "share" devices between machines by connecting them to the same SCSI bus. (note: this is considerably more difficult to do than it sounds).

10) Bridges are available to hook RLL and ESDI drives to your SCSI host adapter. (note: these tend to be prohibitively expensive, though).

Some notes on performance comparisons:
1) With otherwise equal drives, IDE will perform better in DOS due to low command overhead. SCSI, however, will perform better in multitasking OS's (OS/2, Unix, NT, etc). If you see speed comparisons (benchmarks), make sure you know what OS they were run under.

2) Most benchmarks only test one aspect of your system at a time, not the effect various aspects have on each other. For instance, an IDE drive may get faster throughput but hurt CPU performance during the transfer, so your system may actually run slower. Similar confusions arise when comparing VLB and EISA host adapters.

3) When comparing two systems, keep in mind that CPU, memory, cache, and bus speed/type will all affect disk performance. If someone gets great I/O performance with a particular controller/drive combination on his Pentium, you should not expect your 386SX-25 to get the same I/O performance even with the exact same controller/drive combination.

4) Similarly sized or even priced drives may not perform equally, even if they're made by the same manufacturer. If you're going to compare two drives, make sure they have the exact same model number. (IDE drives usually have an 'A' and SCSI drives usually have an 'S' appended to their model number).

Q) 3.10 Can MFM/RLL/ESDI/IDE and SCSI coexist?
The PC is limited to two drive controllers total. SCSI, however, is a "host adapter" and not a drive controller. To the rest of your system, it appears more like an ethernet card than a drive controller. For this reason, SCSI will always be able to coexist with any type of drive controller. The main drawback here is that on most systems, you must boot off a disk on the primary drive controller, if you have one. That means if you have SCSI and IDE in your system, for example, you can not directly boot from the SCSI drive. There are various ways to get around this limitation, including the use of a boot manager.

Q) 3.11 What's the difference between SCSI and SCSI-2? Are they compatible?
The main difference between SCSI and SCSI-2 is some minor new features that the average person will never notice. Both run at a maximum of 5 MB/s. (note: Fast and Wide SCSI-2 will potentially run at faster rates). All versions of SCSI will work together. On power up, the SCSI host adapter and each device (separately) determine the best command set and speed that each is capable of. For more information on this, refer to the comp.periphs.scsi FAQ.

Q) 3.12 How am I supposed to terminate the SCSI bus? Some basic rules on termination:
1. The SCSI bus needs exactly two terminators, never more, never less.

2. Devices on the SCSI bus should form a single chain that can be traced from the device at one end to the device at the other. No 'T's are allowed; stub length should be kept as short as possible.

3. The device at each end of the (physical) SCSI bus must be terminated, all other devices must be unterminated.

4. All unused connectors must be placed _between_ the two terminated devices.

5. The host adapter (controller) is a SCSI device.

6. Host adapters may have both an internal and external connector; these are tied together internally and should be thought of as an "in" and "out" (though direction has no real meaning). If you have only internal or only external devices, the host adapter is terminated; otherwise it is not.

7. SCSI ID's are logical assignments and have nothing to do with where they go on the SCSI bus or if they should be terminated.

8. Just because your incorrectly terminated system happens to work now, don't count on it continuing to do so. Fix the termination.


      internal      external           internal            external
    T------|-----|------T          T------|-------|-----|------|------T
 drive   drive   HA    cdrom      tape  unused unused   HA   drive  drive

        internal             external             external
     T------|-----T      T------|------T          T------T
   drive  drive   HA     HA    tape   cdrom       HA    cdrom

"T" = terminator   "|" = connector (no terminator)   "HA" = Host Adapter
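The rules above boil down to a simple check. As a sketch (a hypothetical helper, not part of any real SCSI utility), model the physical bus as an ordered chain and verify that exactly the two end devices carry terminators:

```python
def termination_ok(chain):
    """Check SCSI termination for a physical bus.

    chain: ordered list of (device_name, is_terminated) tuples, in
    physical order from one end of the bus to the other.  Rules 1-4
    reduce to: the two end devices, and only they, are terminated;
    everything else (used or unused) sits between them.
    """
    if len(chain) < 2:
        return False                      # need two physical ends
    flags = [terminated for _, terminated in chain]
    ends_ok = flags[0] and flags[-1]      # rule 3: both ends terminated
    middle_ok = not any(flags[1:-1])      # rule 1: never more than two
    return ends_ok and middle_ok

# First diagram above: drive--drive--HA--cdrom, ends terminated:
print(termination_ok([("drive", True), ("drive", False),
                      ("HA", False), ("cdrom", True)]))   # True
# Terminating the host adapter in the middle breaks rule 1:
print(termination_ok([("drive", True), ("HA", True), ("cdrom", True)]))  # False
```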

Q) 3.13 Can I share SCSI devices between computers?
There are two ways to share SCSI devices. The first is removing the device from one SCSI host adapter and placing it on a second. This will always work if the power is off and will usually work with the power on, but for it to be guaranteed to work with the power on, your host adapter must be able to support "hot swaps" - the ability to recover from any errors the removal/addition might cause on the SCSI bus. This ability is most common in RAID systems.

The second way to share SCSI devices is by connecting two SCSI busses together. This is theoretically possible, but difficult in practice, especially when disk drives are on the same SCSI chain. There are a number of resource reservation issues which must be resolved in the OS, including disk caching. Don't expect it to 'just work'.

Q) 3.14 What is Thermal Recalibration?
When the temperature of the hard drive changes, the media expands slightly. In modern drives, the data is so densely packed that this expansion can actually become significant, and if it is not taken into account, data written when the drive is cold may not be able to be read when the drive is warm. To compensate for this, many drives now perform "Thermal Recalibration" every degree C (or so) as the drive warms up and then some longer periodic interval once the drive has reached normal operating temperature. When thermal recalibration takes place, the heads are moved and the drive may sound like you are accessing it. This is perfectly normal.

If you're attempting to access the drive when thermal recalibration occurs, you may experience a slight delay. The only time this becomes important is when you're doing real-time operations like recording / playing sound or video. Proper software buffering of the data should be able to hide this from the application, but software seldom does the proper thing on its own. Because of this, a few companies have come out with special drive models for audio/video use which employ special buffering techniques right on the drive. These drives, of course, cost significantly more than their counterparts. Some other drives offer a way to trigger thermal recalibration prematurely (thus resetting the timer), so if your real-time operation is shorter than the recalibration interval, you can use this to assure your operation goes uninterrupted. Disabling or delaying recalibration is dangerous and should be completely avoided. For more information on the thermal recalibration characteristics of a drive, contact the drive manufacturer directly.

Q) 3.15 Can I mount my hard drive sideways/upside down?
Old hard drives always had specific requirements for mounting, while most modern hard drives can be mounted in any orientation. Some modern hard drives still have mounting restrictions; the only way to be sure is to read the documentation that comes with the drive or contact the manufacturer directly and ask. Restrictions may be model specific, so be sure you know the exact model number of your drive. A common misconception is that it is always safe to mount the drive circuit board side up; this is not the case. When in doubt, look it up.

Failure to follow the mounting instructions can result in a shortened lifetime.

Q) 3.16 How do I swap A: and B:?
[From: rgeens@wins.uia.ac.be (Ronald Geens)]

To swap A: and B: drives :
1) Open up your machine and look at the cable that interconnects the two drives.

2) If the cable is twisted, there is no problem: just switch the connectors from one drive to the other and change the BIOS setup.

3) If the cable isn't twisted (which is very, very rare), it's a little harder: leave the cables as they are, but change the jumpers on the drives. (This sounds a lot tougher, but it can usually be done without too much hassle.) When the cable connecting the two drives is just a flat one (like the hard disk cable), you must play with the jumpers on the drives. Most of the time, there is a jumper block with four pins, with the following layout:


Where the * is the 4th unnumbered pin. Normally the A: drive will have a jumper on pin 2 & 4 and the B: drive on 1 & 4. Just change these jumpers around, (i.e. new A: 2&4, new B: 1&4) and change the BIOS configuration.

4) Don't panic if it doesn't work; just make sure all cables are connected properly, and if that doesn't help, restore everything to its old state.
PS. By twisted cable, I mean that between the A: and B: drive, a few wires of the flat cable are turned around.

[From: sward+@CMU.EDU (David Reeve Sward)]

I have found two ways to do this: I originally switched their positions on the cable attached to the controller and changed the BIOS to reflect this. I recently got a GSI Model 21 controller for my IDE drive, and this controller allows you to specify which drive is A: and B: in software (it lights the LEDs in turn and asks which is A: and which is B:). This did not require a cable change (but I still changed my BIOS).

Q) 3.17 My floppy drive doesn't work and the light remains on, why?
If you've played around with the floppy cables at all, chances are you put one of them on backwards. In general, floppy cables aren't keyed to prevent this. Carefully find pin 1 on all floppy drives and the floppy controller and make sure they all line up with pin 1 on the cable. If you have trouble with this, "How do I find pin 1..." elsewhere in this FAQ may be of some help.

Q) 3.18 What is a 16550 and do I need one?
The 16550 is a UART with two 16-byte FIFOs. A UART is the part of a serial port that takes byte-wide data (characters) and converts it to bit-wide (serial) data, and vice versa. The FIFO is a buffer which can hold characters until the CPU is ready to remove them or until the serial line is ready to transmit them. The 'normal' UART in the PC (the 8250 or 16450) only has 1-byte FIFOs. The additional 15 bytes can be useful when the CPU is busy doing other things - if the CPU isn't able to remove data fast enough, it will be lost. The OS or program has to explicitly support the 16550 to make full use of its advantages.

A very important thing to note is that under DOS, the CPU doesn't have anything else to do, so the 16550 is wasted. Only under multitasking operating systems does it really become useful. The 16550 will *not* make your file transfers any faster; it will only prevent data from being lost and relieve your CPU of some overhead. If you notice system performance dropping like a rock when file transfers are occurring, a 16550 may be helpful. If you see re-transmissions (bad packets) or "FIFO overruns" during file transfers under a multitasking OS, try the same thing under DOS - if the errors go away, then chances are a 16550 will be useful. If they remain, then your problem is likely to be elsewhere.
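The benefit can be put in rough numbers. A back-of-the-envelope sketch (assuming common 8N1 framing, i.e. 10 bit times per character on the wire): the FIFO depth multiplies how long the CPU may ignore the port before a character is lost.

```python
def service_deadline_us(bps, fifo_depth, bits_per_frame=10):
    """Microseconds the CPU can ignore the UART before data is lost.

    One character occupies bits_per_frame bit times on the wire
    (8N1: 1 start + 8 data + 1 stop); the FIFO absorbs that many
    characters before an overrun occurs.
    """
    byte_time_us = bits_per_frame / bps * 1_000_000
    return byte_time_us * fifo_depth

# 8250/16450 (1-byte buffer) vs 16550 (16-byte FIFO) at 115200 bps:
print(round(service_deadline_us(115200, 1)))    # ~87 microseconds
print(round(service_deadline_us(115200, 16)))   # ~1389 microseconds
```

At high speeds the single-byte UART gives a busy multitasking OS well under a tenth of a millisecond to respond; the 16550 stretches that by a factor of sixteen.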

Q) 3.19 Are there any >4 channel serial port cards?
[From: wkg@netcom.com (William K. Groll)]

Here is a partial listing of vendors of serial port cards with greater than 4 ports. In almost all cases cables and/or interface panels are required to make the physical connection to the phone lines. Some of these interfaces can be almost as expensive as the cards themselves, so find out what is needed before you order. Prices, if available in their current (late '94 or early '95 issue) catalog/price-list, are given below and do not include cables, connector panels, etc. unless noted. Some also offer driver software, either included with the card or at additional cost. Some of the cards have an on-board processor to handle the communications, while other lower cost boards require the main CPU to perform all of the housekeeping chores.

These are primarily manufacturers/distributors of industrial PCs for embedded applications, but they will sell mail-order in single quantities. I have not personally used _any_ of these cards, but believe the vendors to be reputable businesses. They offer warranties and some technical support, but as always ask before you buy and "caveat emptor".

Another source for information on this type of card is manufacturers of BBS software.

Advantech (408)245-6678, fax 245-8268
PCL-844, 8-port Intelligent RS-232 Card

Axiom (909)464-1881, fax 464-1882
C218, Intelligent 8-port Async Card;
C216, 16-port Intelligent RS-232 Interface Card

Contec (800)888-8884, fax (408)434-6884
COM-8SF(PC), Intelligent RS-232 Interface with 8 ports
(up to 4 boards/system): $495

CyberResearch (800)341-2525, fax (203)483-9024
many models from 4 to 32 ports, $359 to $2895
(appears to include cost of connectors)

Industrial Computer Source (800)523-2320, fax (619)677-0898
many models from 4 to 32 ports, $399 to $1099

Personal Computing Tools (800)767-6728, fax (617)740-2728
various 4, 8, and 16-port cards, $299 to $999

QuaTech (800)553-1170, fax 434-1409
various models with 4 or 8 ports, $299 to $675

Sealevel Systems (803)843-4343, fax 843-3067
3420, 8-port RS-232 card: $499 (includes cable with connectors)

Q) 3.20 Should I buy an internal or external modem?
[From: arnoud@ijssel.hacktic.nl (Arnoud Martens)]

While low speed modems are often only produced as an internal PC card, most modem manufacturers provide two versions of their higher speed modems:

1: internal ISA bus card, specially designed to work with the standard PC bus. You just plug it in and configure the COM port it uses.

2: external modem that has to be connected to the serial ports of your PC (com 1-4), using a serial RS232 cable.

In most cases the functionality of these two is equal. There are however some differences in using, maintaining and buying these modems. It is very difficult to give a definite answer as to which one is better; it completely depends on your own situation. Some of the points that are in favor of an external modem are:
* It has lights showing the status of the connection, which can be useful in those (rare) cases that you have problems with the connection.

* It can be used on a wide range of systems. External modems are connected using an RS-232 cable, a standard that most computer systems support, so you can as easily use your external modem on a Mac, Amiga or Unix box as on your PC.

* It doesn't consume power inside the PC (it uses its own external power adapter) and doesn't produce any heat inside your PC.

On the other hand, the internal modem also has a couple of advantages over an external modem:
* It is always cheaper, somewhere on the order of 10% less than the same external modem.

* It doesn't need special serial hardware, since the serial port is already integrated on the board, which makes it cheaper still.

So basically, if portability of your modem is an issue, you are better off with an external modem. But if you only intend to use the modem with your PC and don't have any power problems, an internal modem is the best choice.

Q) 3.21 What do all of the modem terms mean?
[From: arnoud@ijssel.hacktic.nl (Arnoud Martens)]

A modem (MOdulator-DEModulator) is a device capable of converting digital data from your computer into an analog signal that is suitable for transmission over low-bandwidth telephone lines. A modem thus makes it possible to connect two computers over a telephone line and exchange data between them.

Basically a modem picks up the phone, and dials a number. A modem on the other side will pick up the phone and the two modems will negotiate which protocol to use. When they agree the actual transmission of data can begin.

The major feature of a modem is the speed it can achieve connecting to other modems. This speed is often expressed in baud or in bits per second (bps). The first is a feature of the line: it specifies how much of the bandwidth of the phone channel is used, and is fixed at 2400 baud. A baud is defined as the number of line changes per second. Bits per second is the actual amount of data transmitted in one second. Most modems are capable of sending more than one bit per line transition by using very intelligent signal modulation techniques, so the bps can be eight times higher than the baud rate.
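The baud/bps relationship above is simple arithmetic. A small sketch (the bits-per-symbol figures are illustrative of common V-series modulations, not taken from this FAQ):

```python
def bits_per_second(baud, bits_per_symbol):
    """bps = symbol (line-change) rate x bits encoded per symbol."""
    return baud * bits_per_symbol

# The line stays at 2400 baud; the modulation packs more bits
# into each symbol change:
print(bits_per_second(2400, 4))   # 9600, as in V32
print(bits_per_second(2400, 6))   # 14400, as in V32bis
```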

The modulation techniques that a modem uses are standardized by the ITU-T (the former CCITT), so that modems of different brands can connect to each other as they use the same modulation schemes. These standards are often incorporated in a protocol definition that is referred to by the letter V followed by a number. The most common protocols are:
V21: (300 bps)
V22bis: (2400 bps)
V32: (9600 bps)
V32bis: (14400 bps)

A modem is often advertised only by its fastest protocol, most of these modems "speak" slower protocols as well.

There are also standards on using data compression by the modem, such as MNP5 and V42bis, and error control protocols (V42 and MNP4). These standards can reduce the transmitted data by a factor of four by using advanced compression techniques.

To give you an idea of how fast "fast" is in modem technology: V32bis transmits roughly 1600 characters per second (about a third of a page of text). Transferring a 1 MB file takes about 12 minutes. Using V42bis can speed up transmission to 4000 characters per second for uncompressed data.
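The 1 MB figure checks out with quick arithmetic (~1600 characters/second is 14400 bps divided by roughly 9 bits per character on the wire; real transfers add protocol overhead, which is where "about 12 minutes" comes from):

```python
def transfer_minutes(size_bytes, chars_per_second):
    """Raw transfer time, ignoring protocol overhead."""
    return size_bytes / chars_per_second / 60

# 1 MB over a V32bis link at ~1600 characters per second:
print(round(transfer_minutes(1_048_576, 1600), 1))   # 10.9 min raw, ~12 with overhead
```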

Apart from these standardized protocols there are also faster protocols supported by some modem manufacturers. But remember: anything faster than 14k4 is *not yet* standardized, and manufacturers often use their own modulation schemes that allow only their own modems to communicate at that speed. The most common high speed protocols are:
V32 terbo (19200 bps)
V34 (28800 bps) or Vfast.

The standard for V34 is still being worked on; it is expected to be released sometime in 1994. Some modem manufacturers already sell modems implementing the (preliminary) V34 standard. If you are serious about buying a fast modem, the manufacturer should provide upgradability to this standard.

When you use your modem it is important to differentiate between the command mode and the connect mode of your modem. When you are connected to another modem, everything you send to the modem will be transmitted to the other modem. In command mode, everything you type will be received and interpreted by the modem. Command mode allows you to change the default settings for your modem.

In command mode it is likely that your modem will respond to the Hayes AT command set. "AT commands" all have the prefix AT and can be used to change the (default) settings of your modem. To check if your modem works, fire up a terminal program (such as kermit), connect to your modem (in kermit: c [ENTER]) and issue AT [ENTER]; if your modem works it should respond with OK. For a list of all "AT commands" see the manual of your modem, as most AT commands are modem specific.

If you buy a fax-modem, you should pay attention to a couple of things. First, the modem must support Class 2 fax commands; furthermore, automatic fax mode selection is a big plus. That means if you receive a call, the modem is capable of detecting whether it is a fax message or a modem connection and acting accordingly (starting up a fax receiving program or spawning something like a login process on the connection).

Finally, there is no single best modem to get; brands, quality and prices change very fast. If you are interested in buying one, subscribe to the newsgroup comp.dcom.modems; most postings in this group are very brand oriented and you will quickly recognize which users are satisfied with their modems and which are not.

Q) 3.22 Why does my fast modem connect at a lower speed?
For 28.8 modems that connect at lower speeds such as 22, 24 or 26.4, this is perfectly normal. The usable channel capacity of the telephone system with an ideal connection is just over 28.8k. In reality, you'll very rarely see a 28.8k connection, so don't expect it. When the two modems connect, they will evaluate the connection quality and pick a corresponding speed. If you have your modem set up correctly, it will retrain every once in a while and increase or decrease your connection speed based on the current line quality.

For modems that connect at considerably lower than expected speeds (such as 2400 for a 9600+ modem), there are two possibilities. The first possibility is that the remote modem can't handle the higher speed. There is nothing you can do about this except call a faster modem. The other possibility is that you have your serial port / comm software set up incorrectly.

When you connect your PC to another machine through modems, there are actually three connections being made: PC1 to modem1, modem1 to modem2 and modem2 to PC2. The speed of the modem (2400,9600,14.4,28.8) is the rate (more or less) at which the modem1 will speak to modem2. The PC to modem connections are based on the speed your COM port is set to. If you set the COM port speed to 2400, the modem to modem speed will drop accordingly. For this reason, you want to set the COM port speed at least as high as the modem to modem speed. In actuality, the modem to modem protocol may support compression and achieve data transfers faster than the connection speed so you want to set your COM port higher than your modem to modem connection. For a 28.8 modem, set the COM port to 38.4k, 57.6k, or 115k. While higher is always (potentially) better, some software/operating systems have trouble with very high COM port speeds, so start with 38.4k and see how it goes.

Q) 3.23 This is covered in the comp.sys.ibm.pc.soundcard FAQ, archive name: PCsoundcard/soundcard-faq. Please refer to this document for more information.

Q) 3.24 Where can I find EISA/VLB sound and IO cards?
Chances are that you won't be able to find them anywhere, and if you do, they won't be worth the money. Sound and IO cards have very low bandwidth requirements, over 10 times lower than the ISA bandwidth and over 60 times lower than the EISA bandwidth. For this reason, there is no advantage in placing them on the more expensive EISA/VLB cards when the less expensive ISA will more than suffice, especially considering that all ISA cards will work in an EISA/VLB slot.

Q) 3.25 Where can I get DOS drivers for my ethernet card?
[From: ralf@alum.wpi.edu (Ralph Valentino)]

The first thing you need is a low-level packet driver for your ethernet card. This driver links your card-specific functions to a common software interface, allowing higher level software to read and write to your ethernet card without knowing any of the hardware specifics. Ethernet cards usually come with a packet driver. If you didn't get one, try contacting the card manufacturer (they may have a www/ftp site, see the references section of this FAQ).

Another option is using publicly available packet drivers. The Crynwr packet driver collection is free, supports a significant number of cards and comes with sources and documentation. You can find this package in the "pktdrvr" subdirectory on any of the Simtel mirrors.

The files of interest are:
pktdrvr11.zip - executable
pktdrvr11a.zip pktdrvr11b.zip pktdrvr11c.zip - sources

The included instructions explain how to install them. The file "software.doc" (within the zip archive) contains pointers to a number of other useful protocol drivers, which is the next thing you need.

The protocol driver sits on top of the packet driver and implements one of the many standard protocols (IPX, TCP/IP, etc).

IPX protocol drivers, needed for many multiplayer games, can be found in the same "pktdrvr" directory as the Crynwr packet drivers. Files of interest are:
novel.zip - IPX protocol driver from BYU
intelpd.zip - IPX protocol driver from Intel (newer)
Either of the above will do.

For a quick TCP/IP implementation allowing telnet and file transfers with both Unix and other DOS machines with very little setup, try Kermit. You can get Kermit from the Columbia University distribution site:
- everything you need for DOS and more

To make a connection, type:
set tcp/ip address *.*.*.* (where *.*.*.* is your IP address)
set port tcp/ip *.*.*.* (where *.*.*.* is the destination IP address)

Remember to type "set file type binary" at the Kermit prompt on both ends if you are transferring binary files (anything but unarchived text). See the documentation and on-line help for file transfer optimization as well as how to set the rest of the TCP/IP related parameters (netmask, broadcast address, bootp server, nameserver, etc) if you are interfacing to an existing network.

Another program of interest is NCSA Telnet.
ftp.ncsa.uiuc.edu:PC/Telnet/tel23bin.zip - binaries
ftp.ncsa.uiuc.edu:PC/Telnet/tel23src.zip - sources

Many commercial protocol drivers/applications are also available, including Windows for Workgroups, PC/TCP, WIN/TCP and PC-NFS, to name a few. See your local software store for information on these.

Q) 3.26 How does the keyboard interface work?
[From: jhallen@world.std.com(Joseph H Allen)]

The IBM keyboard is connected to the computer through a serial interface similar to a COM port. When you press a key, the keyboard sends a "scan-code" for that key to the computer. When you release the key, the keyboard sends a release code to the computer. If you hold down one key and press and release another key, the computer will receive the scan-code for the held key and a scan and release code for the other key. Since the release code for the held key was not received, the computer knows that the held key was down while the other key was pressed. In this way, the computer can handle the Shift, Alt and Ctrl keys (and any key could work like a shift key, since all keys work alike). The ROM BIOS in the computer buffers the data from the keyboard, translates the scan-codes to ASCII and handles the operation of the shift and lock keys. The keyboard itself also has a small buffer and there is hardware flow-control for preventing overruns. All of this seems simple and quite elegant, but by the time we get to the AT keyboard the details of the implementation are so complicated as to ruin an otherwise ideal keyboard.

The XT keyboard's interface almost captures the above elegance (indeed it is the only elegant thing about the XT, IMHO). The interface uses a 5-pin DIN connector with these signal assignments:
1 CLK/CTS (open-collector)
2 RxD
5 +5V

When the keyboard has a byte to send to the computer, it shifts 9 bits out to the data line (RxD) with nine clock pulses on the CLK line. The data format is 1 start bit, followed by 8 data bits. The baud rate is roughly 2000 bits per second and is not precisely defined. Once a byte is completely transmitted, the computer holds the Clear-To-Send (CTS) line low to prevent the keyboard from sending any more bytes until the keyboard interrupt handler reads the current one. Usually a simple 9-bit clearable TTL shift register is used to receive keyboard data. The 9th bit of the shift register is used to drive an open-collector buffer connected to the CTS line. When the start-bit gets all of the way through the shift register, it holds the CTS line low itself. Once the CPU reads the assembled byte, it has only to clear the shift register to release the CTS line and allow another byte to be received. Three TTL chips or a single PAL can implement an entire XT keyboard interface.

The data bytes which the XT sends are also simple. Codes 0-127 are the scan-codes. Codes 128-255 are the release codes- they're the same as the scan codes, but with the high bit set. The XT keyboard has only 84 keys, so not all of the scan-codes are used.
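This scheme is trivial to express in code: the release code is just the scan-code with the high bit set. A small sketch (the 0x1E example key assignment is a conventional XT value, stated here as an illustration):

```python
def xt_release_code(scan_code):
    """XT keyboards: codes 0-127 mean 'key pressed'; releasing the
    same key sends the scan-code with bit 7 set (code + 128)."""
    if not 0 <= scan_code <= 127:
        raise ValueError("XT scan-codes fit in 7 bits")
    return scan_code | 0x80

# Pressing then releasing one key produces the code pair (n, n+128):
print(hex(xt_release_code(0x1E)))   # 0x9e
```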

The only problems with the XT keyboard are the lock-status lights (Caps-lock, Scroll-lock and Num-lock) and the key repeat mechanism. The lock-status lights can get out of sync with the computer's idea of which lock keys are activated, but this only happens if someone resets the keyboard by unplugging it temporarily. When you hold a key down long enough, the keyboard starts repeating the scan-code for that key. The release code is still only transmitted once, when the key is released. The problem here is that the delay to the start of the repeats and the repeat rate were made too slow. Of course, the keyboard really doesn't have to handle repeat at all, since the computer knows when keys are pressed and released and has a timer itself. Old XT keyboard TSRs allowed you to adjust the repeat delay and rate by duplicating the key repeat mechanism in the computer.

Once IBM found that it had a nearly perfect keyboard it, of course, decided that it had to be almost completely redesigned for the AT. The keyboard didn't have to be redesigned- there were enough extra scan-codes for the AT's 101 key keyboard and the repeat mechanism could simply have been moved to the BIOS. But no, they had to redesign everything. Sigh.

The AT uses a 5-pin DIN and the PS/2 uses a smaller connector with the same signals:
1 CLK/CTS (open-collector)
2 RxD/TxD/RTS (open-collector)
3 Not connected or Reset
5 +5V

Now the interface is bi-directional. When the computer wants to send a byte to the keyboard, it asserts RTS and releases CTS. If you're lucky, the keyboard isn't deciding to transmit at the same time and it responds by giving 10 clock pulses (at about 10000 baud) on the CLK line. The computer shifts a frame out on TxD on rising clock edges. The frame format is now 1 start bit, 8 data bits and 1 odd parity bit. The keyboard takes RTS being held low as the first start bit, and the first data bit should be sent on TxD after the first clock edge is received. Yes, now you need a full UART for the keyboard interface since you have to both transmit and receive and generate and check parity (but it's still not RS-232- that would have been too logical). Why do you need parity checking on a three foot long keyboard cable? Because collisions can occur since the lines are so overloaded with signals with different meanings and parity provides the means for detecting these collisions.
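The odd parity carried by the AT frame is easy to sketch: the parity bit is chosen so that the nine bits (8 data + parity) contain an odd number of ones, which lets either side detect a single corrupted bit or a collision:

```python
def odd_parity_bit(data_byte):
    """Parity bit for an AT keyboard frame: chosen so the 8 data
    bits plus the parity bit contain an odd number of ones."""
    ones = bin(data_byte & 0xFF).count("1")
    return 0 if ones % 2 == 1 else 1

print(odd_parity_bit(0xAA))   # data has 4 ones -> parity bit 1
print(odd_parity_bit(0x01))   # data has 1 one  -> parity bit 0
```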

The AT documentation says that pin 3 is "reserved", so the keyboard has to provide its own reset. But on the original AT, pin 3 was still Reset and IBM's own keyboards at that time needed Reset (original AT keyboards won't work on some old clones because of this). Don't ask me... I don't understand why they did this.

The protocol on the keyboard interface is now much more complicated. These bytes are defined:

ED Set LEDs depending on byte (bit 0 is Scroll lock, bit 1 is Num lock, bit 2 is Caps lock)
EE Echo EE (for testing?)
F0 Select mode 1, 2 or 3
F2 Send keyboard I.D.
F3 Set repeat delay and rate (byte is: 0ddbbaaa, delay is (dd+1)*250 msec, rate is (8+aaa)*2^bb*4 msec)
F4 Clear buffer
F5 Restore default settings and wait for enable
F6 Restore default settings
FA Acknowledge
FE Error- please retransmit
FF Reset keyboard

Status Returns
00 Buffer overflow
AA Self-test passed
F0 Release code
FA Acknowledge last command
FD Self-test failed
FC Self-test failed
FE Last command in error; re-send
E0 scan/release code Extended keys in Mode 2

The computer and keyboard must acknowledge each command and key code with either FA if there was no error, or FE if the last command/key-code should be re-sent. There are three modes of operation for the keyboard, depending on which scan code assignments you want (these can often be set by a switch on the back of the keyboard, except that if mode 1 is selected from the switch, the protocol is eliminated and the keyboard works exactly like an original XT keyboard - newer keyboards only support modes 1 and 3). In mode 1, the keyboard gives XT scan-codes. The keyboard handles the cursor keypad (which didn't exist on the XT) by simulating pressing or releasing a shift key (depending on whether shift or num-lock are pressed) and sending codes from the numeric keypad. Mode 2 works like mode 1, except that when the keyboard does the weird stuff with the numeric keypad it prefixes everything with E0, and the release codes are the scan-codes prefixed with F0. In mode 3, each key gets a unique code and the release codes work as in mode 2: the release codes are the scan-codes prefixed by F0.

When the AT keyboard is first reset it's supposed to send an AA if its self-test passed or FD or FC if it failed. But before it does this, it sends a continual stream of AAs with the parity incorrect. Once the computer sends an FE to indicate that there is a parity error, the keyboard stops sending bad AAs and sends a correct AA or an FD or FC. This sounds like someone made a quick fix in the keyboard firmware for mis-matched reset timing (the keyboard always finishes resetting before the computer so the computer could miss the AA/FD/FC).
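The F3 repeat-rate encoding listed in the command table above (0ddbbaaa) can be decoded with a few shifts and masks, per the formulas the FAQ gives:

```python
def decode_typematic(byte):
    """Decode an F3 argument byte (0ddbbaaa):
    delay before repeating = (dd+1) * 250 ms,
    repeat period          = (8+aaa) * 2**bb * 4 ms."""
    aaa = byte & 0x07
    bb = (byte >> 3) & 0x03
    dd = (byte >> 5) & 0x03
    delay_ms = (dd + 1) * 250
    period_ms = (8 + aaa) * (2 ** bb) * 4
    return delay_ms, period_ms

print(decode_typematic(0x00))   # (250, 32): shortest delay, fastest repeat
print(decode_typematic(0x7F))   # (1000, 480): longest delay, slowest repeat
```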

Q) 3.27 Can I fake a keyboard so my computer will boot without it?
[From: jhallen@world.std.com (Joseph H Allen)]

The IBM Keyboard - how do you use a computer without a keyboard?
Sometimes a PC needs to be set up as a "turn-key" system with no keyboard for security reasons, or simply because the application doesn't need a keyboard. This causes a dead-lock problem when the system is booting: The BIOS will detect that there is no keyboard and display the message "keyboard failure - press F1 to continue," and the system becomes stuck.

There is usually a BIOS set-up option for disabling the keyboard test. Check the manual for your motherboard.

If your BIOS does not have this option, you're essentially screwed because there's no simple solution. You can't wire the DIN to fake the existence of a keyboard since the BIOS checks for a self-test result code generated by the keyboard. You have to implement a small protocol (byte-by-byte handshaking and ACK/NAK) to simulate a keyboard up to its self test. There are adaptors available which contain a small microcontroller programmed to do this. Another solution is to replace your BIOS with one which has the keyboard test disable option. However, you have to find one which matches your motherboard.

Ralph Valentino (ralf@worcester.com) (ralf@alum.wpi.edu) Senior Design Engineer, Intrinsix Corp.
