DG/UX 1.00, released in March 1985, was based on UNIX System V Release 2 with additions from 4.1BSD. By 1987, DG/UX 3.10 had been released, with 4.2BSD TCP/IP networking, NFS, and the X Window System included. DG/UX 4.00, in 1988, was a comprehensive redesign of the system, based on System V Release 3, and supported symmetric multiprocessing on the Eclipse MV. The 4.00 filesystem was based on the AOS/VS II filesystem and, using the logical disk facility, could span multiple disks. DG/UX 5.4, released around 1991, replaced the legacy Unix file buffer cache with unified, demand-paged virtual memory management. Later versions were based on System V Release 4.
On the AViiON, DG/UX supported multiprocessor machines at a time when most variants of Unix did not. The operating system was also more complete than some other Unix variants; for example, it included a full C compiler (gcc) and a logical volume manager. The OS was small and compact, yet rich in features, and was simple to install without requiring vast amounts of memory or processing power. For example, a six-way Pentium Pro-based AViiON could support several hundred users on text terminals.
The volume manager built into the OS was simple but very powerful. All disk administration could be performed online, without taking any file system offline, including extending, relocating, mirroring, and shrinking volumes. The same operations could be performed on the swap area, allowing in-place migrations of disk storage without downtime. As early as 1991, DG/UX 5.4 supported filesystem shrinking, "split mirror" online backup, filesystems up to 2 TB, and filesystem journaling; few vendors offered comparable features at the time.
DG/UX had a high-performance, stable clustered filesystem. CLARiiON storage arrays were attached via high-voltage differential SCSI controllers and SCSI hubs. Each server had dual SCSI controllers for failover; both controllers were masters on the same bus at the same time. The clustered filesystems were NFS-mounted from the cluster master's floating IP address. Each cluster node wrote file data directly over the SCSI bus, while the orchestration metadata (the inode tables) was written by each cluster member through the NFS mount.