Several hypercube variations have been proposed in the literature. Many of them were developed to improve the topological properties of the hypercube. However, the hypercube already has very good topological properties, so the viability of such variations is questionable; in fact, they often increase its already high VLSI complexity. Other variations employ hypercube building blocks that contain a special communication processor; the building blocks are connected together through these communication processors. Not only are such structures irregular, but for large building blocks the communication processors become substantial bottlenecks.
In contrast, I have introduced the family of reduced hypercube (RH) interconnection networks, which have smaller VLSI complexity than hypercubes of the same size (i.e., with the same number of processors). RHs are produced from regular hypercubes by a uniform reduction in the number of channels (edges) attached to each processor (node).* This edge-reduction technique yields networks that have lower VLSI complexity than hypercubes while maintaining, to a large extent, the powerful hypercube properties. Because of their reduced VLSI complexity, RHs facilitate the construction of massively parallel systems with powerful processors.
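To make the edge-reduction idea concrete, the following sketch builds neighbor lists for a hypothetical reduced hypercube. The selection rule used here (each node keeps its m lowest-dimension edges plus one higher-dimension edge chosen by its m low-order address bits) is an illustrative assumption, not the exact RH definition; it only shows how a uniform reduction of node degree from k to m+1 can be carried out consistently.

```python
# Illustrative sketch of uniform edge reduction on a hypercube.
# NOTE: the selection rule below is an assumption for demonstration only;
# it is not the exact RH construction, but it shows how every node's
# degree can be cut from k to m+1 in a consistent (symmetric) way.

def hypercube_neighbors(node: int, k: int) -> list[int]:
    """All k neighbors of `node` in a k-dimensional binary hypercube."""
    return [node ^ (1 << d) for d in range(k)]

def reduced_neighbors(node: int, k: int, m: int) -> list[int]:
    """Keep the m lowest-dimension edges, plus one higher-dimension edge
    chosen by the node's m low-order address bits (assumed rule)."""
    kept = [node ^ (1 << d) for d in range(m)]
    if k > m:
        # Flipping a high dimension leaves the low-order bits unchanged,
        # so both endpoints pick the same dimension: edges are symmetric.
        d = m + (node & ((1 << m) - 1)) % (k - m)
        kept.append(node ^ (1 << d))
    return kept

if __name__ == "__main__":
    k, m = 4, 2
    for v in range(2 ** k):
        nbrs = reduced_neighbors(v, k, m)
        assert len(nbrs) == m + 1                                 # uniform degree m+1 < k
        assert all(v in reduced_neighbors(u, k, m) for u in nbrs)  # symmetric edges
```

Under this assumed rule the reduced network is well defined and (m+1)-regular, retaining a fraction (m+1)/k of the hypercube's edges, which is the kind of uniform channel saving that lowers VLSI complexity.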
Extensive comparison of RH interconnection networks with conventional hypercubes of the same size has shown that they achieve comparable performance. It has also been shown that any RH can optimally simulate, simultaneously, several popular cube-connected-cycles (CCC) interconnection networks. Many algorithms have been developed for CCC networks, and these algorithms can easily be adapted for execution on RH systems. The performance of RHs on important algorithms, such as data broadcasting/reduction, prefix computation, and sorting, has been investigated, and the results are very encouraging. Additionally, techniques have been developed for efficiently embedding linear arrays, multidimensional meshes, and binary trees into RHs; these are data structures commonly used in the development of parallel algorithms. Finally, generalized reduced hypercube interconnection networks have been introduced for even greater versatility.
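As one concrete instance of the embedding techniques mentioned above, the classic reflected-Gray-code mapping embeds a linear array into a full binary hypercube with dilation 1, i.e., consecutive array elements land on adjacent nodes. The sketch below shows only this standard full-hypercube embedding; adapting it to the missing edges of an RH requires the actual RH construction and is not attempted here.

```python
# Standard reflected-Gray-code embedding of a linear array into a
# k-dimensional hypercube: array position i maps to node gray(i), and
# consecutive positions map to nodes that differ in exactly one bit.

def gray(i: int) -> int:
    """i-th reflected binary Gray code."""
    return i ^ (i >> 1)

def embed_linear_array(k: int) -> list[int]:
    """Dilation-1 embedding: array position i -> hypercube node gray(i)."""
    return [gray(i) for i in range(2 ** k)]

if __name__ == "__main__":
    path = embed_linear_array(3)           # [0, 1, 3, 2, 6, 7, 5, 4]
    assert sorted(path) == list(range(8))  # every node used exactly once
    for a, b in zip(path, path[1:]):
        assert bin(a ^ b).count("1") == 1  # adjacent nodes in the hypercube
```

The same idea extends to multidimensional meshes by applying a Gray code independently along each mesh dimension.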
* An RH can also be obtained by substituting a hypercube for each node in another hypercube; distinct subcubes in the former system are then used to implement connections in distinct dimensions of the latter hypercube.