(1st-March-2021)
• Another approach to dealing with the PC is to work with it in partnership.
• Accelerator cards reside in the expansion slots and are used to speed up the NNW computations.
• Cheaper than NeuroComputers.
• Usually based on NNW chips, but some just use fast digital signal processors (DSPs) that do very fast multiply-accumulate operations.
• Examples:
• IBM ZISC ISA and PCI Cards:
• ZISC implements an RBF architecture with RCE learning (more ZISC discussion later); a small sketch of the RCE/RBF idea appears after this examples list.
• ISA card holds up to 16 ZISC036 chips, giving 576 prototype neurons.
• PCI card holds up to 19 chips for 684 prototypes.
• PCI card can process 165,000 patterns/sec, where each pattern is a vector of 64 8-bit elements.
• California Scientific CNAPS accelerators:
• Runs with CalSci's popular BrainMaker NNW software.
• Available with either 4 or 8 chips (16 PEs/chip), giving 64 or 128 total PEs.
• Up to 2.27 GCPS; see their Benchmarks.
• Speeds can vary depending on transfer speeds of particular machines.
• Hardware and software included.
• DataFactory NeuroLution PCI Card:
• Contains up to four SAND/1 neurochips.
• Cascadable SAND neurochips use a systolic architecture to do fast 4x4 matrix multiplies and accumulates.
• Four parallel 16-bit multipliers and eight 40-bit adders execute in one clock cycle; the clock rate is 50 MHz.
• With 4 chips, the peak performance of the board is 800 MCPS (4 chips × 4 multipliers × 50 MHz).
• Used with the NeuroLution Manager and the Connect scripting language.
• Feedforward neural networks with a maximum of 512 input neurons and three hidden layers.
• The activation function of the neurons can be programmed in a lookup table (see the sketch after this list).
• Kohonen feature maps and radial basis function networks are also implemented.
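To make the ZISC entry concrete, here is a minimal sketch of RCE learning over RBF-style prototype neurons, assuming L1 (Manhattan) distances, per-prototype influence fields, and the 64-element 8-bit pattern format quoted above. It illustrates the idea only; it is not the ZISC036's actual microarchitecture or programming interface, and the initial influence value is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>

    #define VEC_LEN    64   /* 64 8-bit elements, as in the ZISC pattern format */
    #define MAX_PROTOS 576  /* prototype capacity quoted for the 16-chip ISA card */

    typedef struct {
        unsigned char pattern[VEC_LEN]; /* stored prototype vector        */
        int influence;                  /* radius of the firing region    */
        int category;                   /* class label for this prototype */
    } Prototype;

    static Prototype protos[MAX_PROTOS];
    static int n_protos = 0;

    /* L1 (Manhattan) distance between an input vector and a prototype. */
    static int l1_distance(const unsigned char *a, const unsigned char *b)
    {
        int d = 0;
        for (int i = 0; i < VEC_LEN; i++)
            d += abs((int)a[i] - (int)b[i]);
        return d;
    }

    /* Every prototype whose distance falls inside its influence field
       "fires"; return the category of the closest firing prototype, or
       -1 if none fires (the "unknown" response). */
    static int classify(const unsigned char *x)
    {
        int best_cat = -1, best_dist = -1;
        for (int i = 0; i < n_protos; i++) {
            int d = l1_distance(x, protos[i].pattern);
            if (d <= protos[i].influence && (best_dist < 0 || d < best_dist)) {
                best_dist = d;
                best_cat = protos[i].category;
            }
        }
        return best_cat;
    }

    /* RCE-style learning step: commit a new prototype if the example is
       not already recognized correctly, and shrink the influence field of
       any prototype that fires with the wrong category. */
    static void learn(const unsigned char *x, int category, int init_influence)
    {
        int recognized = 0;
        for (int i = 0; i < n_protos; i++) {
            int d = l1_distance(x, protos[i].pattern);
            if (d <= protos[i].influence) {
                if (protos[i].category == category)
                    recognized = 1;
                else if (d > 0)
                    protos[i].influence = d - 1;  /* shrink to exclude x */
            }
        }
        if (!recognized && n_protos < MAX_PROTOS) {
            Prototype *p = &protos[n_protos++];
            for (int i = 0; i < VEC_LEN; i++) p->pattern[i] = x[i];
            p->influence = init_influence;
            p->category = category;
        }
    }

    int main(void)
    {
        unsigned char a[VEC_LEN] = {0}, b[VEC_LEN] = {0};
        b[0] = 200;                    /* make b differ strongly from a */
        learn(a, 0, 1000);
        learn(b, 1, 1000);
        printf("class of a: %d, class of b: %d\n", classify(a), classify(b));
        return 0;
    }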
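The SAND/1 bullets describe an integer multiply-accumulate datapath feeding a programmable activation lookup table. The sketch below shows that combination in software, assuming 16-bit inputs and weights, a wide accumulator standing in for the 40-bit hardware adders, and a hypothetical 256-entry sigmoid table; the scaling of the accumulator into a table index is an illustrative choice, not the SAND/1's actual scheme.

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    #define LUT_SIZE 256   /* hypothetical 256-entry activation table */

    /* Programmable activation lookup table, filled here with a sigmoid in
       Q15 fixed point; any other function could be loaded instead. */
    static int16_t act_lut[LUT_SIZE];

    static void build_sigmoid_lut(void)
    {
        for (int i = 0; i < LUT_SIZE; i++) {
            /* map the table index onto the input range [-8, 8) */
            double x = ((double)i - LUT_SIZE / 2) * 16.0 / LUT_SIZE;
            act_lut[i] = (int16_t)(32767.0 / (1.0 + exp(-x)));
        }
    }

    /* One feedforward layer: 16-bit inputs and weights are multiplied into
       a wide accumulator (64-bit here, 40-bit in the SAND datapath), then
       the accumulator is scaled into an index for the activation LUT. */
    static void layer_forward(const int16_t *in, int n_in,
                              const int16_t *w,     /* n_out x n_in, row-major */
                              int16_t *out, int n_out)
    {
        for (int j = 0; j < n_out; j++) {
            int64_t acc = 0;
            for (int i = 0; i < n_in; i++)
                acc += (int32_t)in[i] * w[j * n_in + i];  /* multiply-accumulate */
            /* crude requantization of the accumulator into a table index */
            int idx = (int)(acc / 65536) + LUT_SIZE / 2;
            if (idx < 0) idx = 0;
            if (idx >= LUT_SIZE) idx = LUT_SIZE - 1;
            out[j] = act_lut[idx];
        }
    }

    int main(void)
    {
        build_sigmoid_lut();
        int16_t in[4]    = { 1000, -2000, 3000, 500 };
        int16_t w[2 * 4] = {  3,  1, -2,  4,
                             -1,  2,  2, -3 };
        int16_t out[2];
        layer_forward(in, 4, w, out, 2);
        printf("outputs: %d %d\n", out[0], out[1]);
        return 0;
    }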