When I got my first job at Sony, I was assigned to graphics library development for the PlayStation 2 gaming system.
My job at that time consisted of writing code in an assembly-like language called microcode, packing that code together with polygon and texture data, sending it all to the Vector Units (effectively the GPU of that system) via DMA, and letting the GPU do the rest of the work while keeping CPU involvement to a minimum.
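To give a rough feel for that "pack and send" workflow, here is a minimal C sketch. Everything in it is hypothetical: the `packet_header_t` struct, the `pack_packet` function, and the buffer layout are invented purely for illustration and bear no resemblance to the actual PS2 SDK, its DMA tag format, or its VIF codes.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for a DMA packet header.
 * Invented for illustration only; the real PS2 DMA tags look
 * nothing like this. The point is the pattern: pack code and
 * data together, then hand the whole thing to the coprocessor. */
typedef struct {
    uint32_t microcode_size;  /* bytes of the VU program in the packet */
    uint32_t vertex_size;     /* bytes of polygon/texture data         */
} packet_header_t;

/* Pack microcode and vertex data into one contiguous buffer that a
 * DMA controller could stream to the vector unit in a single transfer. */
static size_t pack_packet(uint8_t *dst, size_t dst_cap,
                          const uint8_t *microcode, size_t microcode_size,
                          const uint8_t *vertices, size_t vertex_size)
{
    size_t total = sizeof(packet_header_t) + microcode_size + vertex_size;
    if (total > dst_cap)
        return 0;

    packet_header_t hdr = { (uint32_t)microcode_size, (uint32_t)vertex_size };
    memcpy(dst, &hdr, sizeof hdr);
    memcpy(dst + sizeof hdr, microcode, microcode_size);
    memcpy(dst + sizeof hdr + microcode_size, vertices, vertex_size);
    return total;
}

int main(void)
{
    uint8_t microcode[64] = {0};   /* placeholder VU program      */
    uint8_t vertices[256] = {0};   /* placeholder polygon/UV data */
    uint8_t packet[1024];

    size_t n = pack_packet(packet, sizeof packet,
                           microcode, sizeof microcode,
                           vertices, sizeof vertices);
    /* In the real workflow this buffer would be handed off to the DMA
     * controller, leaving the CPU free until the transfer completes. */
    printf("packed %zu bytes (header + microcode + vertices)\n", n);
    return 0;
}
```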
I had to adopt every technical tweak available to squeeze the best performance and the highest frame rate out of that machine and its GPU, simply because the library was meant to be used by the many developers making blockbuster game titles, and they relied on it.
Time has passed, and GPUs are now used for far more general purposes, including neural network computation. If I were a software engineer fresh out of university today, I would jump straight onto the neural-network-on-chip bandwagon without thinking twice.
Back then, I didn't realize that my knowledge and experience of GPU programming had anything to do with AI. I believe NVIDIA didn't realize it either.
To me, it's quite interesting to see young software engineers these days learning to program GPUs without CUDA, especially the ones trying to use the Raspberry Pi as a deep learning accelerator. Special thanks to Broadcom for making its VideoCore specification public.
At the same time, it's sad to see that Japan is a bit behind in this movement. The country used to be home to many GPU engineers. We had Sony, Sega, and Nintendo, and every gaming system had a sophisticated graphics library that pulled nearly 100% of the performance out of its GPU.
I can't stop wondering what would have happened if the engineers behind those gaming machines had been given a chance to work on today's neural network chips. Maybe Japan would be in a different position in the AI industry by now.