CG-Brandon .... Ah yes, I vaguely remember seeing this on the AWS Marketplace. I actually found an even easier option in the Marketplace using NVIDIA Volta. It doesn't include CUDA or anything other than:
NVIDIA Volta Deep Learning AMI Release Version 19.02.0 includes:
- Ubuntu Server: 18.04
- NVIDIA Driver: 410.104
- Docker CE: 18.09.2
- NVIDIA Container Runtime for Docker: (nvidia-docker) v2.0.3
..... It's meant for pulling a container, which in my case was CaffeNV and DIGITS. Once I worked out the Docker file system, it was very easy to use. Just upload images and bring up the DIGITS GUI ..... Neat, clean and tidy, and the resulting model worked really well.
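For anyone following along, the pull-and-run step on that AMI goes roughly like this (the image tag, data path and container name here are illustrative, not the exact ones I used; check NGC for current tags):

```shell
# Log in to the NVIDIA GPU Cloud (NGC) registry -- needs an NGC API key
docker login nvcr.io

# Pull the DIGITS container (tag is an example; pick a current one on NGC)
docker pull nvcr.io/nvidia/digits:19.02-caffe

# Run it with the NVIDIA container runtime, exposing the DIGITS web GUI
# on port 5000 and mounting a local directory for the training images
docker run --runtime=nvidia -d --name digits \
    -p 5000:5000 \
    -v /home/ubuntu/data:/data \
    nvcr.io/nvidia/digits:19.02-caffe
```

After that the DIGITS GUI is reachable at http://&lt;instance-ip&gt;:5000 (assuming the AWS security group allows it).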
I followed all the links you recommended and, after reading some of the Darknet info, I got to wondering if my problem training MobileNet SSD was down to the last-but-one layer being set up for 20 classes rather than just one. From memory, there was nothing in the MobileNet SSD instructions that pointed to this, but when I was setting it all up I had a nagging feeling that the number of classes was going to cause a problem.
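That hunch matches how the Caffe SSD prototxt is wired, as far as I understand it: each confidence (mbox_conf) head has num_output = priors_per_location × num_classes, where num_classes includes the background class. So VOC's 20 classes really mean 21, and a single-class detector means 2, which changes the layer sizes and so breaks loading of the old weights. A quick sketch of the arithmetic (the prior count of 6 is just an example; it varies per feature-map layer):

```shell
# Output channels of an SSD mbox_conf head:
#   channels = priors_per_location * num_classes (incl. background)
conf_layer_channels() { echo $(( $1 * $2 )); }

conf_layer_channels 6 21   # VOC: 20 classes + background -> 126
conf_layer_channels 6 2    # one class + background -> 12
```

So retraining for one class means shrinking those heads (and typically renaming them so the stale 21-class weights aren't loaded on top).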
MobileNet SSD is very possibly the right network to train for the Myriad X, especially if I stick to 20 classes. I'm not sure how feasible / expensive it would be to set it up on AWS. I think I'd set it up again on the Jetson TX2, as it was 'reasonably' straightforward, but that gadget's now assigned back to bvlc_Googlenet so that's not going to happen now 🙁
Alternatively, since training bvlc_Googlenet is so easy, find a way of using the Intel Model Optimiser to convert it. This would be my preferred route! 🙂
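If that route works out, converting the trained Caffe model with OpenVINO's Model Optimizer should be roughly a one-liner; the file names and output directory below are placeholders, and FP16 is the precision the Myriad plugin wants:

```shell
# Convert a trained Caffe GoogLeNet to OpenVINO IR for the Myriad X.
# mo.py lives in the OpenVINO model_optimizer directory.
python3 mo.py \
    --input_model bvlc_googlenet.caffemodel \
    --input_proto deploy.prototxt \
    --data_type FP16 \
    --output_dir ir/
```

That should emit an .xml / .bin IR pair in ir/ ready for the Inference Engine.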
..... Just found this for using the MO to convert a DetectNet model, and people seem to have got it working: