Once you have implemented new services that accept requests from outside the CCAI container and return the result of a specific AI task, you need to deploy those services inside the CCAI container.
# Where to put the service files
Extract the CCAI release tarball, e.g. `ccaisf_release_xx-xxx.tar.gz`, and copy your files and directories, organized in a runtime hierarchy, into the folder *docker/app_rootfs*.
Models are installed to the folder `/opt/intel/service_runtime/models` on the host, and CCAI maps that folder to `/opt/fcgi/cgi-bin/models` inside the container. You can write your own Debian package configuration files and build a deb package to install your models, or you can put your models into the CCAI release folder and use CCAI's helper script to build the deb package:
1. Put the models into `ccaisf_release_xx-xxx/package/models`.
2. Modify `ccaisf_release_xx-xxx/package/models/debian/control` to add your package.
3. Add a `service-runtime-models-xxx.install` file to `ccaisf_release_xx-xxx/package/models/debian` to install your models.
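As a sketch, the two files added in steps 2 and 3 might look like the following. The package name `service-runtime-models-mymodel` and the model file layout are placeholders for illustration, not names from the CCAI release; the `.install` format is standard debhelper `source destination` pairs:

```text
# Stanza appended to ccaisf_release_xx-xxx/package/models/debian/control
Package: service-runtime-models-mymodel
Architecture: all
Depends: ${misc:Depends}
Description: Model files for the mymodel service (hypothetical example)

# ccaisf_release_xx-xxx/package/models/debian/service-runtime-models-mymodel.install
# Each line: <file glob in the source tree>  <destination directory on the host>
mymodel/*.xml  /opt/intel/service_runtime/models/mymodel/
mymodel/*.bin  /opt/intel/service_runtime/models/mymodel/
```

After installation, the files land under `/opt/intel/service_runtime/models` on the host and are therefore visible inside the container at `/opt/fcgi/cgi-bin/models`.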
# How to enable services via API gateway
As described in chapter 6.3, for FastCGI services you need to add or change conf files and place them under specific folders so that the API gateway recognizes your services and launches them according to the configuration file description.
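For illustration only, a FastCGI service entry in a lighttpd-style gateway configuration generally has the shape sketched below. The URL path, socket path, and binary name are assumptions for the example, and the exact option set CCAI expects is defined in chapter 6.3, so treat this as a shape reference rather than a working conf:

```text
# Hypothetical FastCGI entry for a service binary fcgi_mytask
fastcgi.server += (
  "/cgi-bin/fcgi_mytask" => ((
    "socket"      => "/tmp/fcgi_mytask.socket",   # assumed socket location
    "bin-path"    => "/opt/fcgi/cgi-bin/fcgi_mytask",
    "check-local" => "disable",
    "max-procs"   => 1,
  ))
)
```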
For gRPC services, if the service is brand new, you need to do the following steps to enable it:
1. Create a folder under `docker/app_rootfs/etc/sv/`, for example:
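For reference, a service directory under `etc/sv/` conventionally follows the runit layout: a directory named after the service containing an executable `run` script that starts the server in the foreground. A minimal sketch, where the service name `my-grpc-service`, the binary path, and the port are placeholders rather than names from the CCAI release:

```sh
#!/bin/sh
# docker/app_rootfs/etc/sv/my-grpc-service/run  (hypothetical service name)
# Redirect stderr to stdout so the supervisor captures all logs.
exec 2>&1
# exec replaces the shell so the supervisor tracks the server process directly.
# Replace the path and arguments with your actual gRPC server binary.
exec /opt/fcgi/cgi-bin/my_grpc_service --port 50051
```

Remember to mark the script executable (`chmod +x run`), since the supervisor will not start a non-executable run script.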