To keep the deliverables unified while still making it easy to validate the basic functions of the CCAI framework, all sample services are disabled by default as of version 1.0-210201, since at the current stage there are no landed customer use cases. For testing purposes, enable the services with these steps:
```
#Create a folder on the host side under /opt/intel/service_runtime.
```
Replacing, adding, or removing services in the above "fcgi_targets:" section will enable or disable the corresponding services in the health_monitor list. For version 1.0-210201, the following services are available:
- fcgi_ocr.py
- fcgi_ocr
- fcgi_tts.py (note: there is no fcgi_tts)
- fcgi_speech (note: there is no fcgi_speech.py)
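As a rough sketch of this first step, the commands below create the folder and write a service list. The config file name (health_monitor.yml), its location, and its exact schema are illustrative assumptions only; they depend on your installed package, so refer to the shipped configuration for the real "fcgi_targets:" section.
```
# Hypothetical sketch: the config file name, path, and schema are assumptions.
sudo mkdir -p /opt/intel/service_runtime/rootfs
sudo tee /opt/intel/service_runtime/rootfs/health_monitor.yml <<'EOF'
fcgi_targets:
  - fcgi_ocr.py
  - fcgi_speech
EOF
```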
```
#Restart the container and enable the services.
```
```
$>sudo systemctl restart service-runtime
$>docker exec -it service_runtime_container /bin/bash -c 'cd /etc/runit/runsvdir/default; for s in /etc/sv/*; do ln -sf $s; done'
```
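To verify that the services were enabled, you can list the runit service states inside the container. This optional check is a suggestion, not part of the official steps:
```
# Optional: show runit status for each service enabled in the default runlevel.
docker exec service_runtime_container /bin/bash -c 'sv status /etc/runit/runsvdir/default/*'
```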
```
#Removing the created folders under /opt/intel/service_runtime and restarting the container will disable any enabled services again.
```
```
$>sudo rm -R /opt/intel/service_runtime/rootfs
```
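After removing the folders, restart the container so the change takes effect (the same restart command used in the enable steps above):
```
$>sudo systemctl restart service-runtime
```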
# High-level API test cases
The exposed high-level API test cases are provided as individual test scripts. The usage of each case can be found in the sections below; the cases are:
**(Most test cases have default inputs: files preinstalled under specific folders by the deb package installation. Since the WW45’20 release, if you want to use your own input files, such as images (-i), text (-s), or wav audio (-a), you can pass them as input parameters to each test case.)**
## For testing all provided APIs in one batch
You can test all Python implementations with the existing test case set `test-script/run_test_script.sh`.
Usage:
```
cd /opt/intel/service_runtime/test-script/
sudo ./run_test_script.sh
```
## For testing the Python implementation of related REST APIs
`test-script/test-demo/post_local_asr_py.py` (default input audio file (-a): `how_are_you_doing.wav`; default inference device (-d): GNA_AUTO)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_asr_py.py -a "AUDIO_FILE" -d "DEVICE"
```
Result:
```
{
    "ret": 0,
    "msg": "ok",
    "data": {
        "text": "HOW ARE YOU DOING\n"
    },
    "time": 0.777
}
processing time is: 0.7873961925506592
```
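Under the hood, the script posts the audio to the REST gateway the container exposes on port 8080. The curl sketch below is a rough equivalent; the endpoint path (/asr) and form field names are assumptions for illustration, not confirmed API details:
```
# Hypothetical sketch: the endpoint path and field names are assumptions.
curl -s -X POST http://localhost:8080/asr \
     -F "audio=@how_are_you_doing.wav" \
     -F "device=GNA_AUTO"
```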
`test-script/test-demo/post_local_classfication_py.py` (default input file if no parameter is given: classfication.jpg)
`test-script/test-demo/post_local_policy_py.py`
**The default accelerator is CPU. Once you set another accelerator with the policy setting API, that setting always stays in effect until you explicitly change it.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_policy_py.py -d CPU -l 1
```
Result:
```
successfully set the policy daemon
processing time is: 0.004211902618408203
```
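For example, to switch to another accelerator and later restore the CPU default (the device name GNA below is an assumption; use whichever accelerators your build supports):
```
# Set a different accelerator (device name is an assumption for illustration).
python3 ./test-demo/post_local_policy_py.py -d GNA -l 1
# Restore the CPU default.
python3 ./test-demo/post_local_policy_py.py -d CPU -l 1
```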
`test-script/test-demo/post_local_tts_py.py` (default input file if no parameter is given: test_sentence.txt)
**(So far, for easy testing of the pipeline, there are some rules on TTS input: the input must be an English string saved in test_sentence.txt.)**
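For example, to synthesize your own sentence, overwrite the default input file before running the script. The test_sentence.txt location under test-script/test-demo/ is an assumption based on the defaults above; adjust the path to wherever it is installed on your system:
```
cd /opt/intel/service_runtime/test-script/
# Replace the default sentence (must be an English string, per the rule above).
# The test_sentence.txt location is an assumption; adjust as needed.
echo "How are you doing" | sudo tee ./test-demo/test_sentence.txt
python3 ./test-demo/post_local_tts_py.py
```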
`test-script/test-demo/post_local_speech_c.py` (default input file if no parameter is given: dev93_1_8.ark)
**This case is used to verify GNA accelerators. The default setting is GNA_AUTO: if GNA hardware is ready, inference runs on the GNA hardware; otherwise, it runs in GNA_SW mode to simulate GNA hardware.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_speech_c.py
```
Result:
```
{
    "ret":0,
    "msg":"ok",
    "data":{
        "input information(name:dimension)":{
            "Parameter":[8,440]
        },
        "output information(name:dimension)":{
            "affinetransform14/Fused_Add_":[8,3425]
        }
    },
    "time":0.344222
}
{
    "ret":0,
    "msg":"ok",
    "data":{
        "result":"success!"
    },
    "time":0.484783
}
fcgi inference time: 0.009104
processing time is: 0.0262906551361084
```
`test-script/test-demo/post_local_policy_c.py`
**The default accelerator is CPU. Once you set another accelerator with the policy setting API, that setting always stays in effect until you explicitly change it.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_policy_c.py -d CPU -l 1
```
Result:
```
successfully set the policy daemon
processing time is: 0.0035839080810546875
```
## For testing the C++ implementation of related gRPC APIs
```
f64fed060cf1 fe50c5747d46 "/start.sh" 2 minutes ago Up 2 minutes (unhealthy) 0.0.0.0:8080-8081->8080-8081/tcp service_runtime_container
service_runtime_container
restart container...
7f20255d36a9 fe50c5747d46 "/start.sh" About a minute ago Up About a minute (health: starting) 0.0.0.0:8080-8081->8080-8081/tcp service_runtime_container
container can automatic restart
```
## How it works (in brief)
The health monitor mechanism consists of two parts: a health-monitor daemon installed on the host system, and its agent installed inside the container.
The agent checks all background services, daemons, and API gateways at a 60-second interval (the default value; it can be customized via a parameter to the start command) and reports their health status to the host health-monitor. When a daemon or service fails to respond, the agent may, depending on the specific case, try to restart the failed processes itself and confirm they work normally again, or rely on the API gateways to restart the related services and confirm they work normally. In either case, the agent reports this information to the health-monitor, both as a record and as the precondition for taking additional actions if needed. If neither the agent nor the API gateways can bring the processes back, that information is also reported to the host health-monitor, which then decides how to restart the docker instance according to predefined rules.
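As a quick manual check of the same health signal the monitor consumes, you can query docker's health status for the container (a standard docker command, offered here only as a convenience):
```
# Show the container's current health state (healthy / unhealthy / starting).
docker inspect --format '{{.State.Health.Status}}' service_runtime_container
```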
The test case above tries to kill these services and the container instance respectively, and then re-checks their status to make sure the health monitor mechanism works as expected. An output log line like ‘xxxxxx restart’ means the related service/component was killed and then restarted successfully.
Health-monitor-related logs can be found as follows.
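The commands below are a hedged sketch: they assume the host daemon logs under the service-runtime systemd unit shown earlier and that the in-container agent logs to the container output. The exact unit names and log locations may differ in your installation:
```
# Host-side daemon logs (assumes the service-runtime systemd unit).
sudo journalctl -u service-runtime
# Agent and service logs from inside the container.
docker logs service_runtime_container
```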
# Deb package for host-installed application/service (if not installed yet)
**Note: If not testing the OTA process, please uninstall existing packages before installing the new ones, to avoid possible conflicts with the OTA logic.**
# Deb package for host-installed neural network models (if not installed yet)
**Note: If not testing the OTA process, please uninstall existing packages before installing the new ones, to avoid possible conflicts with the OTA logic.**
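For either package, the uninstall-then-install flow looks like the following sketch; PACKAGE_NAME and PACKAGE_FILE.deb are placeholders for the actual CCAI package names:
```
# Remove the previously installed package first (placeholder name).
sudo dpkg -r PACKAGE_NAME
# Then install the new package (placeholder file name).
sudo dpkg -i ./PACKAGE_FILE.deb
```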