zhengly123
a1974aabcd
NNET supports TVM backend and kernels ( #78 )
...
* Add: mutator InfoGAN minimum test
* Add: cache and padding (bugs!!)
* Add: expression reader as a cmake target
* Fix: [Intermediate] NMutator::expressionToGraph
To be fixed: matmul with implicit broadcast
* Add: matmul broadcast
* Fix: GraphObj ctor should use cloneTensor
* Fix: cuBLAS failure when codegen is enabled
* Add: Exception for checkCuError
* Fix: graph OpList ctor
* Add: expr simplification for TVM
* Add: TVM headers and CMake include paths
* Add: CMake config
* Add: PackedFunc (broken)
* Fix: remove cuCtxCreate, which makes TVM fail
* Fix: membound_tvm
* Fix: test_memboundOp
* Add: PRelu Expr and AsTVMVisitor
* Add: Random generator
* Add: support TVM packed function
* Fix: specify runtime
* Add: CMake support of TVM
* Add: detailed output of Matmul
* Add: comments for Matmul
* Chore: format and comments
* Chore: GraphObj::selfCheck without assert control
* Fix: CMAKE_CXX_FLAGS in CMakeLists
* fix merge bug
* update api for mkl batchnorm test
* fix lotus env
fix header bug
---------
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
Co-authored-by: huangshuhong <huangsh19@mails.tsinghua.edu.cn>
Co-authored-by: whjthu <haojie0429@gmail.com>
2023-04-18 00:26:36 +08:00
wendy12022
43d4798323
ADD: sub graph replacement. ( #56 )
...
reconfig: connections among ops and tensors are now managed by GraphObj.
add some comments
merge from master
merge from master
ADD: sub graph replacement
reconfig inputs of op Resize, due to the operator input check.
ResizeObj::clone
clang format
fix some issues and add a test for multi-output.
replacement supports multiple inputs and multiple outputs.
add clone for all operators
add replaceSubGraph and addSubGraph
remove extra code
add more test
remove extra print
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
2023-04-17 13:09:07 +08:00
wendy12022
c8b2c8ed32
Cpu backend2 ( #77 )
...
fix review
change Device::MKL to Device::INTELCPU
fix mkl linkage
fix errors according to merge from master
now can call mkl backend
fix softmax/flatten with axis from onnx.
modify README.md
fix memory double-free
add env_lotus_intelcpu.sh
fix compile
merge from branch cpu_backend
fix something; add gather
fix something
FIX: directory rename from "mkl" to "intelcpu"
ADD: use oneMKL dpcpp interface to implement matmul kernel.
ADD: add dpcpp as compiler for mkl, and fix warnings when compiling with clang.
add dpcpp kernel for pow.
ADD: mkl kernel for pad.
ADD: slice mkl kernel.
ADD: reshape/flatten/identity mkl kernel.
ADD: split mkl kernel.
fix compile error
FIX: fix flattenObj with axis.
ADD reduce_mean mkl kernel.
Add concat mkl kernel.
batchNorm mkl kernel.
sigmoid mkl kernel.
ADD:add mkl kernel for pooling
add more tests for softmax
Now the softmax cuda kernel supports any axes.
mkl kernel for softmax
softmax
add axis to softmax operator
add mkl kernel for abs tanh
ADD: relu kernel for mkl
fix binary mkl primitives.
add mkl kernel for binary operators
fix compiler error
move stream to runtime
clang format
add MemoryFormat for tensorObj.
use post_ops for fused conv/deconv
Distinguish mkl op_timer from cuda op timer.
add act optype to conv and deconv
add operator timer
add mkl kernel for convTransposed
minor fix for group conv
do not use cblas_sgemm_batch
CpuRuntimeObj->NativeCpuRuntimeObj
add matmul op for mkl
2023-04-17 12:15:23 +08:00
wendy12022
86ec4036ce
ADD: add mkl runtime for intel cpu, and add mkl kernel for matmul/conv/convtransposed. ( #61 )
...
* move memory format transformation to TensorObj
clang format
add MemoryFormat for tensorObj.
use post_ops for fused conv/deconv
Distinguish mkl op_timer from cuda op timer.
add act optype to conv and deconv
add operator timer
add mkl kernel for convTransposed
minor fix for group conv
do not use cblas_sgemm_batch
CpuRuntimeObj->NativeCpuRuntimeObj
add matmul op for mkl
* fix: fix bugs when rebasing from master
fix: fix bugs when rebasing from master
* fix: update api after rebasing
* fix: fix format; fix onnx import
* fix: fix clang-format
* [fix] fix conv_transpose test
* [fix] use stronger test case for transposed conv
* [fix] remove tensor memory format; fix mkl transpose conv
* [fix] add FIXME tag for op_timer python api
---------
Co-authored-by: whjthu <haojie0429@gmail.com>
2023-03-27 21:28:49 +08:00
whjthu
d9886e9de3
fix: remove inline keyword in class; rename getter and setter for inputOf and outputOf
2023-03-25 12:04:24 +08:00
YdrMaster
5aeacedab3
fix: export the python interface for each type from a template
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-22 09:46:40 +08:00
YdrMaster
9db97eb212
refactor: consolidate the methods for manipulating tensor data
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-21 14:00:04 +08:00
YdrMaster
40fb8390b1
feat: save weights during import
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 17:23:32 +08:00
YdrMaster
5b6698bac7
feat: export the whole graph's output tensors to onnx
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 17:23:32 +08:00
YdrMaster
3d122aebfe
feat: support exporting float vectors
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 17:23:32 +08:00
YdrMaster
f44a4daf70
feat: export uninitialized tensors
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 17:23:32 +08:00
YdrMaster
ed81861375
temp: implement initializer import, but resnet reports errors
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 17:23:32 +08:00
YdrMaster
fe81fccf76
feat: export OperatorObj
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 15:09:12 +08:00
YdrMaster
45a3cdfa30
feat: add a topological-sort method to GraphObj, with tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 15:09:12 +08:00
YdrMaster
f20e791cf5
style: revise graph.h/graph.cc
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-03-15 15:09:12 +08:00
Haojie Wang
0f52d04882
Merge branch 'master' into dev-onnx
2023-03-15 14:52:03 +08:00
deathwings602
40d1b1c91b
Add ConvTransposedNHWC ( #67 )
...
* Add: IT_ASSERT_TODO
* [WIP] Add: ConvTranspose2d mutation test
* add ConvTransposedNHWC
* fix test_cuda_transposed_2d
---------
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
Co-authored-by: huangshuhong <huangsh19@mails.tsinghua.edu.cn>
2023-03-01 14:15:02 +08:00
YdrMaster
6871fff02b
feat: export interfaces for allocating memory and running inference
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-23 11:08:00 +08:00
YdrMaster
4c7fdf44c5
feat: frontend support for Conv, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-22 15:05:44 +08:00
YdrMaster
6a4de807e6
style: remove non-ascii comments from cpp
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-16 14:57:51 +08:00
YdrMaster
315763a83a
feat: frontend support for pad, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-15 11:41:06 +08:00
YdrMaster
8fae67b4b4
feat: frontend support for slice, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 17:35:18 +08:00
YdrMaster
341cf1f943
feat: frontend support for pool, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 16:26:47 +08:00
YdrMaster
62ceb78ae3
feat: frontend support for reduceMean, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 15:35:01 +08:00
YdrMaster
d11fb0ad5f
feat: frontend support for gather, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 14:16:01 +08:00
YdrMaster
45aa0237da
feat: frontend support for concat, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 13:42:35 +08:00
YdrMaster
a7e58bd8d0
feat: add more DataType variants
...
- added 6 algebraic types, with ordinals matching onnx's
- reshape can now be imported
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 11:27:57 +08:00
YdrMaster
7626efbfa8
feat: frontend support for reshape
...
- cannot be tested, because the backend does not support the INT64 type for shape
opt: the ReshapeObj ctor now takes all arguments by value and moves them internally
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-14 09:51:11 +08:00
YdrMaster
cca4d2a491
feat: frontend support for batchNorm (no unit tests)
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 17:15:35 +08:00
YdrMaster
e194dd943b
feat: frontend support for flatten, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 13:50:07 +08:00
YdrMaster
e4ec9c4230
feat: frontend support for identity, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 12:26:11 +08:00
YdrMaster
7f0c8ebae3
feat: frontend support for relu sigmoid tanh softmax abs, with unit tests
...
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 11:54:54 +08:00
YdrMaster
6e5beceadd
feat: add frontends for add sub mul div pow
...
- add unit tests for each operator
- add an import test using a linear-regression model
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 11:25:54 +08:00
YdrMaster
296fcc5aa0
feat: create the pyinfinitensor frontend
...
- python frontend project structure, with packaging and install scripts
- the .so built by the backend is renamed to backend; add GraphHandler for modifying graph structure
- ci now tests these features
Signed-off-by: YdrMaster <ydrml@hotmail.com>
2023-02-13 09:19:05 +08:00
zhengly123
c7ec9ee6e7
Add search engine ( #64 )
...
* Add: tensor fuid
* [Intermediate state] Add: Graph ctor for OpVec
* Add: clone for operators
* tmp: search_engine
* search: init search Engine.
* Add: dummy mutator for the test of search engine
* search: add print graph.
* search: add partition.
* search: update comments.
* Fix: retain FUID in Tensor::clone
* Chore: rename GUidBaseType to UidBaseType
* Fix: connect NMutator to SearchEngine
* Chore: output
* Fix test_memboundOp: nmutator uses input runtime
* Chore: clang-format
* Chore: clang-format
* Fix: comments in the review
---------
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
Co-authored-by: mazx <dyxdy@live.com>
2023-02-12 18:27:52 +08:00
zhengly123
4e0040c8a0
Add: connection among tensors and operators ( #45 )
...
* Add: refs_to_wrefs and wrefs_to_refs
* Add: op and tensor connection
* Add: inception-v3 block test
* Refactor: addOperatorAndConnect
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-10-18 22:02:51 +08:00
zhengly123
1152adc94a
Add: python API for timing ConvTranspose ( #46 )
...
* Add: python interface for timing operators
* Fix: CUDA Runtime run
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-10-07 16:03:11 +08:00
Hardy
b0c2a08252
Support bang c kernel wanghailu 0927 ( #43 )
...
* fix a little bug found by the new version of CMake
* add code to support BangC language kernels, just like CUDA kernels, not a library
* add bangc kernel
* support BangC kernel
* add code for support BangC kernel
* support bangc kernel
* fix some code from reviewer
* fix code of template function
* add code for support bangc kernel
* fix bangc format
Co-authored-by: wanghailu <wanghailu@qiyuanlab.com>
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
2022-09-30 11:01:52 +08:00
wendy12022
fe14c91f54
ADD: Gather operator and cuda kernel. ( #41 )
...
fix a memory leak.
add tests.
ADD gather cuda kernel.
ADD gather operator
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
2022-09-29 14:44:20 +08:00
wendy12022
3c6e208f42
ADD:concat/split operator and cuda kernels ( #29 )
...
* ADD:concat/split operator and cuda kernels
refactor
minor comment change
ADD:concat/split operator and cuda kernels
merge split_kernel and concat_kernel to split_concat_kernel.
Revert "fix"
This reverts commit 459926be09a838658ec55f1e0a72b3cf17037d5c.
fix
ADD:concat/split operator and cuda kernels
change whole tensor name to composed tensor
fix some
remove unused header.
rebase
add CudaKernel
add test for split.
ADD split operator and cuda kernel.
modify test.
ADD:concat operator and cuda kernel.
ADD:concat/split operator and cuda kernels
fix some
remove unused header.
rebase
add CudaKernel
ADD:concat/split operator and cuda kernels
add test for split.
ADD split operator and cuda kernel.
modify test.
ADD:concat operator and cuda kernel.
* remove extra comment; typo fix.
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
2022-09-29 11:01:30 +08:00
wendy12022
5560d0f2fb
ADD:pad/slice operator and cuda kernel. ( #39 )
...
fix compile error
refactor
clang format
split test.
fix compile error.
ADD slice cuda kernel.
ADD slice operator.
ADD:pad operator and cuda kernel.
2022-09-29 10:29:24 +08:00
deathwings602
11d5aa1ccc
Add TVM codegen for MemboundOp ( #35 )
...
* Add: interface for membound TVM kernel and test
* add getAnsorCode
* add evaluation, but link failed
* add evaluation of kernel, but link failed
* Fix: link libcuda and nvrtc
* add print
* Add: const for source of copy
* compile and evaluate the kernel
* add compute
* fix gen_ansor_op.py
* fix membound_TVM
* format and fix CMakeLists.txt
* fix memory leak
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
Co-authored-by: huangshuhong <huangsh19@mails.tsinghua.edu.cn>
2022-09-22 18:06:45 +08:00
Hardy
c7c974f07a
Add bangc runtime and element-wise kernels
...
* add code for cambricon mlu, bang, cnnl
* add code for support cambricon mlu,cnnl,cnrt
* add code for support mlu
* add code for support cambricon cnnl
* add code for support mlu
* add code for mlu
* add code for mlu
* Update CMakeLists.txt
Co-authored-by: wanghailu <wanghailu@qiyuanlab.com>
Co-authored-by: zhengly123 <zhengly123@outlook.com>
2022-09-22 16:57:39 +08:00
Anmuliar
90eb9d05a8
JSON perfrecord ( #32 )
...
Added perfengine serialization & deserialization and a corresponding test case.
* Add: perfrecord json representation.
* Add: perfrecord virtual func. to_json&from_json.
* Add: perfengine serialization and deserialization.
* Modify: tune func type to support derived struct serialization.
* Fix: structure after rebase
* Chore: Remove empty line in conv.h
Co-authored-by: wcz112 <wcz19@mails.tsinghua.edu.cn>
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
Co-authored-by: zhengly123 <zhengly123@outlook.com>
2022-09-22 15:34:34 +08:00
wendy12022
9032cbb973
Add: reshape/flatten/identity OP and cuda kernel ( #34 )
...
* ADD:reshape/flatten/identity operators and cuda kernel.
fix: use cudaMemcpyAsync
clang format.
ADD flatten/identity operator.
add test for reshape.
ADD: reshape operator and cuda kernel.
* Fix: separate CUDA tests & remove old header
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-09-21 14:04:30 +08:00
zhengly123
8f67a5cc76
Add: ConvTransposed ( #33 )
...
* Add: convTransposed2d operator
* Fix: IT_ASSERT namespace
* Add: nullptr check in as for Ref
* Fix: conv transpose operator and kernel
* Fix: makes PerfEngine singleton
* Add: ConvTransposed test
* Fix: rebase to master (PerfRecord shared_ptr)
* Revert: Ref with nullptr check
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-09-19 15:05:39 +08:00
zhengly123
d39328afce
Fix: PerfRecord in shared pointers ( #31 )
...
* Fix: PerfData in a shared pointer
* Add: abstraction for kernels without configuration
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-09-18 20:27:18 +08:00
Hardy
6ac106cba4
Add activation operators and kernels
...
* add code for activation operation
* add code for activation operation on GPU
* add test code for activation operation
* add code for activation operation
* add code for activation on gpu ,use cudnn
* add code for activation on GPU use cudnn
* Chore: add constants.h and remove comments
Co-authored-by: wanghailu <wanghailu@qiyuanlab.com>
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-09-16 13:58:57 +08:00
zhengly123
172d03d6f2
Fix NNet tests after migration ( #27 )
...
* Fix: interpreter
```
4 - readlog (Failed)
8 - test_TConv2gemm (Failed)
11 - test_conv2conv (Failed)
12 - test_conv2gemm (Failed)
15 - test_g2bmm (Failed)
16 - test_guidedDLT (Subprocess aborted)
22 - test_mergeStage (Subprocess aborted)
```
* Exclude readlog from ctest
* Fix: change the path of logs
```
85% tests passed, 4 tests failed out of 27
Total Test time (real) = 100.69 sec
The following tests FAILED:
10 - test_conv2conv (Timeout)
11 - test_conv2gemm (Timeout)
15 - test_guidedDLT (Subprocess aborted)
21 - test_mergeStage (Subprocess aborted)
Errors while running CTest
```
- test_conv2conv 38529 ms total
- test_conv2gemm 37098 ms total
* Fix: test_mergeStage
* Fix: test_guidedDLT
```
Start 1: test_graph
1/27 Test #1 : test_graph ....................... Passed 0.05 sec
Start 2: test_hash
2/27 Test #2 : test_hash ........................ Passed 0.02 sec
Start 3: test_conv
3/27 Test #3 : test_conv ........................ Passed 4.98 sec
Start 4: test_Interpreter
4/27 Test #4 : test_Interpreter ................. Passed 6.30 sec
Start 5: test_OpSearch
5/27 Test #5 : test_OpSearch .................... Passed 0.02 sec
Start 6: test_Rule2VariableMerging
6/27 Test #6 : test_Rule2VariableMerging ........ Passed 0.03 sec
Start 7: test_TConv2gemm
7/27 Test #7 : test_TConv2gemm .................. Passed 29.45 sec
Start 8: test_as_tvm
8/27 Test #8 : test_as_tvm ...................... Passed 0.02 sec
Start 9: test_compareFormulas
9/27 Test #9 : test_compareFormulas ............. Passed 0.02 sec
Start 10: test_conv2conv
10/27 Test #10 : test_conv2conv ................... Passed 36.55 sec
Start 11: test_conv2gemm
11/27 Test #11 : test_conv2gemm ................... Passed 39.70 sec
Start 12: test_dlt
12/27 Test #12 : test_dlt ......................... Passed 0.03 sec
Start 13: test_exprHash
13/27 Test #13 : test_exprHash .................... Passed 0.02 sec
Start 14: test_g2bmm
14/27 Test #14 : test_g2bmm ....................... Passed 0.16 sec
Start 15: test_guidedDLT
15/27 Test #15 : test_guidedDLT ................... Passed 0.07 sec
Start 16: test_matchConv
16/27 Test #16 : test_matchConv ................... Passed 0.02 sec
Start 17: test_matchElementWise
17/27 Test #17 : test_matchElementWise ............ Passed 0.03 sec
Start 18: test_matchMatmul
18/27 Test #18 : test_matchMatmul ................. Passed 0.02 sec
Start 19: test_matchReshape
19/27 Test #19 : test_matchReshape ................ Passed 0.02 sec
Start 20: test_memboundOp
20/27 Test #20 : test_memboundOp .................. Passed 0.02 sec
Start 21: test_mergeStage
21/27 Test #21 : test_mergeStage .................. Passed 0.02 sec
Start 22: test_oobChecker
22/27 Test #22 : test_oobChecker .................. Passed 0.02 sec
Start 23: test_rangeMagnify
23/27 Test #23 : test_rangeMagnify ................ Passed 0.02 sec
Start 24: test_relaxation
24/27 Test #24 : test_relaxation .................. Passed 0.02 sec
Start 25: test_serializer
25/27 Test #25 : test_serializer .................. Passed 0.03 sec
Start 26: test_simplify
26/27 Test #26 : test_simplify .................... Passed 0.02 sec
Start 27: test_subset
27/27 Test #27 : test_subset ...................... Passed 0.01 sec
100% tests passed, 0 tests failed out of 27
Total Test time (real) = 117.72 sec
```
* Fix: format
* Replace nnet:Ref with infini::Ref
```
Start 1: test_graph
1/27 Test 1: test_graph ....................... Passed 0.02 sec
Start 2: test_hash
2/27 Test 2: test_hash ........................ Passed 0.02 sec
Start 3: test_conv
3/27 Test 3: test_conv ........................ Passed 4.45 sec
Start 4: test_Interpreter
4/27 Test 4: test_Interpreter ................. Passed 4.37 sec
Start 5: test_OpSearch
5/27 Test 5: test_OpSearch .................... Passed 0.02 sec
Start 6: test_Rule2VariableMerging
6/27 Test 6: test_Rule2VariableMerging ........ Passed 0.02 sec
Start 7: test_TConv2gemm
7/27 Test 7: test_TConv2gemm .................. Passed 23.40 sec
Start 8: test_as_tvm
8/27 Test 8: test_as_tvm ...................... Passed 0.02 sec
Start 9: test_compareFormulas
9/27 Test 9: test_compareFormulas ............. Passed 0.01 sec
Start 10: test_conv2conv
10/27 Test 10: test_conv2conv ................... Passed 32.28 sec
Start 11: test_conv2gemm
11/27 Test 11: test_conv2gemm ................... Passed 29.41 sec
Start 12: test_dlt
12/27 Test 12: test_dlt ......................... Passed 0.02 sec
Start 13: test_exprHash
13/27 Test 13: test_exprHash .................... Passed 0.01 sec
Start 14: test_g2bmm
14/27 Test 14: test_g2bmm ....................... Passed 0.14 sec
Start 15: test_guidedDLT
15/27 Test 15: test_guidedDLT ................... Passed 0.06 sec
Start 16: test_matchConv
16/27 Test 16: test_matchConv ................... Passed 0.02 sec
Start 17: test_matchElementWise
17/27 Test 17: test_matchElementWise ............ Passed 0.02 sec
Start 18: test_matchMatmul
18/27 Test 18: test_matchMatmul ................. Passed 0.02 sec
Start 19: test_matchReshape
19/27 Test 19: test_matchReshape ................ Passed 0.01 sec
Start 20: test_memboundOp
20/27 Test 20: test_memboundOp .................. Passed 0.02 sec
Start 21: test_mergeStage
21/27 Test 21: test_mergeStage .................. Passed 0.01 sec
Start 22: test_oobChecker
22/27 Test 22: test_oobChecker .................. Passed 0.01 sec
Start 23: test_rangeMagnify
23/27 Test 23: test_rangeMagnify ................ Passed 0.01 sec
Start 24: test_relaxation
24/27 Test 24: test_relaxation .................. Passed 0.01 sec
Start 25: test_serializer
25/27 Test 25: test_serializer .................. Passed 0.02 sec
Start 26: test_simplify
26/27 Test 26: test_simplify .................... Passed 0.01 sec
Start 27: test_subset
27/27 Test 27: test_subset ...................... Passed 0.00 sec
100% tests passed, 0 tests failed out of 27
Total Test time (real) = 94.47 sec
```
* Relax time limit for CPU conv
```
Start 1: test_graph
1/29 Test 1: test_graph ....................... Passed 0.02 sec
Start 2: test_hash
2/29 Test 2: test_hash ........................ Passed 0.02 sec
Start 3: test_conv
3/29 Test 3: test_conv ........................ Passed 4.47 sec
Start 4: test_matmul
4/29 Test 4: test_matmul ...................... Passed 2.61 sec
Start 5: test_pooling
5/29 Test 5: test_pooling ..................... Passed 2.57 sec
Start 6: test_Interpreter
6/29 Test 6: test_Interpreter ................. Passed 4.35 sec
Start 7: test_OpSearch
7/29 Test 7: test_OpSearch .................... Passed 0.02 sec
Start 8: test_Rule2VariableMerging
8/29 Test 8: test_Rule2VariableMerging ........ Passed 0.02 sec
Start 9: test_TConv2gemm
9/29 Test 9: test_TConv2gemm .................. Passed 23.32 sec
Start 10: test_as_tvm
10/29 Test 10: test_as_tvm ...................... Passed 0.02 sec
Start 11: test_compareFormulas
11/29 Test 11: test_compareFormulas ............. Passed 0.02 sec
Start 12: test_conv2conv
12/29 Test 12: test_conv2conv ................... Passed 32.12 sec
Start 13: test_conv2gemm
13/29 Test 13: test_conv2gemm ................... Passed 30.59 sec
Start 14: test_dlt
14/29 Test 14: test_dlt ......................... Passed 0.02 sec
Start 15: test_exprHash
15/29 Test 15: test_exprHash .................... Passed 0.01 sec
Start 16: test_g2bmm
16/29 Test 16: test_g2bmm ....................... Passed 0.14 sec
Start 17: test_guidedDLT
17/29 Test 17: test_guidedDLT ................... Passed 0.07 sec
Start 18: test_matchConv
18/29 Test 18: test_matchConv ................... Passed 0.02 sec
Start 19: test_matchElementWise
19/29 Test 19: test_matchElementWise ............ Passed 0.02 sec
Start 20: test_matchMatmul
20/29 Test 20: test_matchMatmul ................. Passed 0.02 sec
Start 21: test_matchReshape
21/29 Test 21: test_matchReshape ................ Passed 0.02 sec
Start 22: test_memboundOp
22/29 Test 22: test_memboundOp .................. Passed 0.02 sec
Start 23: test_mergeStage
23/29 Test 23: test_mergeStage .................. Passed 0.01 sec
Start 24: test_oobChecker
24/29 Test 24: test_oobChecker .................. Passed 0.02 sec
Start 25: test_rangeMagnify
25/29 Test 25: test_rangeMagnify ................ Passed 0.02 sec
Start 26: test_relaxation
26/29 Test 26: test_relaxation .................. Passed 0.02 sec
Start 27: test_serializer
27/29 Test 27: test_serializer .................. Passed 0.03 sec
Start 28: test_simplify
28/29 Test 28: test_simplify .................... Passed 0.02 sec
Start 29: test_subset
29/29 Test 29: test_subset ...................... Passed 0.00 sec
100% tests passed, 0 tests failed out of 29
Total Test time (real) = 100.65 sec
```
* Remove out-of-date tests
Co-authored-by: Liyan Zheng <liyan-zheng@outlook.com>
2022-09-13 15:17:22 +08:00
Hardy
03de74f4bc
Tensor serialization ( #25 )
...
* use protobuf for tensor data save, write, and read (serialization and deserialization)
* add protobuf
* add code for tensor load & save from/to file
* add code for tensor laod & save
* add code for tensor load & save
* add code for tensor save & load
* add code for tensor save & load
* add code for save & load
* add code for load & save
* add code for tensor load & save
* add code for tensor save & load
Co-authored-by: wanghailu <wanghailu@qiyuanlab.com>
2022-09-13 11:27:41 +08:00