The structure of the files in this folder is as follows:

```
.
├── affine2neura
│   └── bert
├── arith2neura
│   ├── add.mlir
│   └── Output
├── c2llvm2mlir
│   ├── kernel.cpp
│   ├── Output
│   └── test.mlir
├── lit.cfg
├── lit.cfg.in
├── neura
│   ├── arith_add.mlir
│   ├── ctrl
│   ├── fadd_fadd.mlir
│   ├── for_loop
│   ├── interpreter
│   ├── llvm_add.mlir
│   ├── llvm_sub.mlir
│   └── Output
├── Output
│   └── test.mlir.script
├── README.md
├── samples
│   ├── bert
│   └── lenet
└── test.mlir
```
All of the above content can be divided into three categories:
## 1 Conversion Test
We need to convert other dialects to our `neura` dialect for compilation and optimization. To verify the correctness of each such conversion, we provide tests for every pass that lowers a source dialect to the `neura` dialect.
For now, we have:
- `affine2neura`: tests for the `--lower-affine-to-neura` pass [to be provided]
- `arith2neura`: tests for the `--lower-arith-to-neura` pass
- `c2llvm2mlir`: tests for the `--lower-llvm-to-neura` pass

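As an illustration, a conversion test is typically a small `.mlir` file with a lit `RUN` line and FileCheck directives. The sketch below is hypothetical: the tool invocation and the target op name (`neura.add`) are assumptions, not taken from the actual pass output.

```mlir
// RUN: mlir-opt %s --lower-arith-to-neura | FileCheck %s

// Hypothetical sketch: checks that arith.addi is rewritten into a neura op.
func.func @add(%a: i32, %b: i32) -> i32 {
  // CHECK: neura.add
  %sum = arith.addi %a, %b : i32
  return %sum : i32
}
```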
## 2 Neura Compiler Test
Tests for individual passes/pass pipelines at the `neura` dialect level.
## 3 Samples
A collection of real-world applications used to generate small unit tests.
For now, [BERT](https://github.com/codertimo/BERT-pytorch) and [LENET](https://github.com/kuangliu/pytorch-cifar/blob/master/models/lenet.py) are included.
We generate `linalg`-dialect IR for these models via [Torch-MLIR](https://github.com/llvm/torch-mlir), which is then lowered to the `affine` dialect for further lowering.
Due to data dependencies between loops in these models, we currently cannot automatically extract each single loop from the model IR into an individual test.
However, we can manually derive small unit tests from these sample IRs. For example, you can write C++ code for a loop from BERT by mimicking its corresponding `affine.for` operations, then use [Polygeist](https://github.com/llvm/Polygeist) to convert that C++ code into `affine` MLIR for further lowering. That is how the tests in `affine2neura/bert` were generated.