Nicolo's update
Small reusable blocks:
- The idea is to start by testing a function that can be reused with different input/output shapes, regardless of the specific functionality (Noemi is working on Dense, which is simpler than Conv)
- Core idea: fix the maximum input and output shapes and create the "core". Then pass the actual input and output shapes as parameters (<= the maximum ones) and perform the computation.
- Note: different cores could be instantiated, each with different maximum input/output shapes
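The core idea above can be sketched as follows. This is a hypothetical illustration, not the actual implementation: buffer sizes and loop bounds are fixed at compile time (`MAX_IN`, `MAX_OUT`), while the actual shapes are passed at run time and must not exceed the maxima. All names (`dense_core`, `n_in`, `n_out`) are illustrative.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a reusable "core": compile-time maximum shapes, run-time
// actual shapes. Loops are bounded by the compile-time maxima (as HLS
// requires fixed trip counts); lanes beyond the actual shape are skipped.
template <size_t MAX_IN, size_t MAX_OUT>
void dense_core(const float in[MAX_IN], float out[MAX_OUT],
                const float weights[MAX_OUT][MAX_IN],
                size_t n_in, size_t n_out) {
    assert(n_in <= MAX_IN && n_out <= MAX_OUT);
    for (size_t o = 0; o < MAX_OUT; ++o) {
        if (o >= n_out) break;  // skip unused output lanes
        float acc = 0.0f;
        for (size_t i = 0; i < MAX_IN; ++i) {
            if (i >= n_in) break;  // skip unused input lanes
            acc += weights[o][i] * in[i];
        }
        out[o] = acc;
    }
}
```

The same instantiated core (one pair of maxima) can then serve several layers whose actual shapes fit within it.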
FloatQuant
- First sample version implemented (QONNX meeting in 2 days, we will give an update on that)
- Spotted a couple of bugs in Brevitas FloatQuant; I am now discussing with the Brevitas developers whether I should open a PR or some issues on GitHub
- Waiting for feedback from Yaman
Edge SpAIce new models
- Tested and they work; the performance is not degraded as it was before
- There are mismatches between QONNX and hls4ml after pointwise conv. To try to understand why this happens, I recreated a simple model composed of only the first part of the UNet model
- I opened a PR to automatically infer the datatype of a constant, namely param_t of LeakyReLU
- Preprocessing: I modified the input datatype to 4 bits, matching the first Quant node in QONNX. This removes the initial linear function that creates issues
---------------------
Stelios
- Opened a PR with an II fix for zero-padding and pooling
- Opened a PR on Vitis Accelerator
- No progress on DSP packing
- Working on another FIFO matching
- Talked with PixESL people
- Working on getting the datasets
- There is another person working on clustering on SoCs
- There is a new person on their side who is familiar with ML; will talk to him today
General
- Applied ML Days
- Meeting with NTUA Wednesday