Commit 4f6c81b
committed
Sofa session
1 parent 8b3e4b6 commit 4f6c81b

1 file changed: +106 −26 lines

_drafts/2025-03-02-building-your-own-ai-and-e-ink-powered-art-gallery-local.md
@@ -124,10 +124,9 @@ An explanation I would happily give without prompt.
The reason it looks so realistic is that what you are looking at is actual ink.
If you have ever used a Kindle or a [Remarkable tablet](https://remarkable.com/), you know what I am talking about.
The screen consists of small "pixels" of oil with different pigments.
The pigments can then be moved up or down with an electric field, defining the color of the pixel.
Want to know more?
Read it on [wikipedia.org/wiki/E_Ink](https://en.wikipedia.org/wiki/E_Ink) and [wikipedia.org/wiki/Electronic_paper](https://en.wikipedia.org/wiki/Electronic_paper).

There are quite a few providers out there, but our e-ink provider of choice is [waveshare.com](https://www.waveshare.com/).
We chose them because others have had success with them, and they have fairly good documentation and prices.
@@ -196,30 +195,61 @@ Anything drawing, painting, sketch related usually translates well
For inspiration, many style libraries have been created. We found that [midlibrary.io](https://midlibrary.io/) gives a quite good selection of styles and artists that work well.
Especially the black-and-white section.

### Hosting the image generator service on Windows

Since you are following this setup guide, I assume you have a graphics card, and I will also assume that it sits in a Windows machine.
The easiest way to set up a service is to use the Windows Subsystem for Linux (WSL).

There were some speed problems with Windows 10 and WSL2, as disk read/write was very slow.
Windows 11 with WSL2 seems much more stable. Note that you need more disk space than you think for a Linux subsystem.
In my experience with Windows 11 and WSL2, getting CUDA access to your Windows GPU from Linux is quite smooth.
The setup guide is as follows.

<details markdown="1">
<summary><b>Set up Windows Subsystem for Linux with CUDA</b></summary>

- Install CUDA on Windows (you probably already have it): [developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads)
- Install WSL: [learn.microsoft.com/en-us/windows/wsl/install](https://learn.microsoft.com/en-us/windows/wsl/install)

Open a terminal and install WSL:

```shell
wsl --install
```

When WSL is installed, update the Linux packages:

```shell
# update apt
sudo apt update
sudo apt upgrade
```

Download the CUDA bridge: select Linux, x86_64, WSL-Ubuntu, 2.0, deb (network) at
[developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_network](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_network)
As of writing, this means running the following commands in WSL Ubuntu:

```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt -y install cuda-toolkit-12-3
```

Lastly, set up Python, either with `conda`, `uv`, or `pip`.

</details>

With that, you should be able to use a Python environment with CUDA in a Linux environment, hosted by Windows.

With the Linux subsystem we can set up a job to run our service every day at 4 am.
Set up a cron job with `crontab -e` using the following syntax:

```
30 4 * * * cd ~/path/to/project && start-service
```
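One caveat: on a fresh WSL Ubuntu the cron daemon is not necessarily running, so it is worth checking before trusting the job (a sketch using Ubuntu's `service` wrapper):

```shell
# check that the cron daemon is actually running inside WSL
sudo service cron status

# and start it if it is not
sudo service cron start
```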

## Dithering: from a greyscale photo to binary black/white

When translating a photo from greyscale to black and white (meaning binary here), we need to account for the error introduced when we cannot represent grey.
This is called [error diffusion/dithering](https://en.wikipedia.org/wiki/Dither) and is a well-known problem.
The default dithering algorithm on most systems is [Floyd-Steinberg dithering](https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dithering),
which is the most numerically accurate way of doing it.
It works by distributing the error from rounding grey to either black or white onto the neighboring pixels, moving from the top left.
So if $$*$$ is the current pixel, the error would be distributed like this:

$$
\begin{bmatrix}
@@ -228,6 +258,13 @@ $$
\end{bmatrix}
$$

However, in practice the numerically correct method diffuses the error out very densely, making the picture look grey-ish.
This is especially prominent in low-resolution images, which is exactly what we have.

With experience we found that the algorithm used on old Macs, [Atkinson dithering](https://en.wikipedia.org/wiki/Atkinson_dithering),
works really well for low-resolution photos.
The difference is that instead of diffusing the full error, only part of it is diffused.

$$
\begin{bmatrix}
& & * & \frac{\displaystyle 1}{\displaystyle 8} & \frac{\displaystyle 1}{\displaystyle 8} \\
@@ -236,21 +273,62 @@ $$
\end{bmatrix}
$$
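As a tiny worked example of the Atkinson rule above (plain Python, values chosen for illustration): a pixel of value 100 falls below the threshold of 128, is set to black, and each of the six neighbours receives an eighth of the error.

```python
# Atkinson error diffusion for a single pixel, step by step
old = 100               # greyscale value below the 128 threshold
new = 0                 # so the pixel is rounded to black
err = (old - new) // 8  # each of the 6 neighbours gets this share
total_diffused = 6 * err

print(err, total_diffused)  # 12 72 -> 28/100 of the error is simply dropped
```

That dropped error is exactly why Atkinson yields a higher-contrast, less grey result than a full diffusion.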

The result is that the image has more concentrated pixel areas and higher contrast, as seen in the following comparison.

![Dithering results]({{ site.baseurl }}/assets/images/eink_art/dithering_example.png)
**Figure:** A greyscale image (A), dithered with Floyd-Steinberg (B) and with Atkinson dithering (C).

It might be a little difficult to see, but notice how (B) is more grey than (C).
This is a lot more visually clear when applied on an actual physical low-res e-ink screen.

Now, the implementation does a lot of for-loops, so pure Python is not really the best option.
And Pillow only implements Floyd-Steinberg.
But using Numba we can get something fast working really quickly.

<details markdown="1">
<summary><b>dithering_implementations.py</b></summary>

```python
import numpy as np
from numba import jit
from PIL import Image


def atkinson_dither(image: Image.Image) -> Image.Image:
    img = np.array(image.convert("L"), dtype=np.int32)
    set_atkinson_dither_array(img)
    return Image.fromarray(np.uint8(img))


@jit
def set_atkinson_dither_array(img: np.ndarray):
    """Changes img array in-place with Atkinson dithering"""

    low = 0
    high = 255

    frac = 8  # Atkinson constant: diffuse 6/8 of the error, drop the rest
    neighbours = np.array([[1, 0], [2, 0], [-1, 1], [0, 1], [1, 1], [0, 2]])
    threshold = np.zeros(256, dtype=np.int32)
    threshold[128:] = 255
    height, width = img.shape
    for y in range(height):
        for x in range(width):
            old = img[y, x]
            old = np.min(np.array([old, 255]))
            new = threshold[old]
            err = (old - new) // frac
            img[y, x] = new
            for dx, dy in neighbours:
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    # Keep values between 0 and 255 (diffused error could push past the limits)
                    img_yx = img[ny, nx] + err
                    img_yx = np.minimum(high, np.maximum(img_yx, low))
                    img[ny, nx] = img_yx
```

</details>

If you are doing multiple colors, you can diffuse the error per color channel.
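For comparison, Floyd-Steinberg is what you get for free from Pillow: converting to mode `"1"` applies it by default. A minimal sketch on a synthetic gradient (the image here is generated, not one of ours):

```python
import numpy as np
from PIL import Image

# A horizontal greyscale gradient as a synthetic test image
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
grey = Image.fromarray(gradient, mode="L")

# Pillow's mode "1" conversion applies Floyd-Steinberg dithering by default
fs_dithered = grey.convert("1")

# every pixel is now either black or white
print(sorted(set(fs_dithered.getdata())))  # [0, 255]
```

Swapping in the `atkinson_dither` function above is a one-line change, which makes side-by-side comparisons easy.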
## Choice of hosting model

@@ -587,6 +665,8 @@ It was expensive, but worth it for the final touch.

## Known issues

> **NOTE:** WSL will shut down if no shell is running, so you need to leave a terminal open on your machine.

- Why do we need to invert the colors for the esphome setup?
- Soldering could be an issue, check
- Check which config you need to set the HAT
