Category Archives: Computer

3D projection experiments

I posted about this last time.

These are my experiments using Python.

You can see that I had some trouble with projecting depth in some of these animations.
The sliders helped me understand what was going on.

I’ve used some information from these books.

Code (much like my BBC Acorn BASIC program)

import tkinter as tk
import math

def rotate(p, ax, ay):
    x, y, z = p

    # rotate around the X axis by ax
    c, s = math.cos(ax), math.sin(ax)
    y, z = y * c - z * s, y * s + z * c

    # rotate around the Y axis by ay
    c, s = math.cos(ay), math.sin(ay)
    x, z = x * c + z * s, -x * s + z * c

    return x, y, z


def project(p, blend):
    x, y, z = p

    # isometric
    iso_x = x - z
    iso_y = y + (x + z) * 0.5

    # perspective
    d = 4
    f = d / (d + z)
    per_x = x * f
    per_y = y * f

    # blend projections
    px = iso_x * (1 - blend) + per_x * blend
    py = iso_y * (1 - blend) + per_y * blend

    return px, py

CUBE = [
    (-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
    (-1, -1,  1), (1, -1,  1), (1, 1,  1), (-1, 1,  1)
]

EDGES = [
    (0,1),(1,2),(2,3),(3,0),
    (4,5),(5,6),(6,7),(7,4),
    (0,4),(1,5),(2,6),(3,7)
]

root = tk.Tk()
root.title("Rotating Cube")

canvas = tk.Canvas(root, width=400, height=400, bg="black")
canvas.pack()

slider = tk.Scale(root, from_=0, to=1, resolution=0.01,
                  orient="horizontal", label="Isometric / Perspective")
slider.pack(fill="x")

ax = ay = 0

def draw():
    global ax, ay
    canvas.delete("all")

    blend = slider.get()
    points = []

    for p in CUBE:
        r = rotate(p, ax, ay)
        x, y = project(r, blend)
        points.append((200 + x * 80, 200 + y * 80))

    for a, b in EDGES:
        canvas.create_line(*points[a], *points[b], fill="white")

    ax += 0.02
    ay += 0.03
    root.after(16, draw)

draw()
root.mainloop()
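As a sanity check on the projection math, here is the same blended projection re-implemented standalone (so the snippet runs on its own), evaluated for the cube corner (1, 1, 1):

```python
# Standalone copy of the blended projection, for checking one corner by hand.
def project(p, blend, d=4):
    x, y, z = p
    iso_x = x - z                    # isometric
    iso_y = y + (x + z) * 0.5
    f = d / (d + z)                  # perspective scale factor
    px = iso_x * (1 - blend) + x * f * blend
    py = iso_y * (1 - blend) + y * f * blend
    return px, py

print(project((1, 1, 1), 0))  # pure isometric: (0.0, 2.0)
print(project((1, 1, 1), 1))  # pure perspective: f = 4/5, so (0.8, 0.8)
```

The slider in the Tk program just sweeps blend between these two extremes.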

BBC Acorn BASIC – 3D projection

I found some of my old BBC Acorn BASIC programs a while ago.
While researching 3D projections, I ran some of them in an emulator.

Cleaned-up code for Brandy BASIC

MODE 1
VDU 5

DIM X(7), Y(7), Z(7)
DIM SX(7), SY(7)

DATA -1,-1,-1, 1,-1,-1, 1,1,-1, -1,1,-1
DATA -1,-1,1,  1,-1,1,  1,1,1,  -1,1,1

FOR I=0 TO 7
READ X(I), Y(I), Z(I)
NEXT

DIM E(11,1)
DATA 0,1, 1,2, 2,3, 3,0
DATA 4,5, 5,6, 6,7, 7,4
DATA 0,4, 1,5, 2,6, 3,7

FOR I=0 TO 11
READ E(I,0), E(I,1)
NEXT

angleX=0
angleY=0
REPEAT
CLS

camera=5
scale=600

FOR I=0 TO 7
y1 = Y(I)*COS(angleX) - Z(I)*SIN(angleX)
z1 = Y(I)*SIN(angleX) + Z(I)*COS(angleX)
x2 = X(I)*COS(angleY) + z1*SIN(angleY)
z2 = -X(I)*SIN(angleY) + z1*COS(angleY)

zc = z2 + camera
IF zc=0 THEN zc=0.01

SX(I) = 640 + (x2 * scale) / zc
SY(I) = 512 - (y1 * scale) / zc

NEXT
FOR I=0 TO 11
MOVE SX(E(I,0)), SY(E(I,0))
DRAW SX(E(I,1)), SY(E(I,1))
NEXT
angleX = angleX + 0.03
angleY = angleY + 0.02

WAIT 2
UNTIL FALSE

Some old notes

DIY VR using two cameras on a Raspberry Pi 5

Above is a screenshot of a browser window (left and right eye views, fullscreen).
The colors are a little off (a codec red/blue problem?).
But the setup works!

I used an Android phone in the setup above.
I also tried a Quest 2 VR headset, but I couldn’t get its browser into fullscreen mode (yet).

Hardware setup

Two Raspberry Pi Camera Modules, connected via the two 4-lane MIPI DSI/CSI connectors.

Manually focused and mounted on 3D-printed stands on a piece of wood; that will do for now.

I first built an NGINX RTSP proxy to test, which I had previously used for OBS. But there was too much latency.

So I used the WebRTC setup below, with a latency under 80 ms.
(I previously did some tests using Janus.)

CODE:

wget https://github.com/bluenviron/mediamtx/releases/download/v1.16.1/mediamtx_v1.16.1_linux_arm64.tar.gz
tar xzvf media*
cp mediamtx.yml mediamtx.org

NEW mediamtx.yml

webrtc: yes
webrtcAddress: :8889

rtmp: yes
rtmpAddress: :1935

paths:
  dualcam:
    source: publisher

Run it:

./mediamtx mediamtx.yml

Next, make a streamer.
This Python script takes the two square camera inputs, merges them side-by-side into one image, and pushes H.264 frames to MediaMTX.

import numpy as np
from picamera2 import Picamera2
import subprocess
import time

#WIDTH = 1280
WIDTH = 720
HEIGHT = 720
FPS = 30
BITRATE = "2500k"
RTMP_URL = "rtmp://127.0.0.1:1935/dualcam"  # MediaMTX RTMP

# FFmpeg: raw BGR frames in, H.264 out over RTMP
ffmpeg_cmd = [
    "ffmpeg",
    "-y",
    "-f", "rawvideo",
    "-pix_fmt", "bgr24",
    "-s", f"{WIDTH*2}x{HEIGHT}",
    "-r", str(FPS),
    "-i", "-",
    "-c:v", "libx264",
    "-preset", "ultrafast",
    "-tune", "zerolatency",
    "-b:v", BITRATE,
    "-g", str(FPS),  # keyframe every second
    "-x264-params", "keyint=30:min-keyint=30:no-scenecut=1",
    "-pix_fmt", "yuv420p",
    "-f", "flv",
    RTMP_URL
]


ffmpeg = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE, bufsize=0)

picam0 = Picamera2(0)
picam1 = Picamera2(1)

cfg0 = picam0.create_video_configuration(
    main={"size": (WIDTH, HEIGHT), "format": "BGR888"}, controls={"FrameRate": FPS}
)
cfg1 = picam1.create_video_configuration(
    main={"size": (WIDTH, HEIGHT), "format": "BGR888"}, controls={"FrameRate": FPS}
)

picam0.configure(cfg0)
picam1.configure(cfg1)

picam0.start()
picam1.start()

print("Streaming to MediaMTX via RTMP...")

try:
    while True:
        # note: the cameras are read in swapped order so left/right match the mounting
        f0 = picam1.capture_array()
        f1 = picam0.capture_array()
        combined = np.hstack((f0, f1))
        ffmpeg.stdin.write(combined.tobytes())
        time.sleep(1/FPS)
except KeyboardInterrupt:
    print("Stopping...")
finally:
    picam0.stop()
    picam1.stop()
    ffmpeg.stdin.close()
    ffmpeg.wait()

Open using http://REMOTEIP:8889/dualcam
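The frame geometry is easy to verify without the cameras attached; two dummy square frames stand in for the Picamera2 captures:

```python
import numpy as np

# Dummy 720x720 BGR frames in place of the two camera captures.
left = np.zeros((720, 720, 3), dtype=np.uint8)       # all black
right = np.full((720, 720, 3), 255, dtype=np.uint8)  # all white

combined = np.hstack((left, right))
print(combined.shape)  # (720, 1440, 3), matching the -s {WIDTH*2}x{HEIGHT} passed to FFmpeg
```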

Immich is amazing

I’ve been running this Google Photos alternative for a week now, and I am pleasantly surprised.

  • Face detection: spot on
  • Responsiveness: fast, even with a large library
  • Android uploads: it just works! (I used Nextcloud before)
  • Movies: play smoothly (there is a cast button for movies and images)

The face detection had only 1 mismatch in my library.

Negatives?

Well, maybe album management; it could be better and more flexible.

Some search tests:

  • Food – indeed found food
  • Rum – found drinks
    (when I changed the search to OCR, it gave me images with the word RUM on them!)
  • Dog – the first results are indeed dogs, after that other animals
  • Smiling / Kissing – works
  • Hair / red / computer / music / comic

Amazing results!

Features (some)

  • Docker instance for simple upgrades
  • Facial Recognition
  • Hardware Transcoding
  • Hardware-Accelerated Machine Learning
  • Reverse Geocoding (see below)

Let’s copy the rest of my photo library to this server.
(Storage is on a 10Gbit fiberoptic iSCSI device)

Raspberry Pi 5 Projects

Again … out of SBCs.
Where are all these things in my home? Someone is stealing Raspberry Pis, ESP32s and other sensors.
(Probably me)

So I’ve got multiple projects running on one RPi.

  • Dual cameras on top (brown ribbons): these are for the VR streaming project.
  • The same cameras also serve a Red Light Green Light game (using motion detection on both cameras, for two players).
  • Below them, an INMP441 MEMS microphone: a test for BirdNET recording.

All of the above are partially working. Code follows.
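For the Red Light Green Light game, the per-player motion check can be sketched without any camera hardware, using plain NumPy frame differencing (the thresholds here are placeholders, not my tuned values):

```python
import numpy as np

def motion_fraction(prev, curr, pixel_thresh=25):
    """Fraction of pixels that changed noticeably between two grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return np.count_nonzero(diff > pixel_thresh) / diff.size

def player_moved(prev, curr, motion_thresh=0.02):
    # A player is "out" when more than 2% of their camera's pixels changed.
    return motion_fraction(prev, curr) > motion_thresh

# Synthetic demo frames: identical frames, then one with a moving blob.
a = np.zeros((120, 160), dtype=np.uint8)
b = a.copy()
b[40:80, 60:100] = 200

print(player_moved(a, a))  # False
print(player_moved(a, b))  # True (the 40x40 blob is ~8% of the frame)
```

On the Pi, each camera would feed its own prev/curr pair, one per player.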

The INMP441 is a tricky thing; I needed to do some bitbanging to get it working.

Loads of INMP441 info will be posted

New self-hosted Spotify alternative

I’ve tested many open-source tools to build a personal self-hosted Spotify.
Now I have migrated to Navidrome.

List of alternatives I’ve used:

  • …. to be filled in

I can access this with a browser or an Android app named amcfy music.

Why?

  • I like self hosting stuff
  • I’ve got a lot of obscure music, which can’t be found on main streaming services
  • Our folkband stuff is for personal use only (Tapsalteerie/NaeBother)

I don’t have time to post other stuff; I’m balancing almost 10 projects at the same time…

STM32 Nucleo-64 development board

I’ve been playing with all kinds of microcontrollers, but not this one.

Something new to learn.

The STM32 Nucleo-64 board provides a flexible way to try out STM32 microcontrollers. Arduino Uno V3 shields can be connected to its headers.

In short: STM32 (Arm Cortex-M cores) excels at high-performance, deterministic industrial control, with better real-time behavior, lower power, and rich peripherals. ESP32 (Xtensa cores) dominates IoT with built-in Wi-Fi/Bluetooth, lower cost, easy Arduino/PlatformIO access, and a strong community, but with higher power draw and less precise real-time control. That makes the ESP32 great for connected projects and the STM32 for industrial/precision tasks.

STM32 (STMicroelectronics)

Strengths:

  • Performance: Superior real-time processing, deterministic behavior, efficient for complex control.
  • Power: Advanced low-power modes, excellent for battery-powered devices.
  • Peripherals: Rich, precise analog (ADC/DAC), extensive interface options (USB, SD, LCD).
  • Reliability: Strong for industrial, medical, and automotive applications.
  • Tools: STM32CubeIDE/MX, HAL/LL libraries.

Weaknesses:

  • Higher cost and learning curve.
  • Requires external modules for Wi-Fi/Bluetooth.

ESP32 (Espressif Systems)

Strengths:

  • Connectivity: Integrated Wi-Fi and Bluetooth (BLE).
  • Cost & Ease: Cost-effective, easy entry with Arduino IDE/PlatformIO, great for rapid prototyping.
  • Community: Strong open-source community.
  • Features: Dual-core (often), built-in OTA updates, good for audio/AI.

Weaknesses:

  • Less deterministic/real-time performance than STM32.
  • Higher active power consumption, less precise analog.
  • Can have complex debugging/compilation.

When to Choose Which

  • Choose STM32 for: Industrial automation, precise instrumentation, medical devices, complex motor control, low-power wearables, general embedded systems learning.
  • Choose ESP32 for: IoT devices, smart home products, Bluetooth beacons, educational projects, rapid prototyping, audio/voice applications.

Fireworks LED addition and modifying Arcade buttons

Last month I gave people on the street control over my Xmas/fireworks lights. (This month it is going to be converted into an interactive game.)

I saw some LED strip dividers on AliExpress; next year it’s going to have a star on top.

Like this….

Another LED-related project I started today is a Whack-A-Mole game with multiple levels.
For this I need to convert simple arcade buttons to a programmable multicolor version.

From a single white LED to multicolor and programmable.
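While the buttons are being converted, the game logic can already be sketched hardware-free (the level table and button count below are placeholder guesses, not final values):

```python
import random

# Placeholder levels: (simultaneous moles, seconds each stays lit).
LEVELS = {1: (1, 2.0), 2: (2, 1.5), 3: (3, 1.0)}
NUM_BUTTONS = 6

def pick_moles(level, rng=random):
    """Choose which buttons light up this round."""
    count, _lit_time = LEVELS[level]
    return set(rng.sample(range(NUM_BUTTONS), count))

def score_hits(moles, presses):
    """+1 per mole hit, -1 per press on an unlit button."""
    return len(moles & presses) - len(presses - moles)

print(len(pick_moles(2)))             # 2 buttons lit
print(score_hits({0, 3}, {0, 3, 5}))  # two hits, one wrong press -> 1
```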

Analog Meters to display CPU and memory load

While this is an old project from 2019, I decided to make a more responsive version after my friend Tyrone mentioned a similar project somewhere on the internet (I forgot where).
Time to dust off this project!

2019 version

The version above worked but was slow.
I used a Python script to send values to the controller.

Memory setup was the same.

Below my new schematic, using an opamp to drive the analog meter.

Untested design… yeah, I got bored on New Year’s Eve.

Using an MCP41000 digital potentiometer and an LM358 op-amp, I hope to get a more responsive setup.

Input will come via MQTT, and maybe serial.
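Mapping a 0–100% load onto the MCP41000’s 8-bit wiper is simple enough to sketch now; the SPI command byte 0x11 (“write data, pot 0”) is from my reading of the MCP41XXX datasheet, and the actual spidev/MQTT wiring is left out:

```python
def load_to_wiper(load_pct):
    """Map a 0-100% load onto the MCP41000's 0-255 wiper positions."""
    load_pct = max(0.0, min(100.0, load_pct))  # clamp out-of-range input
    return round(load_pct * 255 / 100)

def mcp41000_frame(wiper):
    # 0x11 = MCP41XXX command byte "write data to potentiometer 0".
    return bytes([0x11, wiper])

print(load_to_wiper(50))                         # 128
print(mcp41000_frame(load_to_wiper(100)).hex())  # 11ff
```

On the Pi this byte pair would go out over SPI on each MQTT update.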

Old version