I use a bunch of different tools to create videos or stream stuff. Below is some info about those tools.
Software:
Kdenlive – Non-linear video editor (adding text, transitions, etc.)
VLC media player – For example, to embed video in OBS
OBS – Open Broadcaster Software; I also use this to record my screen. You can use it as a virtual webcam, so you can play around with the image.
Audacity – For editing audio
QPrompt – Teleprompter
For OBS I made a shortcut/macro keyboard, based on an Arduino Pro Mini (which can connect to a computer acting as a HID, for example a keyboard or mouse). I use it to emulate keystrokes which I've configured in OBS to:
Switch to scene 1
Switch to scene 2
Transition from scene to scene
Start streaming
Start recording
Mute
[empty] – sometimes used as “start virtual webcam”
Slow transition
Blank screen
Display overlay text
(Originally I planned to do this with a Nextion display)
Mobile Phone holder, like a third hand
Sometimes a Nikon on a tripod is better.
Chromakey / green screen (portable version) – green screens .. loads of fun
Video grabbers
I don't have a dedicated webcam for my battlestation, so I mainly use an action cam (4K) which can be connected via USB, or I use a Nikon together with the Camlink.
So I record using my mobile, webcam or screen capture, and edit using Audacity and Kdenlive.
When recording with OBS I use MP4 as a container; it's a no-brainer to embed in websites. Use MKV when recording long shots, or when connections can break (an MP4 will end up corrupted).
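If you recorded to MKV, you can still get an MP4 for the web afterwards by remuxing without re-encoding; a quick sketch with ffmpeg, the filenames are just examples:
# Remux an MKV recording into an MP4 container without re-encoding
ffmpeg -i recording.mkv -c copy recording.mp4
(OBS itself also has a remux option in the File menu.)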
Composite video PCB designed and ordered from China.
Changed some VLANs in my network. I need to think of a way to extract/migrate the Domoticz 433 MHz device info into a new instance. For example, I've got some devices in my device list which are only controlled by Domoticz; there is no remote I can reuse.
Tried welding again; because I could not do it for a long time, I noticed I have to practice again after two years. (I've got a dedicated power outlet outside now .. 🙂 )
Finished the last of the 8mm film work. (Converted all of my dad's old 8mm reels)
Designed a hidden remote cabinet, holding remotes out of sight for the occasions when automation doesn’t work.
Also designed a wooden wall with hidden cabinets in our bedroom.
Searx is a free and open-source metasearch engine, available under the GNU Affero General Public License version 3, with the aim of protecting the privacy of its users. To this end, Searx does not share users’ IP addresses or search history with the search engines from which it gathers results.
It's easy to install using Docker, but I wanted to add my own MySQL server data (the pipetune search engine data in the example below). There are many search plugins and it's quite hackable, but a Python module was missing in the Docker image:
ModuleNotFoundError: No module named 'mysql'
So I built a new Docker image based on the original.
# Install docker and docker-compose
cd /usr/local
git clone https://github.com/searxng/searxng-docker.git
cd searxng-docker
Edit the .env file to set the hostname
Generate the secret key sed -i "s|ultrasecretkey|$(openssl rand -hex 32)|g" searxng/settings.yml
Edit the searxng/settings.yml file according to your need
Check everything is working: docker-compose up
Run SearXNG in the background: docker-compose up -d
I’ve changed the docker-compose.yaml
Changed
< image: searxng/searxng:latest
into
> build: .
And changed the listen address
< - "127.0.0.1:8080:8080"
into
> - "8080:8080"
Created a Dockerfile
FROM searxng/searxng:latest
RUN pip install mysql-connector-python
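To make docker-compose build from this Dockerfile instead of pulling the stock image, rebuild and restart; roughly like below (the searxng service name is an assumption taken from the searxng-docker compose file):
# Rebuild the local image and restart the stack
docker-compose up -d --build
# Check that the module is now available inside the container
docker-compose exec searxng python3 -c "import mysql.connector"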
I started to get some composite video generated with an Arduino for my 6502 project.
UPDATE: 20221021
It is based on Grant Searle’s design, and yesterday I had some signals on my scope which looked like a screen with a character. But my monitor would not recognize a usable signal.
Today I tried a second version and another set of chips and crystals.
It looks like a signal, but I can’t see a clock pulse from the crystal?! So .. how?
Maybe I used a bad power supply. And killed something?
UPDATE: 20221021
After switching to another power supply and checking the ATmega328P fuses again (they were also wrong) .. at least SOME success!
Still a little sync problem, but I've got a blinking cursor!
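For reference, the fuses can be read back with avrdude; a rough example assuming a USBasp programmer (adjust -c to whatever programmer you use):
# Read the low/high/extended fuses of the ATmega328P as hex
avrdude -p m328p -c usbasp -U lfuse:r:-:h -U hfuse:r:-:h -U efuse:r:-:h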
There are a lot of old development boards for all kinds of CPUs.
These were built to learn machine code programming. They were mostly made in the '80s and based on popular CPUs of that time.
I own some of these SDKs (System Design Kits):
SDK-85 (bought recently) – 8085 CPU
Microprofessor-1 (MPF-1) – Z80 CPU
And my own 680x-based computer
Most of these use a keyboard scanner which is also connected to 7-segment displays.
The way they work is practically the same. There is a VIA or PIA (Versatile Interface Adapter or Peripheral Interface Adapter). These have two 8-bit ports to control devices. Four of those bits can be expanded into a set of select lines by feeding them into a decoder such as a 74LS145. If you put a counter on those 4 bits, you sequentially activate one line after another. These lines can be used to scan a keyboard matrix OR to enable a digit of a 7-segment display. The displays won't hold the data (and keep showing the character) when not activated; the trick is to update the display fast enough that you don't see it flickering on and off.
Activate a line and read a byte with the VIA = reading a keyboard row
Activate a line and write a byte with the VIA = displaying a character on a segment
These VIAs/PIAs were made with specific timings to match the CPU: 6522/6820/8255.
Below you see some different implementations of these keyboard/display combos.
Thaler 6502 kit
Microprofessor MPF-1 kit (ignore red circle)
SDK85 kit
Eltec 6800
My version using Darlington arrays (ULN2003)
When looking at the 8085 version you see discrete transistors; a ULN2003 is a chip with those transistors/amplification enclosed. It doesn't draw much current from the bus, and diodes protect the direction in which the current flows.
In February 2021 I made a web page to view images and movies in a browser to do some quick sorting (borrowed some code from a CodePen page, if I recall correctly). At the time I didn't have a good way to view WebP/WebM media. I wanted to view multiple files at the same time, and keep it short and simple.
BTW, no web server is needed, just open the file from a directory! JPGs, PNGs, WebM, WebP, MP4, SVG and animated GIFs work. (Maybe more, I didn't test further; whatever your browser supports.)
With recent updates of the Chrome browser the video mute attribute handling broke, so I made a workaround. Also, everything is in one file now. Except for one issue .. I couldn't create one file for images AND videos.
There is a piece of JavaScript I could not fix .. yet. I have to execute a document.createElement which is different for images and videos. Also, the attributes of a video element are muted, autoplay, loop and playsinline.
We bought some servers a while ago, but these have old iLO versions (iLO 2).
Managing these servers via iLO was no problem, until modern browsers started refusing to connect to their web interfaces because of TLS 1.0 issues.
So what I did was use a second user account on my workstation with an old version of Firefox (downloaded from the Mozilla release archive) to administer the iLO.
wget https://ftp.mozilla.org/pub/firefox/releases/50.0/linux-x86_64/en-US/firefox-50.0.tar.bz2
Extract it in the other user's home directory.
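Something like this should do it (the target path is an assumption); the tarball unpacks into a firefox/ directory:
# Unpack the old Firefox into the other user's home directory
sudo tar xjf firefox-50.0.tar.bz2 -C /home/otheruser/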
usage:
# ssh with X forwarding and start old version
ssh -X otheruser@localhost firefox/oldfirefox
While this was working for me on a Debian-based machine, it didn't work for my friend who was using Fedora on Wayland.
So I made a more generic solution which always works, also when working from Windows.
I downloaded an old Fedora ISO: https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/15/Fedora/x86_64/iso/ Using the DVD ISO, I knew the old Java was present.
So I started virt-manager and created a new virtual machine.
Select your downloaded Fedora 15 ISO
Where is the thin option?!??!
Create a disk image for the OS; don't worry about the size, we are going to shrink it to a minimum (thin provisioned)
Booting from ISO
Do not forget to tick Customize now
Disable everything that's not needed! Only GNOME, Graphical Internet .. and Java
Create users, complete the install and reboot. Test your installation. Shut down.
sudo qemu-img info /var/lib/libvirt/images/fedora15.qcow2
When the above shows a RAW image, you need to convert from RAW to QCOW2 first. Mine showed a 9G qcow2 image .. far too large.
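Shrinking then comes down to a qemu-img convert; a sketch below, where the -small filename is just an example, -c compresses the image and unallocated space gets dropped along the way (if info reported raw, add -f raw for the source):
# Convert/compress into a new, smaller qcow2 image
sudo qemu-img convert -O qcow2 -c /var/lib/libvirt/images/fedora15.qcow2 /var/lib/libvirt/images/fedora15-small.qcow2
sudo qemu-img info /var/lib/libvirt/images/fedora15-small.qcow2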
I recently had a crash on one of my Raspberry Pis .. SD card failure; they are not made for a lot of write actions. In the past I've changed some images to read-only with an r/w overlay, and put tmp filesystems in memory .. all not ideal.
So I've started to make every RPi boot from SSD.
I've already got several SSDs from other projects. SATA-to-USB adapters are cheap, only a few euros.
Steps to take:
Download Raspberry Pi Imager tool
Choose OS > Misc Utility Images > Bootloader > USB Boot
Select storage and write to a temporary SD card (not needed any more after flashing, for normal operation)
Boot the RPi from this micro SD card so the bootloader gets updated for USB boot .. I didn't have a screen connected .. so I just waited a few minutes
While I was waiting I wrote an OS image to the SSD using the same Imager tool
Choose OS > select sata/ssd drive
Change options (cog), enable ssh, choose hostname and set password
Write to drive
Remove the SD card from the RPi, attach the SSD/SATA adapter and boot
My 3D-printed SD card case; luckily there was still one card in there (32GB, kinda big, but it was only for temporary use .. the 16GB one was broken ..)
So .. without attaching a screen or keyboard, just a network cable, I have a running OS on an RPi booting from SSD.
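A quick check (over SSH) that the root filesystem really lives on the SSD:
# Show which device / is mounted from and list the block devices
findmnt /
lsblk -o NAME,SIZE,MOUNTPOINT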
Last year I made a script for a friend who wanted to detect visually if his garden sprinkler was on or off. A few days ago I saw someone who wanted to see if things were moving in his house (didn't trust his landlord, I think). But he only had a dumb/simple/cheap camera .. so it had no motion detection.
I was thinking of my script, and could easily adapt it for this usage.
Most IP cams have some kind of URL/API you can use to capture an image; an example is used in the script below.
So using the script below I can capture an image, compare it to the previous one, and when the difference is above a certain threshold it sends an email.
#!/bin/bash
# Only uses wget and ImageMagick (compare)
threshold=500
fuzziness=20%
# CHANGE WEBCAM URL AND CREDENTIALS TO YOUR OWN
wget -q "http://webcamip/cgi-bin/api.cgi?cmd=Snap&channel=0&user=user&password=password" -O previous.jpg
while true; do
    wget -q "http://webcamip/cgi-bin/api.cgi?cmd=Snap&channel=0&user=user&password=password" -O current.jpg
    # Mean absolute error between the two snapshots (integer part only)
    value=$(compare -fuzz $fuzziness previous.jpg current.jpg -metric mae diff.jpg 2>&1 | cut -f1 -d.)
    if [ "$value" -gt "$threshold" ] ; then
        echo "difference $value is above threshold $threshold"
        echo "Something moved" | mail -s "Movement" user@example.com -A diff.jpg
    fi
    # Comment out the line below to compare against a baseline instead of the previous image
    cat current.jpg > previous.jpg
    sleep 60
done
Example previous picture
Example current picture
The mail I got with the result
Hints & tips:
Use crop to detect changes in only a part of the image (see the sketch after this list).
Copy current.jpg to a second file.
Paint part of the image black and compare with different threshold/fuzziness values to get different hotspots.
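A rough sketch of those hints with ImageMagick; the coordinates are just examples:
# Only compare a 200x200 region starting at offset 100,50
convert previous.jpg -crop 200x200+100+50 +repage previous-crop.jpg
convert current.jpg -crop 200x200+100+50 +repage current-crop.jpg
compare -fuzz 20% previous-crop.jpg current-crop.jpg -metric mae diff-crop.jpg
# Or black out a part you want to ignore before comparing
convert current.jpg -fill black -draw "rectangle 0,0 300,200" current-masked.jpg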
Below you see the per-channel output including RED; use the idea above with crop to detect red/green/blue LEDs.
compare -verbose -metric mae 1.jpg 2.jpg /tmp/1.diff
1.jpg JPEG 2560x1920 2560x1920+0+0 8-bit sRGB 248819B 0.050u 0:00.057
2.jpg JPEG 2560x1920 2560x1920+0+0 8-bit sRGB 248949B 0.030u 0:00.137
Image: 1.jpg
Channel distortion: MAE
red: 12517.5 (0.191005)
green: 11967.1 (0.182607)
blue: 12492.8 (0.190628)
all: 12325.8 (0.18808)
1.jpg=>/tmp/1.diff JPEG 2560x1920 2560x1920+0+0 8-bit sRGB 1.19495MiB 1.470u 0:00.197
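To use only the red channel value from that verbose output in a script, something like this works (awk grabs the number after "red:"):
# Red-channel MAE as an integer
red=$(compare -verbose -metric mae 1.jpg 2.jpg /tmp/1.diff 2>&1 | awk '/red:/ {print int($2)}')
echo "$red"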
"If something is worth doing, it's worth overdoing."