tinit - tiny init for microvm

tinit, a tiny but valid init for microvm

Github repo

What is tinit?

tinit is a basic, tiny init for microvm guests. It is a valid init, but not a full-featured one: it is not a replacement for systemd, just a simple init for microvms.

  • It mounts required filesystems.
  • It configures the network.
  • It configures the hostname.
  • It runs the user-specified command.
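In shell terms, the four steps above amount to something like the following dry-run sketch. It only prints each command instead of executing it (the real tinit does this in C, as PID 1); the mount points are the standard ones, but the interface name, address, hostname, and final command are illustrative assumptions:

```shell
#!/bin/sh
# Dry-run sketch of tinit's boot sequence: prints each step instead of
# executing it. Swap `echo "+ $*"` for a real exec to actually run them.
run() { echo "+ $*"; }

# 1. Mount the required pseudo-filesystems.
run mount -t proc     proc  /proc
run mount -t sysfs    sysfs /sys
run mount -t devtmpfs dev   /dev

# 2. Configure the network (names and addresses are illustrative).
run ip link set lo up
run ip addr add 10.0.2.15/24 dev eth0
run ip link set eth0 up

# 3. Configure the hostname.
run hostname microvm

# 4. Hand control over to the user-specified command.
run exec /sbin/getty ttyS0
```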

Compile from source

  • You need to install the following packages: make, gcc, git

  • Clone the repository:

git clone https://github.com/mofm/tinit.git
  • You can edit the Makefile to change the installation directory, CFLAGS, and other settings.
  • Compile:
make

mv-kernel - kernel build tool for microvm

mv-kernel, a kernel configuration and build tool for micro virtual machines.

Github repo

1. Quick Start

mv-kernel is a fast way to build a kernel for a micro virtual machine.

1.1. Clone mv-kernel

$ git clone https://github.com/mofm/mv-kernel.git
$ cd mv-kernel

1.2. Build

Download the kernel source code and build the kernel:

$ make all

Finally, you can find the kernel image in the images directory.

2. All Makefile targets

  • Only download and extract the kernel source code.
$ make download
  • Configure the kernel.
$ make config
  • Build the kernel.
$ make build
  • Download and extract the kernel source code, configure it, and build it. Runs all of the above targets (download + config + build).
$ make all
  • Clean the kernel source tree so it can be reconfigured and rebuilt (soft clean).
$ make clean
  • Remove all build files, the kernel directory, and the kernel images (hard clean).
$ make clean-all
  • Help.
$ make help

meta-econ - openembedded/yocto layer

meta-econ, OpenEmbedded/Yocto layer for minimal systemd-nspawn containers. (E-Con Linux Distribution)

Github repo

e-Con Linux Distribution

e-Con Linux is an embedded Linux distribution optimized for systemd-nspawn containers. It is built on OpenEmbedded and focuses on minimal, slim container images.

There are two core images:

  • e-Con Tiny Core Image
  • e-Con Core Image

e-Con Tiny Core Image

e-Con Tiny Core Image is truly tiny. It is built with musl and BusyBox, and it has been tested to work with systemd-nspawn. (Recommended)

e-Con Tiny Core Image packages:

base-files
busybox
busybox-inittab
init-ifupdown
libattr1
musl
netbase
os-release
packagegroup-core-boot
shadow-base
shadow-securetty
util-linux-sulogin

Compressed rootfs size: ~660 KB :) Uncompressed rootfs size: ~1.2 MB.

e-Con Core Image

e-Con Core Image is larger than e-Con Tiny Core. It is built with glibc and systemd, because systemd-nspawn's native tooling fully supports only systemd-based guests.

e-Con Core Image packages:

base-files
busybox
dbus-1
dbus-common
dbus-tools
kmod
libacl1
libattr1
libblkid1
libc6
libcap
libcrypt2
libdbus-1-3
libexpat1
libkmod2
liblzma5
libmount1
libnss-myhostname2
libsystemd0
libz1
netbase
os-release
packagegroup-core-boot
shadow-base
shadow-securetty
systemd
systemd-compat-units
systemd-serialgetty
systemd-udev-rules
systemd-vconsole-setup
udev
util-linux-agetty
util-linux-fsck
util-linux-mount
util-linux-sulogin
util-linux-umount
volatile-binds

Even so, its compressed rootfs size is ~4.9 MB (uncompressed: ~17 MB).

nspctl for systemd-nspawn containers

nspctl, a management tool for systemd-nspawn containers.

Github repo

Why nspctl?

There are several tools for managing systemd-nspawn containers. You can use the native tooling (the 'machinectl' command), but systemd-nspawn, machinectl, and similar tools do not support non-systemd containers, i.e. containers running an init system other than systemd (SysV init, OpenRC, Upstart, BusyBox init, etc.).

nspctl supports containers with any init system. nspctl provides almost all of the features that machinectl provides.

Currently implemented features are:

  • Lists

    • running containers

    • stopped containers

    • all containers

  • Containers info

  • Containers status

  • Start the container

  • Stop the container

  • Reboot the container

  • Remove the container

  • Enable the container (the container to be launched at boot)

  • Disable the container at startup

  • Copy files from the host into a container

  • Log in to the container shell

  • Pull and register containers (raw, tar and docker images)

  • Bootstrap Debian container ("jessie" and newer are supported)

  • Bootstrap Ubuntu container ("xenial" and newer are supported)

  • Bootstrap Arch Linux container

  • Bootstrap Alpine Linux container ("v3.13" and newer are supported)

  • Remove hidden VM or container images

  • Remove all VM and container images

  • Run a new command in a running container (non-interactive shell)

  • Rename a container or VM image

  • Import raw, tar and directory container images

Installation

Requirements:

  • Python >=3.8

Dependencies:

  • systemd-container package

For Debian and Ubuntu:

$ apt-get install systemd-container

For CentOS, Fedora, or Red Hat based distributions:

$ yum install systemd-container

or

$ dnf install systemd-container

Install:

From Github:

  • Clone this repository:

$ git clone https://github.com/mofm/nspctl
  • and install via pip:

$ pip install nspctl/

If you would like to install for your user:

$ pip install --user nspctl/

and you need to add the '~/.local/bin' directory to your PATH:

$ export PATH="$HOME/.local/bin:$PATH"

Usage:

Synopsis:

nspctl [ arguments ] [ options ] [ container name | URL | distribution ] [ ... ]

Commands:

  • list : List currently running (online) containers.

$ nspctl list
  • list-stopped : List stopped containers. (shortopt: 'lss')

$ nspctl list-stopped
$ nspctl lss
  • list-running : List currently running containers. (alias: 'list', shortopt: 'lsr')

$ nspctl list-running
$ nspctl lsr
  • list-all : List all containers. (shortopt: 'lsa')

$ nspctl list-all
$ nspctl lsa
  • info NAME : Show properties of a container.

$ nspctl info ubuntu-20.04
  • start NAME : Start a container as a system service.

$ nspctl start ubuntu-20.04
  • reboot NAME : Reboot a container.

$ nspctl reboot ubuntu-20.04
  • stop NAME : Stop a container, shutting it down cleanly. (alias: 'poweroff')

$ nspctl stop ubuntu-20.04
  • terminate NAME : Immediately terminate a container without shutting it down cleanly.

$ nspctl terminate ubuntu-20.04
  • poweroff NAME : Power off a container, shutting it down cleanly.

$ nspctl poweroff ubuntu-20.04
  • enable NAME : Enable a container as a system service at system boot.

$ nspctl enable ubuntu-20.04
  • disable NAME : Disable a container as a system service at system boot.

$ nspctl disable ubuntu-20.04
  • remove NAME : Remove a container completely.

$ nspctl remove ubuntu-20.04
  • shell NAME : Open an interactive shell session in a container.

$ nspctl shell ubuntu-20.04
  • copy-to NAME SOURCE DESTINATION : Copies files from the host system into a running container.

$ nspctl copy-to ubuntu-20.04 /home/hostuser/magicfile /home/containeruser/
  • clean : Remove hidden VM or container images. This command removes all hidden machine images from /var/lib/machines/.

$ nspctl clean
  • clean-all : Remove all VM or container images. This command removes all machine images from /var/lib/machines/.

$ nspctl clean-all
  • exec NAME 'COMMAND' : Runs a new command in a running container.

$ nspctl exec ubuntu-20.04 'cat /etc/os-release'
  • rename NAME NEWNAME : Rename a container or VM image.

$ nspctl rename ubuntu-20.04 ubuntu-newimage
  • usage : nspctl usage page

$ nspctl usage
  • --help, -h : Display the help page and exit.

$ nspctl --help

Container Operations:

  • pull-tar URL NAME : Downloads a .tar container image from the specified URL. (tar, tar.gz, tar.xz, tar.bz2)

$ nspctl pull-tar https://github.com/mofm/meta-econ/releases/download/v0.3.0-r2/econ-tiny-nginx-20220123-qemux86-64.tar.xz econ-nginx
  • pull-raw URL NAME : Downloads a .raw container image from the specified URL. (qcow2, or compressed as gz, xz, bz2)

$ nspctl pull-raw https://download.fedoraproject.org/pub/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.raw.xz fedora-cloud-base-35
  • import-raw IMAGE NAME : Execute a machinectl import-raw to import a .qcow2 or raw disk image.

$ nspctl import-raw Fedora-Cloud-Base-35-1.2.x86_64.raw.xz fedora-cloud-base-35
  • import-tar IMAGE NAME : Execute a machinectl import-tar to import a .tar container image.

$ nspctl import-tar econ-tiny-nginx-20220123-qemux86-64.tar.xz econ-nginx
  • import-fs DIRECTORY NAME : Execute a machinectl import-fs to import a directory image.

$ nspctl import-fs econ-tiny-nginx-20220123 econ-httpd
  • bootstrap NAME DIST VERSION : Bootstrap a container from the distribution's package servers. Supported distributions are Debian, Ubuntu, Arch Linux and Alpine Linux.

$ nspctl bootstrap alpine-3.15 alpine latest-stable
$ nspctl bootstrap ubuntu-20.04 ubuntu focal
$ nspctl bootstrap debian-latest debian stable
$ nspctl bootstrap arch-test arch

bohca - yet another bookmanager

Introduction

This is a simple bookmark manager for the web. It keeps your favorite books, articles, songs, or whatever else you come across while browsing, and lets you search them.

The project is written with Django 4.1 and Python 3.9.

Github repo

Features

  • Simple and intuitive interface
  • Bootstrap 4 (static files included)
  • Procfile for deployment
  • SQLite database by default
  • Search by title, description, links and tags
  • Separate settings for development and production (environment variables)
  • Add, edit and delete bookmarks, tags and categories
  • Backup bookmarks to CSV
  • Restore bookmarks from CSV
  • Export bookmarks to HTML (Firefox and Chrome compatible - tested)
  • Custom admin interface, with search and filters (view, edit, delete all bookmarks, tags and categories)

Roadmap

  • Backup and restore bookmarks
  • Export bookmarks for browsers
  • Browser extension (Chrome, Firefox, Safari)

Installation

  • Create a virtual environment and activate it
python3 -m venv bohca-venv
source bohca-venv/bin/activate
  • Clone the repository
git clone https://github.com/mofm/bohca.git
  • Install the requirements
pip install -r requirements.txt
  • Freeze requirements
pip freeze > requirements.txt
  • Create environment files
mkdir .env
touch .env/development.env
touch .env/production.env
  • Add the following variables to the 'development.env' file

For the production environment, use the environment variables from the 'production.env' file.

DEBUG="True"
SECRET_KEY=your_secret_key
ALLOWED_HOSTS="127.0.0.1, localhost"
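SECRET_KEY should be a long random string. One quick way to generate one using only the Python standard library (any equivalent random-string generator works just as well):

```shell
# Print a random URL-safe string suitable for use as SECRET_KEY.
python3 -c 'import secrets; print(secrets.token_urlsafe(50))'
```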
  • Run the migrations
python manage.py makemigrations
python manage.py migrate --run-syncdb
  • Create a superuser
python manage.py createsuperuser
  • Collect static files
python manage.py collectstatic
  • You can test out the application by running the following command:
python manage.py runserver 0.0.0.0:8000
  • Open your browser and go to http://localhost:8000

Deployment

  • Install gunicorn
pip install gunicorn
  • Create a gunicorn systemd service file for the production environment
sudo vi  /etc/systemd/system/gunicorn.service
  • Add the following content to the 'gunicorn.service' file
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
Type=notify
User=user
Group=user
EnvironmentFile=/path/to/bohca_venv/bohca/.env/production.env
WorkingDirectory=/path/to/bohca_venv/bohca
ExecStart=/path/to/bohca_venv/bin/gunicorn --workers 3 --bind 0.0.0.0:8000 bohca.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
TimeoutStopSec=5
PrivateTmp=true

[Install]
WantedBy=multi-user.target
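One step worth remembering: after creating or editing a unit file, tell systemd to reload its configuration before starting the service:

```shell
sudo systemctl daemon-reload
```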
  • Start the gunicorn service
sudo systemctl start gunicorn
  • Enable the gunicorn service
sudo systemctl enable gunicorn
  • Check the status of the gunicorn service
sudo systemctl status gunicorn
  • Create an nginx configuration file for the production environment
sudo vi /etc/nginx/sites-available/bohca
  • Add the following content to the bohca file
server {
    listen 80;
    server_name your_domain_name;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /path/to/bohca_venv/bohca/staticfiles;
    }

    location / {
        include proxy_params;
        # proxy_pass gunicorn_socket or http://;
    }
}
  • Enable the bohca configuration file
sudo ln -s /etc/nginx/sites-available/bohca /etc/nginx/sites-enabled
  • Test the nginx configuration file
sudo nginx -t
  • Restart the nginx service
sudo systemctl restart nginx
  • Check the status of the nginx service
sudo systemctl status nginx

License

GNU General Public License v3.0

feeder - RSS feed reader

Introduction

The goal of this project is to create a simple yet powerful reader for RSS feeds. It reads the RSS URLs in the feed.ini file and fetches the content of the feeds.

The project is written with Django 4.1 and Python 3.

Github repo

Features

  • Simple and intuitive interface
  • Bootstrap 4 (static files included)
  • Procfile for deployment
  • SQLite database by default
  • Separate settings for development and production (environment variables)
  • Custom admin interface
  • User authentication and permissions for viewing and editing feeds
  • Users can add favorite feeds to their profile

Installation

  • Create a virtual environment and activate it
python3 -m venv feeder-venv
source feeder-venv/bin/activate
  • Clone the repository
git clone https://github.com/mofm/feeder.git
  • Install dependencies
pip install -r requirements.txt
  • Create environment files
mkdir .env
touch .env/development.env
touch .env/production.env
  • Add environment variables for your development environment

For the production environment, use the environment variables from the 'production.env' file.

SECRET_KEY=your_secret_key (generate a long random string)
DEBUG=False
ALLOWED_HOSTS=127.0.0.1 feeder.example.com
  • Initialize the database schema and migrate
python manage.py makemigrations
python manage.py migrate
python manage.py migrate --run-syncdb
  • Create superuser
python manage.py createsuperuser
  • Collect static files
python manage.py collectstatic
  • You can test out the application by running the following command:
python manage.py runserver 0.0.0.0:8000

Deployment

  • Create gunicorn systemd service files for production environment
sudo vi  /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
Type=notify
User=user
Group=user
EnvironmentFile=/path/to/feeder_venv/feeder/.env/production.env
WorkingDirectory=/path/to/feeder_venv/feeder
ExecStart=/path/to/feeder_venv/bin/gunicorn --workers 3 --bind 0.0.0.0:8000 feeder.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
TimeoutStopSec=5
PrivateTmp=true

[Install]
WantedBy=multi-user.target
  • Create a schedule-jobs systemd service file for the production environment

This service checks the RSS URLs from the feed.ini file and fetches new feeds every two minutes.

sudo vi  /etc/systemd/system/schedule_jobs.service
[Unit]
Description=Feederjobs daemon
After=network.target

[Service]
Type=simple
User=user
Group=user
EnvironmentFile=/path/to/feeder_venv/feeder/.env/production.env
WorkingDirectory=/path/to/feeder_venv/feeder
ExecStart=/path/to/feeder_venv/bin/python manage.py startjobs
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=mixed
TimeoutStopSec=5
PrivateTmp=true

[Install]
WantedBy=multi-user.target
  • Start and enable gunicorn service
sudo systemctl start gunicorn.service
sudo systemctl enable gunicorn.service
  • Log in to the admin interface and create a new category (example: Tech)

  • Add a new RSS URL to the feed.ini file

vi  /path/to/feeder_venv/feeder/feed.ini
[EFF]
feed = https://www.eff.org/rss/updates.xml
link = https://www.eff.org
logo = https://www.eff.org/sites/all/themes/phoenix/images/logo-monogram.svg
category = Tech
  • Start and enable schedule jobs service
sudo systemctl start schedule_jobs.service
sudo systemctl enable schedule_jobs.service
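Since feed.ini is plain INI syntax, you can sanity-check a new entry with Python's standard configparser before restarting the jobs service. A small sketch (the file path and section contents here are illustrative):

```shell
# Write a feed.ini-style file and verify every section defines a feed URL.
cat > /tmp/feed-check.ini <<'EOF'
[EFF]
feed = https://www.eff.org/rss/updates.xml
link = https://www.eff.org
category = Tech
EOF

python3 - <<'PYEOF'
import configparser

cfg = configparser.ConfigParser()
cfg.read('/tmp/feed-check.ini')
for section in cfg.sections():
    # Each section needs at least a feed URL for the fetcher to work with.
    assert cfg[section].get('feed'), f'{section}: missing feed URL'
    print(section, 'OK')
PYEOF
```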

Finally, you can test out the application by browsing to http://localhost:8000/

Notes

  • This is a personal project, so it may be missing features.
  • You can create users and permissions for viewing and editing feeds via the admin interface. (rssfeeder.view_feed)
  • The homepage shows the feeds in the default category. If the category key is missing or has an empty value in a feed.ini section, that feed also appears in the default category.
  • Four categories (Tech, Science, News and Videos) are hardcoded in the application's 'url.py' file, but no category objects are created for them. If you want to use them, create these categories in the admin interface.
  • You can add new categories in the admin interface. You can also add a navbar.html file in the templates directory to customize the application.

gentoo-zsh-completions for oh-my-zsh

A gentoo-zsh-completions plugin for oh-my-zsh, forked from the main gentoo-zsh-completions repository and modified for oh-my-zsh.

Installation:

$ git clone https://github.com/mofm/gentoo-zsh-completions.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/gentoo-zsh-completions

To use it, add 'gentoo-zsh-completions' to the plugins array in your zshrc file:

plugins=(... gentoo-zsh-completions)

Gentoo kernel build script

A script for Gentoo Linux that automates kernel compilation, configuration copying, systemd-boot and GRUB2 configuration updates, and cleanup of old kernels.

Usage:

  • First, clone this repository and change directory:

git clone https://github.com/mofm/kernel-build.git
cd kernel-build
  • Make the script executable:

chmod +x build-kernel.sh
  • If you are using systemd-boot, first find the UUID of your rootfs disk (or LVM partition) and add it to the script:

    UUID="dfc588c0-edd4-8543-a3fe-d7d49bd8f141"

Optional: You can add specific kernel parameters to ROOTFLAGS.
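For example (the values below are purely illustrative; `blkid -s UUID -o value /dev/<your-root-device>` prints the UUID of a filesystem):

```shell
# Illustrative values only -- substitute your own.
UUID="dfc588c0-edd4-8543-a3fe-d7d49bd8f141"
ROOTFLAGS="quiet splash"
```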

  • Now run the script like so:

./build-kernel.sh systemd-boot
  • If you are using grub2:

./build-kernel.sh grub2

Vim Tips: Channel

Vim supports channels for inter-process communication: job channels let Vim talk to other processes. The Vim package also ships a demo channel server (demoserver.py), which you can find under "$VIMRUNTIME/tools/". The "$VIMRUNTIME" path varies by distro:

On Gentoo Linux, mine is "/usr/share/vim/vim82/tools/demoserver.py".

  • First, run the demo server in a terminal.

$ python /usr/share/vim/vim82/tools/demoserver.py
  • And then, run Vim in another terminal.

  • For detailed information, here is the header comment from the script:

    # Server that will accept connections from a Vim channel.
    # Run this server and then in Vim you can open the channel:
    #  :let handle = ch_open('localhost:8765')
    #
    # Then Vim can send requests to the server:
    #  :let response = ch_sendexpr(handle, 'hello!')
    #
    # And you can control Vim by typing a JSON message here, e.g.:
    #   ["ex","echo 'hi there'"]
    #
    # There is no prompt, just type a line and press Enter.
    # To exit cleanly type "quit<Enter>".
    #
    # See ":help channel-demo" in Vim.
    #
    # This requires Python 2.6 or later.

systemd-boot update script

systemd-boot does not automatically regenerate entry configuration files the way update-grub or grub-mkconfig do, so you can use the script below on Gentoo Linux.

#!/bin/bash
#
# This is a simple kernel hook to populate the systemd-boot entries
# whenever kernels are added or removed.
#

# The UUID of your disk.
# Note: if using LVM, this should be the LVM partition.
UUID="CHANGEME"

# Intel microcode file name
MCODE="CHANGEME"

# Any rootflags you wish to set. For example, mine are currently
# "subvol=@ quiet splash intel_pstate=enable".
ROOTFLAGS="CHANGEME"

# Our kernels.
KERNELS=()
FIND="find /boot -maxdepth 1 -name 'vmlinuz-*' -type f -not -name '*.old' -print0 | sort -Vrz"
while IFS= read -r -u3 -d $'\0' LINE; do
        KERNEL=$(basename "${LINE}")
        KERNELS+=("${KERNEL:8}")
done 3< <(eval "${FIND}")

# There has to be at least one kernel.
if [ ${#KERNELS[@]} -lt 1 ]; then
        echo -e "\e[2msystemd-boot\e[0m \e[1;31mNo kernels found.\e[0m"
        exit 1
fi

# Copy the latest kernel files to a consistent place so we can
# keep using the same loader configuration.
LATEST="${KERNELS[@]:0:1}"
echo -e "\e[2msystemd-boot\e[0m \e[1;32m${LATEST}\e[0m"
cat << EOF > /boot/loader/entries/gentoo.conf
title   Gentoo Linux
linux   /vmlinuz-${LATEST}
initrd  /${MCODE}
initrd  /initramfs-${LATEST}.img
options root=UUID=${UUID} rw ${ROOTFLAGS}
EOF

# Copy any legacy kernels over too, but maintain their version-
# based names to avoid collisions.
if [ ${#KERNELS[@]} -gt 1 ]; then
        LEGACY=("${KERNELS[@]:1}")
        for VERSION in "${LEGACY[@]}"; do
            echo -e "\e[2msystemd-boot\e[0m \e[1;32m${VERSION}\e[0m"
            cat << EOF > /boot/loader/entries/gentoo-${VERSION}.conf
title   Gentoo Linux ${VERSION}
linux   /vmlinuz-${VERSION}
initrd  /${MCODE}
initrd  /initramfs-${VERSION}.img
options root=UUID=${UUID} rw ${ROOTFLAGS}
EOF
        done
fi

printf "\n"
printf "Updating the systemd-boot loader \n"
bootctl update

# Success!
echo -e "\e[2m---\e[0m"
exit 0

This script was forked from here.
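The kernel-discovery pipeline at the top of the script (find piped through a reverse version sort) can be exercised on its own against dummy files, which is handy when adjusting the pattern. This sketch drops the eval/FIND indirection the script uses but is otherwise the same logic (bash is assumed, for arrays and process substitution):

```shell
# Exercise the script's kernel-discovery logic against dummy kernel files.
DIR=$(mktemp -d)
touch "$DIR"/vmlinuz-5.15.80 "$DIR"/vmlinuz-6.1.2 "$DIR"/vmlinuz-6.1.12 "$DIR"/vmlinuz-6.1.12.old

KERNELS=()
while IFS= read -r -d '' LINE; do
        KERNEL=$(basename "${LINE}")
        KERNELS+=("${KERNEL:8}")        # strip the "vmlinuz-" prefix
done < <(find "$DIR" -maxdepth 1 -name 'vmlinuz-*' -type f -not -name '*.old' -print0 | sort -Vrz)

# -V sorts by version (so 6.1.12 > 6.1.2), -r reverses, -z matches -print0,
# and the .old file is excluded -- KERNELS[0] is the newest kernel.
echo "latest: ${KERNELS[0]}"
rm -rf "$DIR"
```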