September 19 Meeting Notes

Meeting report:  Attendance was a little light with only 9 present, but we had a good meeting.
 
Clint started off with his newly found favorite version of Linux, SharkLinux OS, which is built on Ubuntu 16.04 LTS and is good for updates until 2021!  Besides being a great desktop distribution with the ability to install anything in the Ubuntu world, it is actually designed as your personal cloud environment, complete with Qemu/KVM virtualization plus Vagrant for VMs and provisioning, "cloud environments" and containers such as Docker, Kubernetes, and LXC, along with a number of management tools including Cockpit and Guacamole.  It runs a very efficient MATE desktop which is very fast with its classic menu structure, panel, and desktop, and it even supports Clint's touch-screen laptop, a P400.
 
Clint got involved with SharkLinux when it was first listed on DistroWatch and made the "feature" of the week: https://www.distrowatch.com/weekly.php?issue=20170619 (the third week of June).  On first use, SharkLinux had a number of issues, which he communicated to the developer, and they have since been resolved into a solid system.  The main website for SharkLinux is http://www.sharklinuxos.org/ where you will find quite a bit about the features and downloads, along with a "bug" reporting feature.  However, one thing that was missing was documentation on how to use SharkLinux, so he developed his own documentation, which is included here and was gone over at the meeting.  The developer has since included some of this on the SharkLinux website.
 
Usage Documentation:
 
Cockpit:
A remote manager for GNU/Linux servers
Cockpit is a server manager that makes it easy to administer your GNU/Linux servers via a web browser.  Cockpit started as a Red Hat project and supports management of systemd as well as other services.  Access is via https://localhost:9090, but you can also add other server instances where Cockpit is installed, including in the cloud, and manage them as well.
Cockpit makes it easy for any sysadmin to perform simple tasks, such as administering storage, inspecting journals and starting and stopping services.
Jumping between the terminal and the web tool is no problem. A service started via Cockpit can be stopped via the terminal. Likewise, if an error occurs in the terminal, it can be seen in the Cockpit journal interface.
Cockpit was featured in the June 2017 issue of Linux Pro Magazine. Current information can be found at http://cockpit-project.org/running.html, http://cockpit-project.org (home page) and https://github.com/cockpit-project/cockpit
Other Docs: http://www.projectatomic.io/docs/
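If the Cockpit web service is ever stopped, it can be brought back up through systemd.  A minimal sketch, assuming the standard socket unit name cockpit.socket used by the upstream packages:

```shell
# Enable and start the socket-activated Cockpit web service
sudo systemctl enable --now cockpit.socket
# Verify it is active and listening (default port 9090)
sudo systemctl status cockpit.socket
```

Then browse to https://localhost:9090 and log in with a local user account.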
 
Ubuntu Cloud:
Ubuntu Cloud is supported via a local instance of qemu/kvm. On first run, SharkLinux creates the local Ubuntu Cloud instance for you.  To install a virtual machine from the Ubuntu Cloud, you run "Ubuntu Cloud" from the Virtual-Machines group on SharkLinux, where you will be prompted for a machine name, username and password.  Then you will be offered a choice of 5 Ubuntu releases, from Precise 12.04 through Zesty 17.04.  It will grab/sync the daily cloud image from the Ubuntu Cloud.  More information on use of the Ubuntu Cloud can be found at https://www.ubuntu.com/cloud.  Once "built" locally in qemu/kvm, you can access the virtual machine via virt-manager (GUI) and log in using the provided username and password.
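The locally built machines can also be managed from the command line with virsh.  A short sketch; the machine name "mycloudvm" stands in for whatever name you entered at the prompt:

```shell
# List all defined machines, running or not
virsh list --all
# Start the cloud image you built (name is hypothetical)
virsh start mycloudvm
# Attach to its text console (Ctrl+] detaches)
virsh console mycloudvm
```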
 
Vagrant:
Vagrant provisioning of virtual machines in the qemu/kvm environment is provided on SharkLinux.  In Virtual-Machines, there are two instances named "SharkLinux" and "Xenial64", both of which use Vagrant to provision the respective virtual machines.  Additional Vagrant support is provided by a Vagrant box converter for the qemu/kvm environment, as well as the raw Vagrant tools for provisioning.  qemu/kvm is the default provider for the machines provisioned using Vagrant. Vagrant documentation starts at https://www.vagrantup.com/intro/getting-started/, and HashiCorp hosts many of the cloud images used by SharkLinux.
One example is the SharkLinux virtual machine that is installed via vagrant.
==> default: Adding box 'SharkLinux/SharkLinux' (v3.2) for provider: libvirt
    default: Downloading: https://vagrantcloud.com/SharkLinux/boxes/SharkLinux/versions/3.2/providers/libvirt.box
Once installed, the VM is created in qemu/kvm and you are automatically logged in via a terminal session using SSH key-based authentication.  Exiting out automatically shuts down the domain in qemu/kvm.
VMs can be started via "vagrant up", which opens the SSH connection as well, or by using virsh or the graphical virt-manager.
Other Vagrant Commands of use:
vagrant box add python-classroom /home/tinslecl/Downloads/FedoraLabs/Fedora-Cloud-Base-Vagrant-26-1.5.x86_64.vagrant-libvirt.box
vagrant init python-classroom
vagrant up python-classroom
vagrant ssh python-classroom
vagrant global-status <= displays vagrant instances with their IDs.
 
SharkCloud:
SharkCloud is a cloud environment provided by SharkLinux, and a graphical "SharkCloud" tool is provided for creating virtual machines from the SharkCloud environment, similar to the Ubuntu Cloud but with more choices of both Ubuntu and Debian environments.  These images are built in the local qemu/kvm instance, even though the message is "Launching KVM Cloud Image". The virtual machines can be started using the SharkCloud menu or via libvirt tools, which include virt-viewer, virsh, and the graphical virt-manager, and accessed using the username/password created during the install. Using the SharkCloud, you can create a "temporary" instance in the SharkCloud.
Stack Technologies:
 
LxDevstack, AWSLocalStack, and Kubernetes are also supported on SharkLinux.  Networking configuration and firewall rules are provided for these technologies as part of the default installation.  Access to the implementation scripts is found in the Virtual-Machines group.
 
LXDock/QLC:
For LXDock, please refer to the official docs for up-to-date information.
qlc (quick linux container)
A wrapper program for LXD/LXDock
 
Install:
LXD - installed by default
LXDock - autobuild script in software section
- or - use command 'qlc-init'
How to use:
qlc <name> <arg>
-start  creates new env
start may be replaced by a path to a shell script for provisioning
Other commands:
-start/up  start a container
-stop/halt stop a container
-shell/bash Access the shell
-destroy/delete bye bye container
 
Other notes:
All containers launched with QLC:
Have a user account mirroring the username of the creator.
Have a shared folder to allow easy "passing of files" to/from host.
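Putting the commands above together, a typical qlc session might look like the following; the container name "devbox" is made up for illustration:

```shell
qlc devbox -start     # create and start a new container environment
qlc devbox -shell     # open a shell inside the container
qlc devbox -stop      # stop the container
qlc devbox -destroy   # bye bye container
```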
 
Docker Container Usage :
 
Tutorial on usage https://www.howtoforge.com/tutorial/docker-installation-and-usage-on-ubuntu-16.04/
See Also:
https://www.howtoforge.com/tutorial/how-to-use-docker-introduction/
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
https://docs.docker.com/engine/userguide/
Sample Session (Getting Started)
docker version
systemctl status docker
docker search ubuntu
docker pull ubuntu
docker images
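Continuing the session above, you can run a container from the pulled image and clean it up afterward.  A sketch; the container name "test-ubuntu" is arbitrary:

```shell
# Start an interactive container from the pulled ubuntu image
docker run -it --name test-ubuntu ubuntu bash
# (type 'exit' inside the container to leave it)
# List all containers, including stopped ones
docker ps -a
# Remove the stopped container
docker rm test-ubuntu
```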
 
Kubernetes - minikube
https://kubernetes.io/docs/getting-started-guides/minikube/
"Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day."  (Kubernetes website description)
 
Sample startup on SharkLinux: (creates a VM named minikube in KVM)
→ minikube start
There is a newer version of minikube available (v0.22.2).  Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.22.2
To disable this notification, run the following:
minikube config set WantUpdateNotification false
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
minikube includes integrated docker support as well.
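Once the cluster is up, a couple of kubectl commands confirm it is working, and the integrated docker support mentioned above can be reached by pointing your docker client at the daemon inside the minikube VM:

```shell
# Check the single-node cluster minikube configured for kubectl
kubectl get nodes
kubectl cluster-info
# Point the local docker client at the docker daemon inside the minikube VM
eval $(minikube docker-env)
docker ps
```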
 
Firejail :
Firejail is a SUID program that reduces the risk of security breaches by restricting the running environment of untrusted applications using Linux namespaces and seccomp-bpf. It allows a process and all its descendants to have their own private view of the globally shared kernel resources, such as the network stack, process table, and mount table.
 
Written in C with virtually no dependencies, the software runs on any Linux computer with a 3.x kernel version or newer. The sandbox is lightweight and the overhead is low. There are no complicated configuration files to edit, no open socket connections, no daemons running in the background. All security features are implemented directly in the Linux kernel and are available on any Linux computer.
 
Official Docs: https://firejail.wordpress.com
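Basic usage is simply prefixing the command you want sandboxed; a few common invocations:

```shell
# Run Firefox inside a sandbox (uses the bundled firefox profile)
firejail firefox
# Run a shell with no network access at all
firejail --net=none bash
# List the sandboxes currently running
firejail --list
```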
 
LinuxBrew:
Linuxbrew is a fork of Homebrew, the macOS package manager, for Linux.
It installs into your home directory, so it does not require root access or sudo.
It can install software not packaged by the native distribution, as well as
up-to-date versions of software when the native distribution's packages are old.
You can use the same package manager to manage both your Mac and Linux machines.
For usage instructions, refer to the official docs.
Docs: http://linuxbrew.sh
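A quick smoke test after installation, assuming the default home-directory prefix; "hello" (GNU Hello) is a tiny formula commonly used for this:

```shell
# Install a small test package into the home-directory prefix, no sudo needed
brew install hello
# Run it from the brew-managed prefix
hello
```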
 
Netdata:
netdata is scalable, distributed real-time performance and health monitoring:
Everything netdata does is per-second, so the dashboards presented are just a second behind reality, much like the console tools. Of course, when netdata is installed on weak IoT devices, this frequency can be lowered to control the CPU utilization of the device.
netdata is adaptive. It adapts its internal structures to the system it runs, so that the repeating task of data collection is performed utilizing the minimum of CPU resources.
The web dashboards are also real-time and interactive. netdata achieves this by splitting the workload between the server and the dashboard client (i.e. your web browser). Each server collects the metrics and maintains a very fast round-robin database in memory while providing basic data manipulation tasks (like data reduction functions), while each web client accessing these metrics takes care of everything for data visualization. The result is:
minimum CPU resources on the servers and fully interactive, real-time web dashboards, with some CPU pressure on the web browser while the dashboard is shown. Docs: https://github.com/firehol/netdata/wiki
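Once the netdata service is running, its dashboard is served on port 19999 by default:

```shell
# Check the service (netdata ships a systemd unit on most installs)
systemctl status netdata
# Open the local dashboard (default port 19999)
xdg-open http://localhost:19999
```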
 
Cloud VM Management tools
=========================
Guacamole:
Guacamole - Shark-specific details: the server is accessible at 127.0.0.1:8080, and the default user/password is admin/password. For general documentation and usage, refer to the official docs: https://guacamole.incubator.apache.org/doc/gug/
 
Kimchi:
Kimchi is an HTML5 based management tool for KVM. It is designed to make it as easy as possible to get started with KVM and create your first guest.
Kimchi manages KVM guests through libvirt. The management interface is accessed over the web using a browser that supports HTML5.
Official Docs: https://github.com/kimchi-project/kimchi/tree/master/docs
 
Additional information shared at the meeting:
 
Clint's customizations:
 
apt-get install wine                      # Windows application compatibility layer
apt install libdvdnav4 libdvdread4 gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly libdvd-pkg   # DVD playback support
dpkg-reconfigure libdvd-pkg               # build and install libdvdcss for encrypted DVDs
dpkg --get-selections |grep lxc           # list installed LXC-related packages
apt-get install openssh-server            # SSH server
dpkg-reconfigure openssh-server           # regenerate host keys/configuration
systemctl status sshd                     # verify the SSH service is running
iptables -I INPUT 4 -p tcp --dport 22 -j ACCEPT   # open SSH, inserted as rule 4
apt-get install iptables-persistent       # save firewall rules across reboots
iptables -nvL --line-numbers              # list firewall rules with line numbers
apt-get install k3b                       # CD/DVD burning
apt-get install cups                      # printing
 
=====================================================================================
http://www.sharklinuxos.org/downloads
https://sourceforge.net/projects/sharklinux/files/Live-CD/
https://atlas.hashicorp.com/bento/boxes/ubuntu-16.04
=====================================================================================
 
Clint also demonstrated the ease with which additional virtual machines (VMs) could be provisioned and used, both as stand-alone VMs and as "cloud derived" vagrant "boxes" and images.
 
While Clint's system was an i7 with 8 GB of RAM, he also tested this on an old 2005 dual-core machine with 4 GB of RAM and on a more current Dell Precision i5 with 4 GB of RAM, with good success and performance.

Next on the agenda was the Fedora 26 Python Classroom, which Clint had installed as a virtual machine, in both the full desktop version and the command-line-based vagrant box.  The downloads for these install images can be found at https://labs.fedoraproject.org/python-classroom/download/index.html.
 
More information on the Fedora Python Classroom can be found at
https://fedoramagazine.org/introducing-python-classroom-lab/ and
https://labs.fedoraproject.org/en/python-classroom/ 
https://fedoraproject.org/wiki/Changes/PythonClassroomLab
 
Additionally, this tidbit on installing a vagrant box:
How to add a downloaded .box file to Vagrant
https://stackoverflow.com/questions/22065698/how-to-add-a-downloaded-box-file-to-vagrant
================================================
→ vagrant box add python-classroom /home/tinslecl/Downloads/FedoraLabs/Fedora-Cloud-Base-Vagrant-26-1.5.x86_64.vagrant-libvirt.box
==> box: Successfully added box 'python-classroom' (v0) for 'libvirt'!
→ ~/Downloads/FedoraLabs
→ vagrant init python-classroom
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
→ vagrant up --provider libvirt
→ vagrant ssh
================================================
Training resources discussed:
https://docs.python.org/3/library/idle.html
Udemy - low cost courses
https://www.udemy.com
Codecademy - Free Training
https://www.codecademy.com/
Red Hat Free RHEL Training
www.edx.org partnership
https://www.edx.org/course/fundamentals-red-hat-enterprise-linux-red-hat-rh066x
 
Pi-Hole
Pi-hole®: A black hole for Internet advertisements
https://pi-hole.net/
Installation script: curl -sSL https://install.pi-hole.net | bash
(You can actually download the install script first and examine it)
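As noted above, the safer pattern is to download the installer, read it, and only then run it:

```shell
# Fetch the installer without running it
curl -sSL https://install.pi-hole.net -o basic-install.sh
# Review what it will do
less basic-install.sh
# Run it once satisfied
sudo bash basic-install.sh
```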
 
Pi-Hole actually serves as a DNS server for your entire home network, blocking all DNS requests for known advertising sites, over 117,000 of them.  Clint connected to his home network and showed where Pi-Hole had blocked over 1000 DNS queries in the last 24 hours, significantly reducing the network traffic generated by these advertising sites.  He has it set up as the primary DNS server for his home router so that all the devices on his network use it for DNS queries.  It is designed to run on a Pi single-board computer with 512 MB of RAM and can run on the Pi 1 as well as the latest Pi 3.  Clint is running his on a Pi 3 because he wanted to optimize performance, including installing the "Raspbian Lite" minimal version.  The result is that he does see faster network performance, and the Pi 3 is literally "coasting" with a very low load average of .02 (current uptime 16 days) while the CPU temperature hangs at 46 degrees C.  The power supply is an older minimal 2 amp Pi supply and runs cool to the touch.  His Pi 3 sits in an enclosed metal case (no fan) down on the floor and connects to his network via a single cat5 cable through a network switch.
The power supply discussion was interesting in a couple of ways: 1) all 2 amp power supplies are not created equal (some do not actually supply 2 amps), and 2) wireless/Bluetooth connectivity draws the most current, to where even a 2.5 amp supply might not be sufficient.  One person said he was using a 3 amp supply and still had problems at times with "everything working".  Powered USB hubs should be used if you have a lot of USB peripherals or even self-powered devices using USB connectivity.
 
Clint's home router is one of the latest ASUS models, an RT-AC87U AC2400, which is one of the fastest Wi-Fi routers available. It supports some interesting technologies, including separate radio systems for the 2.4 GHz (b/g/n) and 5 GHz (a/n/ac) bands, plus hardware NAT and lots of configuration options including load balancing between ISP providers.  It recently replaced his Netgear R-7000 Nighthawk AC1900 router, which is a great whole-house router in itself.  While Clint was logged into his home network, he gave a quick tour of some of the features of the ASUS router.
 
Another part of the Pi conversation was about other single-board computers, such as the Orange Pi and other "me too" boards.  The bottom line here is that the Pi 3 is the most advanced and best-supported SBC available.  Also, the Pi 2B v1.2 has been upgraded to the same quad-core SoC (System on a Chip) as the Pi 3, and while only clocked at 900 MHz, it may be more stable in some applications given its slower speed.
 
Next Meeting Conversation:
Part of the sign-in process was to identify some possible future meeting topics and presenters. The only two topics suggested were a VPN server and setting up Pi-Hole on a home network.  Next month, we are tentatively planning an OpenVPN server demo.  But we need more ideas and more presenters!
 
Our next meeting will be October 17th at 6:30 pm, same location, Ustick Library Bitterbrush Room, which worked very well for us at tonight's meeting.