LXD, Open vSwitch and VLANs

LXD is a fantastic container virtualization tool that ships by default with Ubuntu. In one of my applications I needed many containers, each within its own VLAN network, so I used Open vSwitch in combination with LXD to achieve this.

There is no inherent facility in LXD to assign VLAN tag numbers to a container’s interface, so it is necessary to use a “fake bridge”. I managed to do it after reading this article by Scott – VLANs with Open vSwitch Fake Bridges.

Let’s say the Open vSwitch bridge is named vm-bridge and we want to add fake bridges for VLANs 20 through 30. Here’s how I did it:

for i in $(seq 20 30); do
    ovs-vsctl add-br vlan$i vm-bridge $i
done

In LXD you can specify the bridge to which it will connect containers, so I created the containers using a similar loop 😀
Then, to bind each container to its fake bridge, this step is needed:

for i in $(seq 20 30); do
    lxc config device set ct$i eth0 parent vlan$i
done
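
Putting both loops together, the whole flow can be sketched as a dry run that only prints the commands (the ubuntu:18.04 image name is an assumption; the ct$i names match the loop above — drop the echo prefixes to run for real):

```shell
# Dry run of the full provisioning flow: one fake bridge, one container
# launch, and one NIC re-parenting per VLAN tag.
# Remove the "echo" prefixes to actually execute the commands.
for i in $(seq 20 30); do
    echo ovs-vsctl add-br vlan$i vm-bridge $i           # fake bridge tagged with VLAN $i
    echo lxc launch ubuntu:18.04 ct$i                   # one container per VLAN (image assumed)
    echo lxc config device set ct$i eth0 parent vlan$i  # attach eth0 to the fake bridge
done
```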

You can no longer count on the reliability of budget smartphones or Android One

The story here is about my bad experience with a Nokia 7 Plus smartphone, which is a certified Android One device.

I have been using Android for the last 7-8 years or so, and like every geek out there I was involved in flashing custom ROMs and tracking XDA forums for new builds. I even had one of the best phones suited for the purpose – the LG Nexus 4.

Read More

A small review of JBL E65BTNC

In 2016, I came across a nice deal on an on-ear headphone – the Motorola Tracks Air. It was selling on Flipkart for ₹2,500, which was a steal considering the original price of ₹8,990. I used it on and off for quite some time, but the on-ear design meant it started hurting my ears when worn for more than 30 minutes, so gradually the usage waned and I stopped using it. I usually do not buy new stuff unless the previous one is completely dead – this is especially true in the case of electronics. With online shopping deals it’s very easy to accumulate unnecessary junk: thanks to impulsive purchases made without much thought, I have a few pieces of electronic junk lying around in pristine condition, not used even a single time.

In order to get rid of the Tracks Air headphones, I tried putting up an ad on OLX India, a well-known marketplace for pre-owned stuff. I have successfully sold quite a lot of things on the platform and even bought a few. But for some reason nobody seemed interested in this headphone at any price, so I gave it to someone I knew, for free. At least something from my junk pile is useful to someone.

Read More

Maintain lead acid batteries regularly

Thursdays are usually maintenance days for the electrical power supply company in my area, so there was nearly a full-day power cut. Luckily, I have a UPS, which sorts out the problem for 8-9 hours. The lead acid battery I use with my UPS is about 3-4 years old, and these being unsealed batteries, they last long – really long – if maintained properly.

In the past I have had one such battery last for a decade before requiring a replacement.

Unsealed lead acid batteries require two important maintenance activities:

  1. Topping up distilled water every 6 months
  2. Applying petroleum jelly / grease on the terminals to prevent corrosion
Read More

Ubuntu 18.04 add e1000e Intel driver to dkms

Note: The compile process appears to be broken for driver version 3.8.4. So the following steps will not work for that version. This post will be updated when a suitable fix is found for the same.

Here’s a quick guide on how to add the Intel e1000e driver to DKMS (Dynamic Kernel Module Support) so that it gets installed / uninstalled automatically with future kernel updates and removals.

Download the driver from Intel website https://downloadcenter.intel.com/download/15817

As of my writing this article, the download gives a tarball named e1000e-<version>.tar.gz.

Extract it to /usr/src:

tar -xzf e1000e-<version>.tar.gz -C /usr/src

Create a dkms.conf in /usr/src/e1000e-<version> with the following contents:

MAKE[0]="make -C src/"
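
For completeness, a typical dkms.conf carries a few identification fields as well – this is a sketch with the version as a placeholder, not the exact file from the original setup:

```
PACKAGE_NAME="e1000e"
PACKAGE_VERSION="<version>"
BUILT_MODULE_NAME[0]="e1000e"
BUILT_MODULE_LOCATION[0]="src/"
DEST_MODULE_LOCATION[0]="/updates"
MAKE[0]="make -C src/"
CLEAN="make -C src/ clean"
AUTOINSTALL="yes"
```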

Next, we have to tell DKMS that such a module has been added and build it for each of the kernels we have on the system:

dkms add -m e1000e/<version>
for k in /boot/vmlinuz*; do
  dkms install -k ${k##*vmlinuz-} e1000e/<version>
done

Finally, reboot the system and the new module should be live.

Date range in a MariaDB query using the Sequence Engine

One of my applications involved generating a date-wise report of items created on each day, and we needed zeroes against the counts for dates which had no entries.

The user selects a date range and the application must generate this report. This would not have been so easy had I not come across the MariaDB Sequence Storage Engine!

Sequences have long been available in databases like Oracle, PostgreSQL and the like, but I had absolutely no idea of their existence in MariaDB – I just came across the feature while browsing the MariaDB documentation.

Here’s a sample of my use case:

MariaDB [test]> create table items (id int unsigned primary key auto_increment, date_created datetime not null);
Query OK, 0 rows affected (0.061 sec)

MariaDB [test]> insert into items (date_created) values ('2019-01-01'), ('2019-01-05'), ('2019-01-06'), ('2019-01-06'), ('2019-01-01'), ('2019-01-10'), ('2019-01-09'), ('2019-01-09'), ('2019-01-09');
Query OK, 9 rows affected (0.032 sec)
Records: 9  Duplicates: 0  Warnings: 0

MariaDB [test]> select * from items;
| id | date_created        |
|  1 | 2019-01-01 00:00:00 |
|  2 | 2019-01-05 00:00:00 |
|  3 | 2019-01-06 00:00:00 |
|  4 | 2019-01-06 00:00:00 |
|  5 | 2019-01-01 00:00:00 |
|  6 | 2019-01-10 00:00:00 |
|  7 | 2019-01-09 00:00:00 |
|  8 | 2019-01-09 00:00:00 |
|  9 | 2019-01-09 00:00:00 |
9 rows in set (0.001 sec)

MariaDB [test]> select date(date_created), count(id) from items group by date(date_created);
| date(date_created) | count(id) |
| 2019-01-01         |         2 |
| 2019-01-05         |         1 |
| 2019-01-06         |         2 |
| 2019-01-09         |         3 |
| 2019-01-10         |         1 |
5 rows in set (0.001 sec)

MariaDB [test]> 

After a couple of attempts with the samples provided in the MariaDB documentation page, I managed to devise a query which provided me exactly what I needed, using SQL UNION:

MariaDB [test]> select dt, max(cnt) from (
    ->     select cast(date_add('2019-01-01', interval seq day) as date) dt, 0 cnt
    ->     from seq_0_to_11
    ->     union
    ->     select cast(date(date_created) as date) dt, count(id) cnt
    ->     from items
    ->     where date(date_created) between '2019-01-01' and '2019-01-11'
    ->     group by date(date_created)
    -> ) t
    -> group by dt order by dt;
| dt         | max(cnt) |
| 2019-01-01 |        2 |
| 2019-01-02 |        0 |
| 2019-01-03 |        0 |
| 2019-01-04 |        0 |
| 2019-01-05 |        1 |
| 2019-01-06 |        2 |
| 2019-01-07 |        0 |
| 2019-01-08 |        0 |
| 2019-01-09 |        3 |
| 2019-01-10 |        1 |
| 2019-01-11 |        0 |
| 2019-01-12 |        0 |
12 rows in set (0.001 sec)

Yeah, that’s basically filling in zero values for the dates on which there were no entries. Can this be done using a RIGHT JOIN? I tried but couldn’t form a working JOIN condition. If you know, drop a comment!
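
For the record, a join written from the sequence side ought to work – here’s an untested sketch against the same tables (a LEFT JOIN from the generated dates, which is the mirror image of a RIGHT JOIN from items):

```sql
select d.dt, coalesce(c.cnt, 0) as cnt
from (
    -- the generated calendar, one row per day in the range
    select cast(date_add('2019-01-01', interval seq day) as date) as dt
    from seq_0_to_11
) d
left join (
    -- per-day counts, only for days that have entries
    select date(date_created) as dt, count(id) as cnt
    from items
    where date(date_created) between '2019-01-01' and '2019-01-11'
    group by date(date_created)
) c on d.dt = c.dt
order by d.dt;
```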

Multi-WAN DNS in pfSense

Update: I later figured out there are many other places where pfSense restarts Unbound, so this is simply not worth the effort. I reversed the changes and moved Unbound to another box, keeping just the DNS forwarder on pfSense – which the Unbound server uses as its upstream.

Having multiple broadband connections at home, I have a pfSense box which takes care of load balancing and firewalling. pfSense is pretty good at almost everything, except one thing that was annoying me a lot: it restarted the DNS Resolver (Unbound) every time either of my WAN connections restarted (one of my ISPs restarts the connection periodically). Moreover, traffic originating from the box itself cannot be load balanced across multiple connections due to a limitation in FreeBSD’s implementation of pf itself – it is unable to set the correct source address.

It’s quite annoying that even when you use the forwarding mode of Unbound, your DNS still goes through a single WAN interface. Moreover, Unbound doesn’t seem to do parallel querying across DNS servers: if you have listed multiple DNS servers as forwarders, it tries them one by one as they fail. Suppose the WAN interface carrying your DNS traffic is running at full capacity – a download, or somebody streaming a video – then your browsing becomes slow as well, even though the browsing itself may go through another WAN connection. Notably, for a stable multi-WAN setup in pfSense you have to use forwarding mode: the gateway switching for the box itself doesn’t work reliably in my experience, due to which I’ve had to face “host not found” error messages even when one of the connections was up.

Solution: Use both DNS Forwarder (DNSMasq) and DNS Resolver together. Why not just forwarder? Because Unbound is a better DNS server – in terms of performance and security. And DNSMasq has an excellent feature that’s probably not available in any DNS Forwarder or Resolver – the ability to send parallel queries to the upstream servers. It’s called the all-servers option and what it does is – when it receives a query for domain X, it will send the query to all the servers and return the first response it receives – so you get the response extremely fast.

Basically, run the forwarder on a different port, listening on localhost, and let it do the actual forwarding to your DNS servers instead of Unbound talking to them directly. You may argue that the performance benefits of using Unbound get negated by using the DNS Forwarder as its upstream – probably yes, but probably no too, because of the parallel queries. Moreover, DNSMasq here is used exclusively for relaying queries between Unbound and the upstream servers – nothing else, not even caching – so that should remove any bottlenecks there. Personally, I’ve used DNSMasq as the only server in the past and it was pretty fast in terms of user experience – I haven’t compared benchmarks and I’m not interested in doing that either.

So go to Services => DNS Forwarder in pfSense and configure it with the following settings:


Then in the custom options box, put the following:


You may be wondering why I’ve specified a different resolv.conf file here, it’s to prevent DNSMasq from using Unbound as a DNS server. That can be achieved by tweaking the “Disable DNS Forwarder” flag in System => General setup, but that would mean the box’s DNS will become slower. I want to use Unbound for the pfSense box too, but DNSMasq should not use Unbound.
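
As a sketch of what goes in that custom options box, based on the description above (the port number 5353 is my assumption – any free port works, as long as Unbound forwards to it; cache-size=0 reflects the “not even caching” point, and the resolv-file path is a placeholder for whatever custom file you point DNSMasq at):

```
port=5353
all-servers
cache-size=0
resolv-file=<path to custom resolv.conf>
```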

Then in DNS Resolver settings:

Disable Forwarding Mode – otherwise pfSense will put the configured upstream DNS servers into Unbound’s configuration.

In the custom options box (the forward-addr must point at the port DNSMasq listens on; shown here as a placeholder):

  server:
    do-not-query-localhost: no
  forward-zone:
    name: "."
    forward-addr: 127.0.0.1@<dnsmasq port>

Further, to prevent Unbound from restarting whenever your WAN connection is restarted –

  • SSH to your pfSense box
  • Open /etc/rc.newwanip in your favorite editor and go to line 228, which calls the function services_unbound_configure(). Comment out that line.
  • Then open /etc/inc/system.inc, go to line 175 which opens /etc/resolv.conf for writing and replace it with:
$fd = fopen("/var/run/resolv.conf", "w");

Then in System => General setup, tick the box which says “Do not use the DNS Forwarder/DNS Resolver as a DNS server for the firewall”. Now pfSense will write the real resolv.conf – based on your configured DNS servers or whatever your ISP supplies (configurable in General Setup) – to /var/run/resolv.conf, and in /etc/resolv.conf just put this one line:

nameserver 127.0.0.1

This ensures that the pfSense box itself uses Unbound as its resolver, and is therefore immune to WAN failures.

So now, the DNS works something like this:

Client => Unbound => DNSMasq => Upstream DNS Server

Here, client can be any client on the LAN or pfSense itself. And because of the all-servers feature of DNSMasq, both WAN connections will get used for DNS traffic and choking of one line will not cripple DNS.

A WAN monitor running on Google AppEngine written in Go language

As I stated in my earlier post, I have two WAN connections and of course, there’s a need to monitor them. The monitoring logic is pretty simple, it will send me a message on Telegram every time there’s a state change – UP or DOWN.

Initially this monitoring logic was built as an OpenWrt hotplug script which triggered on interface UP/DOWN events, as described in this article. But then I got a mini PC box which runs Ubuntu and a pfSense virtual machine. While I could build the same logic by discovering hooks in the pfSense code, it’s too complex – and moreover, it doesn’t really make sense to monitor connections from a device using those same connections!

Perfect – time for a new small project. I was trying to learn the Go language, and what better way to learn a new programming language than solving a real problem? I built my solution using Google AppEngine in Go.

Why AppEngine? Well, yes I could use any random monitoring service out there but I doubt any such service exists which sends alerts on Telegram. Also, AppEngine is included in the Google Cloud Free Tier. So it makes a lot of sense here. My monitoring program runs off Google’s epic infrastructure and I don’t have to pay anything for it!

If you’ve looked at Go examples, it’s pretty easy to spin up a web server. AppEngine makes running your own Go-based app even easier, though with a few restrictions which are documented nicely by Google in their docs. The restrictions are mostly about outgoing connections and file modifications. I don’t need to read/write any files, but I do need to make outgoing connections, for which I used the required AppEngine libraries.

An AppEngine app always consists of a file app.yaml which describes the runtime and the URL endpoints. So here’s mine:

runtime: go
api_version: go1.8

handlers:
  - url: /checkisps
    script: _go_app
    login: admin
  - url: /
    script: _go_app

Now the main code which will handle the requests:

package main

import (
	"fmt"
	"net/http"
	"time"

	"ispinfo" // local package; the import path depends on your project layout

	"google.golang.org/appengine"
)

var isps = [2]ispinfo.ISPInfo{
	{
		Name:       "ISP1",
		IPAddress:  "x.x.x.x",
		PortNumber: 80,
	},
	{
		Name:       "ISP2",
		IPAddress:  "y.y.y.y",
		PortNumber: 443,
	},
}

func main() {
	http.HandleFunc("/", handle)
	http.HandleFunc("/checkisps", checkisps)

	for index := range isps {
		isps[index].State = false
		isps[index].LastCheck = time.Now()
	}

	appengine.Main() // hand control over to the AppEngine runtime
}

func handle(w http.ResponseWriter, r *http.Request) {
	location := time.FixedZone("IST", 19800)
	w.Header().Add("Content-Type", "text/plain")

	for _, isp := range isps {
		fmt.Fprintln(w, "ISP", isp.Name, "is", isp.Status(), ". Last Checked at", isp.LastCheck.In(location).Format(time.RFC822))
	}
}

func checkisps(w http.ResponseWriter, r *http.Request) {
	for idx := range isps {
		oldstate := isps[idx].State
		isps[idx].Check(r) // probe the ISP; updates State and LastCheck

		if oldstate != isps[idx].State {
			defer isps[idx].SendAlert(r, w)
		}
	}
}
I separated the code into two packages to keep it clean, so here’s the ispinfo package:

package ispinfo

import (
	"fmt"
	"net/http"
	"net/url"
	"time"

	"google.golang.org/appengine"
	"google.golang.org/appengine/socket"
	"google.golang.org/appengine/urlfetch"
)

type ISPInfo struct {
	Name       string
	IPAddress  string
	PortNumber uint16
	State      bool
	LastCheck  time.Time
}

var telegram_bot_token = "<<< bot token >>>"
var telegram_chat_id = "<<< chat id >>>"

func (i *ISPInfo) Check(r *http.Request) {
	ctx := appengine.NewContext(r)

	host := fmt.Sprintf("%s:%d", i.IPAddress, i.PortNumber)
	timeout, _ := time.ParseDuration("5s")
	conn, err := socket.DialTimeout(ctx, "tcp", host, timeout)
	i.LastCheck = time.Now()

	if err == nil {
		conn.Close()
		i.State = true
	} else {
		i.State = false
	}
}

func (i *ISPInfo) SendAlert(r *http.Request, w http.ResponseWriter) {
	ctx := appengine.NewContext(r)
	client := urlfetch.Client(ctx)

	message := fmt.Sprintf("%s is %s", i.Name, i.Status())
	params := url.Values{}
	url := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", telegram_bot_token)

	params.Add("chat_id", telegram_chat_id)
	params.Add("text", message)

	response, err := client.PostForm(url, params)

	if err != nil {
		fmt.Fprintln(w, "Error sending message ", err.Error())
		return
	}
	response.Body.Close()
}

func (i *ISPInfo) Status() string {
	if i.State {
		return "UP"
	}

	return "DOWN"
}

Since the connection status needs to be monitored periodically, define a cron job for it in cron.yaml:

cron:
  - description: "check connection status"
    url: /checkisps
    schedule: every 5 mins

Run gcloud app deploy app.yaml cron.yaml in the app directory and the app is ready!

This is a small monitoring service that I managed to build in a couple of hours while learning the Go language and the AppEngine API – it should hardly take an hour for a pro. Also, I didn’t really follow correct packaging principles: the ispinfo package exposes pretty much all its fields. This could have been better.

The code is available in my github repository in case you’re interested in it.

Monitoring your internet connections with OpenWRT and a Telegram Bot

For the past 5 years or so, I had been using a single ISP at home, with mobile data as backup when it went down. But in the last few months the ISP’s service became a bit unreliable – mostly related to the rainy season. Mobile data doesn’t give the constant fiber-like speeds I get on the wire, and it’s very annoying to browse at < 10 Mbps on mobile data when you are used to 100 Mbps.

I decided to get another fiber pipe from a local ISP. One needs to be very unlucky to have both go down at the same time – I hope that never happens. Now the question is how to monitor the two connections. Why do I need monitoring? So that I can inform the ISP when a link goes down. With fail-over happening automatically thanks to OpenWRT’s mwan3 package, I would otherwise never know which ISP I am using at any moment (unless I check the public IP address, of course).

The solution: a custom API and a Telegram bot. For those not aware of Telegram, it is an amazing messaging app just like Whatsapp but with way more features (bots, channels), and it does away with some idiosyncrasies of Whatsapp, such as requiring the phone to always be connected.

A Telegram bot is fairly simple to write, you just have to use their API. Now this bot is just going to send me messages, I am never going to send any to it, so implementing my WAN monitor bot was very easy.

My router is a TP-Link WR740N which has 4 MB of flash – so it is not possible to have curl with SSL support, which the Telegram API requires. I wrote a custom script which can be called over plain HTTP and plays well with the default wget. The script is hosted on a cloud server which can, obviously, do the SSL stuff.

A custom wrapper to Telegram API to send message in PHP:


<?php

$key = '<a random key>';

if ($_REQUEST['key'] != $key) {
    die("Access Denied");
}

$interface = $_REQUEST['interface'];
$status = $_REQUEST['status'];

$interface_map = array(
    'wan1' => 'ISP1',
    'wan2' => 'ISP2'
);

$status_map = array(
    'ifdown' => 'Down',
    'ifup' => 'Up'
);

$message = "TRIGGER: {$interface_map[$interface]} is {$status_map[$status]}";

$ch = curl_init("https://api.telegram.org/bot<bot ID>/sendMessage");

curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array(
    'chat_id' => '<your chat id>',
    'text' => $message
));
curl_exec($ch);
curl_close($ch);
The <your chat id> part needs to be discovered: send a /start command to your bot and use Telegram’s getUpdates method – you will find it in the API’s response JSON. $key is just a security check to prevent external abuse of the script.

And this script is called on interface events by mwan3 (/etc/mwan3.user ):

wget -O /dev/null "http://<server ip address>/wanupdate.php?interface=$INTERFACE&status=$ACTION&key=<your random key>" >/dev/null 2>/dev/null &

Shell script to monitor connections by cron directly from the server:


#!/bin/bash

nc -w 2 -z <ip address> <port number>
isp1_status=$?

nc -w 2 -z <ip address> <port number>
isp2_status=$?

sendmsg() {
    curl https://api.telegram.org/bot<bot id>/sendMessage -d chat_id=<chat id> -d text="$1" &> /dev/null
}

if [[ $isp1_status -ne 0 ]]; then
    sendmsg "MONITOR: ISP1 is Down"
fi

if [[ $isp2_status -ne 0 ]]; then
    sendmsg "MONITOR: ISP2 is Down"
fi

The above script uses netcat to test each link with a TCP connection to a port number which is port forwarded to a server, because I found ping was giving false positives: I couldn’t reproduce it when trying manually, but I used to get DOWN messages even though the connection was working.

One must wonder though, how will the message reach me via Telegram when both ISPs go down at the same time? Well, I leave that job to Android and mobile data – Android switches to mobile data as soon as it finds the WiFi doesn’t have internet access.

Asterisk PJSIP wizard and phone provisioning

So after setting up Asterisk with a working DAHDI configuration for the PBX project, next was configuration for IP phones using PJSIP and provisioning them.

Asterisk has a built-in module called res_phoneprov which handles HTTP-based phone provisioning, but that didn’t work for me – I just couldn’t get it to generate XML configuration for the phones that we had, i.e. the Grandstream GXP1625.

The server on which I had configured the PBX was multi-homed, as in it was part of multiple networks. But there was no reason to run the provisioning service on all interfaces – only on the VLAN to which we were going to connect the phones.

Read More