One day, I was studying for one of my exams, and this thought came to my mind – Why do colleges keep the minimum required attendance level so high? The first answer that comes to anyone’s mind is: Because students bunk classes.
Okay, but why do students bunk classes?
The reason is simple: because the lectures are boring. The hard fact is, very few people take up teaching out of dedication. Most are people who, for some reason, could not get a job in their industry and ended up teaching instead.
My college specifies a minimum attendance requirement of 75%, and this is almost the same across major colleges in India. I don’t know what the case is in IITs, NITs, etc. I have a few friends there, but nobody has ever complained about attendance.
It’s semester end again at college and we’ve got loads of journal-writing work. Journal-writing is an indispensable part of every semester in Engineering, at least in my University and possibly every other University out there.
The general trend in my college, quite different from others, is this: a student (usually the topper) is given the format and a sample by the professors. They are then instructed to write it (we do it by hand, unfortunately) and distribute their copy to friends, and the chain goes on. So basically, every student has the same journal, differing to some extent due to laziness and the unavoidable errors that creep in along the copy-chain.
The job of writing journals isn’t so hard that one couldn’t do it using one’s own brain, but it is a very boring task. Hence everyone (that includes me) prefers to be somewhere in the middle of the copy chain, and never at the front. Everyone mindlessly copies things without even thinking about what they are writing in their journals; this means they even copy nonsense. But one thing is for sure: nobody would try (exceptions do exist, and they are of the order of 0.01%) to even understand or alter the code that was originally written.
Since Python 3 is the future, I directly started with Python 3 for my projects. There are some frameworks for Python 3, but I’m not a fan of frameworks. I prefer to glue the components myself and write the application – even if it involves more work and boilerplate code – because of the freedom it gives.
When you use a particular framework, you’re bound by its rules, and modifying their behavior gets quite difficult unless you know the framework you’re using from head to tail, completely in detail. Half knowledge is very bad.
I knew about CherryPy since Python 2, and when I studied how to write applications using it, it became my favorite framework. CherryPy is basically a minimal framework, or more specifically a server which handles the common headaches involved in writing a web application in Python: mapping a request to a Controller/Action, handling HTTP errors, authentication, etc. No, these tasks are not difficult, but they are rather time consuming because of the amount of code involved.
So, why not use a pre-built framework? I’m negating my own statement, eh? Yes. In this case, freedom is not as strict a requirement as it is in the application logic, and CherryPy supports Python 3 – awesomeness.
CherryPy can be deployed in various ways – it has an inbuilt HTTP server, and it can be used as a FastCGI, SCGI or CGI server as well. The point is, HTTP parsing is slow, and doing a slow task in an interpreted language doesn’t appeal to me. I could very well use CherryPy’s HTTP server as the server for the website and stay quiet. That doesn’t work, because we’d be running relatively slower code (compared to C/C++, in which web servers are usually written) even for static resources! This is a big waste of resources.
CherryPy uses the Flup module for its FastCGI/SCGI/CGI implementation and, unfortunately, there isn’t a working release of it for Python 3; its development seems to have stalled. I was able to install the Flup module under Python 3 – it installed successfully, but some of its code hadn’t been ported to Python 3, and running 2to3 fixed that. What about bugs in it? That’s the biggest problem: it doesn’t have a public release! If the author had released a version, I would have just used it.
uWSGI is an application container written in pure C. It has a lot of features, like multiple protocol support, process management, easy configuration and more. You can learn more about it at the official website.
As per Python’s PEP 3333, the WSGI application (a callable) should be named application, should accept two parameters, and should return bytes after calling the callable that comes as the second parameter (usually named start_response).
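To make the contract concrete, here is a trivial WSGI application written without any framework – a minimal sketch, with names following the conventions in the spec:

```python
# A minimal PEP 3333 WSGI application (no framework involved).
# The server calls `application` with the request environ dict and a
# `start_response` callable; the app returns an iterable of bytes.
def application(environ, start_response):
    body = b"Hello from plain WSGI"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Any WSGI server (uWSGI included) knows how to drive a callable of this shape.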
The same specification is used by uWSGI. But the problem here is how to get CherryPy running with uWSGI, since the default method is to spawn an HTTP server if you use cherrypy.quickstart(), or to use the cherryd command. You can spawn a FastCGI/SCGI/CGI server with the cherryd command (which requires the Flup module).
In the uWSGI case, the server processes are handled by uWSGI, so you need not spawn any processes in your application. After a lot of searching around, reading manuals and experimenting I finally found a workaround to get this thing working.
Here’s some simple code which deploys two apps (basically two classes) using CherryPy:

```python
#!/usr/bin/python3
# Script written by NileshGR (http://nileshgr.com)
# This script is under BSD license

import cherrypy

class One:
    @cherrypy.expose
    def index(self):
        r = cherrypy.response
        r.headers['Content-Type'] = 'text/plain'
        return "Hello World"

class Two:
    @cherrypy.expose
    def default(self, *args, **kwargs):
        r = cherrypy.response
        r.headers['Content-Type'] = 'text/plain'
        content = "Positional arguments\n\n"
        for k in args:
            content += k + "\n"
        content += "\nKeyword arguments\n\n"
        for k in kwargs:
            content += k + ": " + kwargs[k] + "\n"
        return content

def application(environ, start_response):
    cherrypy.tree.mount(One(), '/', None)
    cherrypy.tree.mount(Two(), '/par', None)
    return cherrypy.tree(environ, start_response)
```
The first two classes are CherryPy applications. Notice the last part: we’re defining a function named application(environ, start_response), as specified by PEP 3333. In that function, we mount the first application at mount point / and the second application at mount point /par, and finally return cherrypy.tree(environ, start_response), which transfers control to CherryPy. The secret here lies in the fact that cherrypy.tree is a WSGI-compatible application – which is why this works!
Quoting the text from cherrypy.tree doc page (pydoc):
```
cherrypy.tree = class Tree(builtins.object)
 |  A registry of CherryPy applications, mounted at diverse points.
 |
 |  An instance of this class may also be used as a WSGI callable
 |  (WSGI application object), in which case it dispatches to all
 |  mounted apps.
```
Starting the uWSGI server to run our CherryPy application:
```shell
uwsgi --http :8080 --wsgi-file cherrypy_uwsgi.py
```
I passed --http to uWSGI because I don’t have a WSGI-capable HTTP server on the PC where I do all this development and test work. Anyway, if HTTP works with uWSGI, then the other protocols uWSGI implements, like uwsgi and FastCGI, should also work (isn’t that obvious?).
Run uwsgi --help for more information on how to spawn a server using the protocol you need.
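For example, spawning the same application as a FastCGI server behind a front-end web server might look like this – a sketch only; the socket address and process count here are arbitrary choices, not values from this setup:

```shell
# Serve the same WSGI file over FastCGI on a local TCP socket,
# with a master process supervising two workers.
uwsgi --fastcgi-socket 127.0.0.1:9000 --wsgi-file cherrypy_uwsgi.py --master --processes 2
```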
Screenshot of the app running in my browser:
This post is about configuring Trac and Git to work together properly. As you may know, Git is a DVCS (Distributed Version Control System) and Trac is a project management system. By default, Trac supports SVN, but there are plugins for Git and Mercurial. I don’t know whether there are plugins for other systems like Bazaar.
In the default configuration of Trac’s Git plugin, the repository is not cached, and setting repository_sync_per_request to blank doesn’t stop Trac from syncing repositories on each request. This is a big performance disadvantage and can become real trouble if you have lots of repositories with quite a lot of commits.
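For reference, the relevant knobs live in trac.ini. This is a sketch, not a verified configuration – the option names below are the ones the Git plugin documents, but you should check them against your Trac and plugin versions:

```ini
; trac.ini – a sketch; verify option names against your Trac/GitPlugin version
[trac]
repository_type = git
; leave blank so repositories are not re-synced on every request
repository_sync_per_request =

[git]
; cache repository metadata in Trac's database instead of
; walking the Git repository on every request
cached_repository = true
persistent_cache = true
```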
SQLite is probably the world’s simplest Relational Database Management System. Basically, it’s a C library which can be embedded in programs easily. There is no server/client mechanism; the database is a single file. For small workloads, it often makes no sense to use a big RDBMS package like MySQL or PostgreSQL, unless of course you need the special features they provide.
So, I came across a situation where I needed to replicate an SQLite database. The problem arose because data redundancy was needed across multiple servers, and the program in question supported SQLite, MySQL and PostgreSQL; but one of the servers had only a PostgreSQL workload, and installing MySQL for the small amount of data the program handled wasn’t sensible. The other two servers had a pure MySQL workload. Also, updates needed to be propagated. So there was the deadlock.
I searched around and found nothing useful, but I remembered there is a cron-like daemon called incrond which can watch files and directories for events using inotify and execute commands when specific events occur. The solution was almost there: all I needed was a script to copy the database file to the other servers whenever data was written to it. I wrote a simple script which would copy the files to the other servers; at first I tried rsync with incremental updates, but it didn’t work, because SQLite doesn’t actually delete data when rows are deleted, as explained in the FAQ on the official website. The data is simply marked for deletion and reused for future inserts.
So I guess you now know why incremental updates won’t work: even if I delete a row, the size of the database is going to remain the same. It should actually work with rsync’s checksum method, but it didn’t for me. Nevertheless, since the data size was pretty small, I used scp to transfer the database.
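You can see this behavior for yourself with a small Python experiment (a sketch; the table and sizes are arbitrary): deleting every row leaves the file size untouched, and only VACUUM actually shrinks it.

```python
# Demonstrates why size-based incremental transfer gains little:
# deleting rows does not shrink an SQLite file; pages are only
# marked free for reuse until a VACUUM rewrites the database.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (data TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("x" * 1000,) for _ in range(1000)])
con.commit()
size_full = os.path.getsize(path)

con.execute("DELETE FROM t")   # rows gone, pages merely marked free
con.commit()
size_after_delete = os.path.getsize(path)

con.execute("VACUUM")          # rewrites the file, reclaiming space
size_after_vacuum = os.path.getsize(path)
con.close()

print(size_full, size_after_delete, size_after_vacuum)
```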
The script is ready; now you need an incrontab entry for the IN_MODIFY event, i.e. to run the script when the file is modified. Here’s a small example:
```
/var/lib/database.sqlite IN_MODIFY /scripts/copy_database.sh
```
That’s it. Whenever the file is modified, it will be transferred to the other servers. Wait, the story doesn’t end there.
I tried modifying the database on the primary server, and it did replicate properly to the other servers, but the new changes weren’t visible on the primary server or on the other servers – the program could still see the deleted row. This is mainly because of caching in memory. The solution was to reload the program (which didn’t cost much in this case). So I added the reload command to the script on the primary server, before syncing the database to the other servers, and then two ssh commands which execute the reload command on the other two servers.
```shell
/etc/init.d/program reload
rsync
rsync
ssh server1 '/etc/init.d/program reload'
ssh server2 '/etc/init.d/program reload'
```
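Assembled, the whole script might look something like this – a sketch only: the hostnames, database path and init script name are placeholders taken from the examples in this post, and scp is used for the transfer as mentioned earlier:

```shell
#!/bin/sh
# copy_database.sh – a sketch; paths and hostnames are assumptions
DB=/var/lib/database.sqlite

# flush the local program's in-memory cache before copying the file
/etc/init.d/program reload

for host in server1 server2; do
    scp "$DB" "$host:$DB"                      # transfer the database file
    ssh "$host" '/etc/init.d/program reload'   # flush the remote cache too
done
```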