Controlling central heating with Arduino and Raspberry Pi


When the first versions of the Arduino and Raspberry Pi were released, I bought one of each. Being a gadget man and … well, a man, I played with my new toys for a bit and then left them in a drawer.

Time passed and not much happened with either of them. As I became increasingly unhappy with the central heating controller in my current house, I decided to take the Ino and the Pi out of the drawer and actually build a central heating controller that I would be happy with.

The design phase

The first requirement for the new heating control system was an installation that was as unintrusive as possible.

The dial thermostat that I have at home works as a simple switch: it switches the heating on when the temperature falls below one pre-set value, and switches it off when the temperature rises above another. I decided to use this simplicity in my design. All I needed to do was hook one 230V cable into that thermostat (230V, hell yeah).

With the entry point sorted, I moved on to the controller itself. The Arduino board controls a relay, which simply switches the heating on and off. For the temperature reading I chose a digital thermometer, which offered more stable readings than the analog one included in the Arduino Uno starter kit. I also added an LED to indicate when the heating is on.

Leveraging the serial port used to program the Arduino, I decided that the board would send temperature updates to, and receive setup commands from, the Pi over the USB connection. The Arduino is also powered over the same cable, which removed the need for two separate power adapters for the Arduino and the Raspberry Pi.

The build phase

I tried a few things before arriving at the solution below.

Electronics

The Arduino circuit is very simple.

Circuit diagram of the controller

Three elements connected to the Arduino with a couple of resistors; not much to it.

The LED is not strictly necessary; it is there to indicate when the heating is switched on.

The final prototype doesn’t look very attractive, but the lot is hidden under the furniture and the only elements sticking out are the temperature sensors and a bit of the LED.

Final prototype of my controller

Software

I had to write three separate pieces of software:

Arduino

For the temperature sensor I included two extra libraries: the OneWire protocol library and the DallasTemperature sensor library. I use a 0.5°C approximation of the temperature reading.

Temperature readings are sent over the serial port on every loop. The Arduino also listens on the serial port for a float number; the received number sets the desired room temperature.

To limit the effect of sensor reading fluctuations, the relay state only changes after at least 10 consecutive reads of the same temperature from the sensor.
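
To give a feel for that guard, here is a minimal sketch of the idea. It is written in Kotlin purely for illustration (the actual code is a plain Arduino C++ sketch, available in the repository linked at the end of this post), and the names are made up for the example:

// Illustration only: the relay is allowed to change state only once the same
// half-degree-rounded reading has been observed a number of times in a row.
class TemperatureDebouncer(private val requiredReads: Int = 10) {
    private var lastReading = Double.NaN
    private var identicalReads = 0

    // Round a raw sensor value to the nearest 0.5 degree.
    fun round(raw: Double): Double = Math.round(raw * 2) / 2.0

    // Returns true once the reading is considered stable enough to act on.
    fun accept(raw: Double): Boolean {
        val reading = round(raw)
        if (reading == lastReading) {
            identicalReads++
        } else {
            lastReading = reading
            identicalReads = 1
        }
        return identicalReads >= requiredReads
    }
}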

Raspberry Pi

The software that runs on the Raspberry Pi does the following:

  • It waits for temperature updates from the Arduino and stores them in memory (the latest reading) and in a simple file-based H2 database (historical data),
  • It exposes a REST API for the UI to fetch temperature information and to receive new settings,
  • It schedules temperature changes according to a schedule stored in a JSON file.

I started writing the code in Python, but it ran slowly. I did a simple comparison of execution times for prime number algorithms, and Java 8 was beating Python. On a single-core Raspberry Pi 1, that was a good incentive to change platform. I chose the Kotlin programming language as it was new to me and I wanted to learn it.

As the framework for the event-driven application I chose Vert.x 3, and for serial port communication the somewhat dated RXTX library.
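
To make that a little more concrete, here is a minimal sketch of what the Vert.x 3 REST part might look like in Kotlin (assuming the vertx-web module). This is my rough illustration rather than the actual code from the repository; the endpoint paths and names are invented, and the RXTX serial handling is omitted:

import io.vertx.core.Vertx
import io.vertx.ext.web.Router

// Holds the most recent reading received from the Arduino over the serial port.
object LatestReading {
    @Volatile var celsius: Double = Double.NaN
}

fun main() {
    val vertx = Vertx.vertx()
    val router = Router.router(vertx)

    // The UI polls this endpoint for the latest temperature.
    router.get("/api/temperature").handler { ctx ->
        ctx.response()
            .putHeader("Content-Type", "application/json")
            .end("""{"temperature": ${LatestReading.celsius}}""")
    }

    // The UI sends the desired temperature here; in the real application the value
    // would then be written to the Arduino over the serial connection.
    router.put("/api/target/:value").handler { ctx ->
        val target = ctx.request().getParam("value")?.toDoubleOrNull()
        ctx.response().end("""{"target": $target}""")
    }

    vertx.createHttpServer().requestHandler { router.accept(it) }.listen(8080)
}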

The UI / Phone controller app

The web app I built works on computers as well as mobile devices. I chose React as the UI framework, with Material UI components. The lot is built with Webpack into a small set of HTML/JS files.

The testing phase

The testing phase involved connecting everything together, starting it up, and hoping for no smell of burning electrics and no explosions. In other words, a standard scientific and engineering approach :) .

The setup has now been running continuously for 4 weeks and it hasn’t failed so far.

Summary

This is the first time I have used my skills to build something that interacts with the physical world. It gave me a great feeling of achievement and satisfaction. I know that I could buy something that looks much nicer and probably works better, but I learned a lot in the process.

The Raspberry Pi 3 was released while I was building my design, and I switched to it; you can see it in the pictures. I also want to swap the Arduino Uno prototyping board for an Arduino Nano.

All the code and a more detailed technical description are available for you to grab from my GitHub repository at https://github.com/greggigon/Home-Temperature-Controller .

5 key learnings from the Innovation Spike


Last week I took part in an innovation spike at RBS. This post is my personal retrospective of the spike, its findings and the lessons learned.

Outline:

  • What is an innovation spike?
  • Format of the spike
  • Technologies and tools used
  • Findings and lessons learned

What is an innovation spike?

At RBS, an innovation spike is an event where a small team works in startup mode, trying to deliver a new product or service using new technologies. The reasons for the spike are to explore new technologies, prove technical feasibility and produce a proof of concept for the business.

The prototype we delivered was functional enough to spark discussions around the business model and to look at the product from the user’s perspective. It was also small enough to simply throw away if the business assumptions didn’t hold or the technology wasn’t good enough.

There may be a better name for this kind of event; however, “Innovation spike” sounds great and naming things is hard.

Format of the spike

Our small team had 6 developers, a graphic designer and a product owner. Between us we had all the technical and non-technical skills required to work on every aspect of product development.

We did prepare a little for the spike. Our product owner had a pretty good idea of the business we wanted to validate. We had a small infrastructure prepared with some basic tools, and had read up on the technologies chosen for the spike.

The spike took place at The Bakery London, which is a great place for startup gigs, with fantastic people ready to jump on a problem and help. We ran it for 4 days (Tuesday to Friday), with one day of preparation (Monday) and a demo to the business on the last day.

Technologies and tools used

There were a number of candidate technologies we wanted to try during the spike. We ended up with the following:

  • Microsoft Azure for our infrastructure needs: Ubuntu VMs with Docker for the spike tools and for deploying our applications.
  • GitLab, Let’s Chat, Taiga.IO, Jenkins, Docker, a Docker registry with a front-end, and ApacheDS for LDAP authentication as our development tools stack.
  • The Meteor framework for all front-end and some backend development, and Vert.x 3 with Java for the backend API services.
  • VirtualBox with Vagrant for consistent development VMs.
  • Atom, Visual Studio Code and IntelliJ as development IDEs.

Some of the technologies were familiar to us, some were entirely new. We picked the above technologies as we wanted to give them a try and provide feedback to the rest of the bank.

In the end we didn’t use Taiga.IO at all, as good old-fashioned boards with stickies worked great.

Findings and lessons learned

The findings below represent my thoughts after conversations with others in the team.

1. Location is important

Being away from the office, in a different location, changes how you think. I mean it. Many things add up to this. The lack of a dress code and the non-corporate style of the office made everyone feel relaxed. Food and drinks took away the need to think about those things during the day. Little technical touches made us more productive: fast and easy wireless Internet, and a big flatscreen connected to an Apple TV so that everyone could share their screen when needed.


2. Cross functional, co-located team is important

It was great for all of us to sit around one big table, where we could pair with each other, ask each other questions or simply showcase what we had just done. The business (or product) owner could provide constant feedback and keep us on the right path. The spike team in turn could immediately report back what was technically doable and what would take more time than we had.

The range of different skills across our team meant that we could handle all aspects of delivery lifecycle: design, development, testing and release.

There was simply no time wasted and there were no communication issues.

3. Having a leader is important

Having a leader was important, as we were constantly refocused on the goal and on the next piece of work to handle.

A leader is not there to manage! The role of a leader is to propose an initial structure and adapt it as we go along. An innovation spike is no time or place for a project manager.

In our case, our Mighty Leader prepared everything for the spike: the time, the place and the right people. On the first day of the spike he introduced the initial structure. We had a session with the product owner, then we jumped into a small design phase and started coding. On average, once every 3 hours we stopped and synced up on what we had managed to do.

4. Coming prepared is important

As I mentioned earlier, the innovation spike is about exploring new technologies. It’s a learning exercise. However, we did prepare a bit, and here’s why: having a good starting point saves time.

Not everyone has to prepare for every aspect of the spike. I took time to learn a bit about the Azure cloud and to prepare some basic infrastructure for our team. Others completed introductions and tutorials on the Meteor framework. Someone else created mock-up APIs. We all learned from each other, and someone always had an answer to a problem others were facing. Being prepared is always good.

5. Simple and good enough is GOOD … ENOUGH

Simple and good enough is more than enough for an innovation spike, because it’s easier and faster to build. In the end, the final product is something that can simply be thrown away or taken forward as a proof of concept for a bigger project.

We were constantly reminded not to pay too much attention to unnecessary details, but to focus on the business functionality and on pushing the boundaries of the technology under test.

Summary

The innovation spike was a great learning experience, finding out about new technologies and business opportunities, and it was also great fun to be part of. The keys to its success were the location, the cross-functional team and being prepared. Having a good and focused leader made us very productive, and the good-enough approach let us deliver a successful, working proof of concept in a very short time.

What is happening with My Personal Kanban

Some time has passed since my last blog post. I have been working on quite a few things recently. My Personal Kanban is one of them.

I ran a short survey among some of the My Personal Kanban users, asking them what they would like to see in future releases. The most requested features include:

  • seamless synchronisation with Cloud across many devices
  • mobile version of the application
  • tagging of cards that makes them searchable
  • Master Board with cards across all Kanbans

At the moment I’m working hard on the development of Cloud synchronisation. It will replace the rather unfriendly process of manually saving to and loading from the Cloud. I’m buried in rather complex algorithms for resolving conflicts between Kanbans.

I’m also re-architecting and modularising the client-side services so they can be reused in more than one version (the same services will be used in the web and mobile versions).

One of the major changes involves the delivery and deployment mechanism. The application will no longer be available as a downloadable ZIP file, but as a static website that works offline, a Chrome App, a Firefox extension, and a mobile HTML5 application for both iOS and Android.

My Personal Kanban is also getting a major UI redesign, with help from my friend Mike, a great designer.

I am still juggling a 9-to-5 day job, preparing to present at a conference and attempting to finish “Principles of Reactive Programming” on Coursera. That is the main reason why the new release of My Personal Kanban is taking longer.

Path to work

The road is long but there is light at the end of the tunnel. Stay tuned.

New My Personal Kanban 0.8.0 released – upgrades and new features

I’ve released a new version of My Personal Kanban, a browser-based Kanban board.

There are new features as well as simple upgrades to libraries, etc. I also removed the Bootstrap UI library.

New features in the latest 0.8.0 release include:

  • Per-column limits: the ability to set a limit on the number of cards in a column. Once the limit is reached, it’s impossible to add or drag more cards into that column.
  • No restrictions on the number of columns. Previously you could only create Kanbans with 3, 4, 6 or 8 columns; now it is possible to create a Kanban board with any number of columns between 2 and 10.
  • It is also possible to add and remove columns via a column settings menu.

Future plans for My Personal Kanban include auto-sync with the cloud and some major changes to the cloud sync protocol. I need to make those changes before I start developing a mobile version of MPK.

I hope the new features come in handy and that you will keep using MPK as your personal Kanban board.

My Personal Kanban version 0.7.0 released

I’ve released a new version of My Personal Kanban. My Personal Kanban is a very simple in-browser Kanban board application. It is designed to work with no Internet connection, persisting content in a modern browser’s data store. MPK can also store your Kanban encrypted in the Cloud, with full data privacy.

The new features follow on closely from previously delivered functionality, extending it with specific requests raised by some MPK users on GitHub.

New features in the latest 0.7.0 release include:
• Importing a previously exported Kanban from a text (JSON) file. It’s a follow-up to the export functionality from the previous version.
• Changing the colour of a column. This functionality comes with a new column settings button.
• Selecting an existing Kanban as a template for a new Kanban. If there is a specific structure, with column names and colours that you like, you can reuse that setup when creating a new Kanban.
• Each Kanban has a unique URL in the browser address bar, which makes it possible to open or bookmark a specific Kanban (this change forced me to introduce the Angular.js router, a bit of info for devs).

As well as the new features, the latest 0.7.0 release also includes:
• Updates of libraries to their latest versions
• Bug fixes

As I get closer to releasing version 1.0.0, My Personal Kanban is getting feature complete. Some of the new functionality that will come before the final release includes:
• Pomodoro timer
• Blocked section in the columns
• Import/Export to CSV file

I’m also planning a Mobile version to follow on both iOS and Android platforms.

I would love to hear from you if you are using My Personal Kanban: how you use it and what functionality is missing.

Greg

My approach to JSONP limitations

Why JSONP?

During the development of My Personal Kanban I stumbled across an interesting problem. One of the features I was developing was the ability to upload a Kanban to the Google Cloud (a Google App Engine application).

My Personal Kanban is designed to work off the local file system, without the need for an Internet connection. This means that trying to send anything out to the Interweb is going to hit modern browser security settings.

The browser security feature in question prevents web requests from being made to a site on a different domain. If you have opened a web application from the local file system, it also stops requests to anything other than the same file. This behaviour is the default in every modern web browser.

Fortunately, there is a special type of request the browser will allow, and that is JSONP. It is a GET request with a callback parameter; the callback is the name of a function that the browser will call when it receives a successful response from the web.

HTTP GET request limits

My Personal Kanban is written in JavaScript with Angular; the server side on Google App Engine is a very simple servlet written in Groovy.

When I finished my first implementation and started testing it locally with the Google App Engine SDK, it all looked good. However, the upload stopped working when I tested against a real GAE deployment.

Quick research confirmed that different web server implementations may have different limits on the maximum length of HTTP GET parameters. Those GET parameters are what carry the JSONP request data to the web server. Google App Engine has a different limit than the GAE SDK (which uses Jetty).

I also discovered that the upload worked in one browser but not in another. As it turns out, browsers have their own limits on the length of an HTTP GET request; it is literally the length of the URL you can put in the browser’s address box.

My Personal Kanban sends a fair bit of data. It’s not megabytes, but it is still too much for HTTP GET parameters.

Choices of workaround

I thought of writing My Personal Kanban as a Chrome extension. It would let me overcome the JSONP limitations, but it would bind my application to Chrome only, so I ditched the idea.

Instead, I decided to chop the data into small chunks and send them to the server. To minimise transfer errors I invented this client-to-server protocol:

  1. All calls during the transmission to the server are made in order, so the data can be assembled correctly.
  2. First, a handshake is made announcing the beginning of a transmission for a specific user, together with the number of data chunks that will be sent.
  3. Each chunk of data is sent only if the previous chunk was sent successfully.
  4. Finally, an MD5 hash of the data is sent so the server can verify that what it received is correct.

On the server side I decided to store a user’s chunks in the session, which in the case of Google App Engine is stored in Memcache backed by the Datastore. This is what happens on the server (a rough sketch follows the list):

  1. The server receives a handshake and creates a new array for the data chunks (or removes the previous one if an earlier transfer was incomplete or unsuccessful).
  2. The server receives the data chunks and places them in the chunk array stored in the session.
  3. When the server receives the Kanban hash, it concatenates the array in order into a string and validates it against the received hash. If the hash matches, it stores the Kanban in the Datastore.
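
To illustrate that assembly and validation step, here is a rough sketch written in Kotlin purely for illustration (the real server is a small Groovy servlet on Google App Engine, and the class and method names below are invented):

import java.security.MessageDigest

// Illustration only: keeps the chunks for one user (in the real application this
// would live in the session) and validates the assembled payload.
class ChunkAssembler(expectedChunks: Int) {
    private val chunks = arrayOfNulls<String>(expectedChunks)

    // Chunks arrive with a 1-based index, as sent by the client.
    fun addChunk(index: Int, data: String) {
        chunks[index - 1] = data
    }

    // Concatenates the chunks in order and compares the MD5 hex digest with the
    // hash sent by the client. Returns the assembled Kanban only if it matches.
    fun assembleIfValid(expectedMd5: String): String? {
        if (chunks.any { it == null }) return null // transfer incomplete
        val payload = chunks.joinToString("")
        val digest = MessageDigest.getInstance("MD5")
            .digest(payload.toByteArray(Charsets.UTF_8))
        val hex = digest.joinToString("") { "%02x".format(it.toInt() and 0xff) }
        return if (hex.equals(expectedMd5, ignoreCase = true)) payload else null
    }
}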

Details and code samples

The uploaded Kanban is encrypted with the user’s key and kept encrypted in the Cloud to ensure data privacy. I’ve chosen Rabbit as the encryption algorithm. As a side effect of the encryption, the transmitted data doesn’t need to be encoded and sanitised.
For the encryption I’m using the fantastic CryptoJS library, which also includes an MD5 hash implementation.


md5Hash : function(stringToHash){
	return CryptoJS.MD5(stringToHash).toString();
},

encrypt: function(stringToEncrypt, encryptionKey){
	var utfEncoded = CryptoJS.enc.Utf8.parse(stringToEncrypt);
	return CryptoJS.Rabbit.encrypt(utfEncoded, encryptionKey).toString();
},

decrypt: function(stringToDecrypt, encryptionKey){
	var notYetUtf8 = CryptoJS.Rabbit.decrypt(stringToDecrypt, encryptionKey);
	return CryptoJS.enc.Utf8.stringify(notYetUtf8);
}

Thanks to the fantastic Angular promise API, it is very easy to implement the ordered transmission of data.

// Encrypt the whole Kanban, then split it into small slices (here 1000 characters
// each) using the app's splitSlice helper, so each one fits in a GET request.
var encryptetKanban = cryptoService.encrypt(kanban, this.settings.encryptionKey);
var kanbanInChunks = splitSlice(encryptetKanban, 1000);

// Handshake first, then chain the chunk uploads so they happen strictly in order.
var promise = sendStart(kanbanInChunks.length);
angular.forEach(kanbanInChunks, function(value, index){
	promise = promise.then(function(){
		return sendChunk(value, index + 1);
	});
});

// Finally ask the server to validate the assembled Kanban against its MD5 hash.
return promise.then(function(){
	return checkKanbanValidity(encryptetKanban);
});

My Personal Kanban is an open source project, available for browsing at https://github.com/greggigon/my-personal-kanban/

Conclusion

The above approach made it possible to upload data to the server from any browser, eliminating the issues related to default browser security settings.
Unfortunately, it comes at the cost of more HTTP requests and custom server-side code.

For a small amount of text data it is good enough for me; perhaps it will be good enough for you. Let me know your thoughts.

My Personal Kanban – use your own local cloud

I’ve finished a new release of My Personal Kanban, 0.5.0, and a version of My Personal Kanban Server. The new Cloud features make it possible to upload and download a Kanban to and from any Cloud server accessible via the web.

Cloud Setup Menu

My Personal Kanban Server accepts uploads and downloads from My Personal Kanban and stores them on disk. You don’t need to generate a key; however, you can use the same one you use with the MPK Cloud.

Details of how to install and run the server can be found here: https://github.com/greggigon/my-personal-kanban-server .

Technology involved in the MPK Server

I decided to learn Clojure and write the server in it. I thought the problem was simple enough to implement while learning a new programming language. I picked Ring to help me; it provides just enough to handle web requests and leaves everything else for me to code.

Clojure is a great language and I hope I will be using it more in the future.