
Paper accepted into Imperial College Energy and Performance Colloquium 2012

My submission to the Imperial College Energy and Performance Colloquium 2012 has been accepted. It's just an extended abstract which briefly outlines some ideas for my PhD research.

The paper is:

  • Kelly J, Knottenbelt W. Disaggregating Multi-State Appliances from Smart Meter Data. Imperial College Energy and Performance Colloquium. 29 May - 1 June 2012.  PDF

Abstract:

Smart electricity meters record the aggregate consumption of an entire building.  However, appliance-level information is more useful than aggregate data for a variety of purposes including energy management and load forecasting. Disaggregation aims to decompose an aggregate signal into appliance-by-appliance information.

Existing disaggregation systems tend to perform well for single-state appliances like toasters but perform less well for multi-state appliances like dish washers and tumble driers.

In this paper, we propose an expressive probabilistic graphical modelling framework with two main design aims: 1) to represent and disaggregate multi-state appliances and 2) to use as many features from the smart meter signal as possible to maximise disaggregation performance.
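As a toy illustration of the disaggregation problem (this is emphatically not the probabilistic framework proposed in the paper -- just a brute-force sketch with made-up appliance power states), the snippet below recovers the most plausible appliance state combination for a single aggregate reading:

```python
from itertools import product

# Hypothetical power states (watts) for two appliances:
# a kettle (off/on) and a dish washer (off/heat/wash).
appliances = {
    "kettle": [0, 2000],
    "dish washer": [0, 1800, 100],
}

def disaggregate(aggregate_watts):
    """Return the state combination whose summed power is closest
    to the aggregate reading (exhaustive search; fine at toy sizes,
    hopeless for a real house full of multi-state appliances)."""
    names = list(appliances)
    best = min(
        product(*(appliances[n] for n in names)),
        key=lambda states: abs(aggregate_watts - sum(states)),
    )
    return dict(zip(names, best))

print(disaggregate(2100))  # -> {'kettle': 2000, 'dish washer': 100}
```

The combinatorial explosion as appliances are added is exactly why principled probabilistic models are needed rather than exhaustive search.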

A new language for mathematical computing: Julia

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The library, mostly written in Julia itself, also integrates mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, FFTs, and string processing. 

More info: The Julia Language, Why We Created Julia, and A Matlab Programmer's Take on Julia.  Sounds pretty awesome.

Incidentally, the third link includes a quote which pretty much exactly captures my current feelings about Matlab:

The Matlab language is slow, it is crufty, and has many idiosyncracies... I strongly disagree, however, with the opinion, common among some circles, that Matlab is to be dismissed just because it is crufty or "not well designed". It is actually a very productive language that is very well suited to numerical computing and algorithm exploration. Cruftiness and slowness are the price we pay for its convenience and flexibility.

I fundamentally disagree with the last statement though.  Cruftiness and slowness should not be the price we pay for convenience and flexibility.  Matlab could've been designed to be both high-performance and productive.  For example: one source of slowness and cruftiness is that objects are usually passed by value, not by reference (yes, I know MATLAB does copy-on-write... which is great... until you want to write to an object).  I think that defaulting to pass-by-value is simply a design mistake.  Pass by reference wouldn't prevent MATLAB from doing the things it does, and would make it faster.
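For contrast, Python passes objects by reference, so a function can modify a large container in place without any copy being made.  A small sketch (the function name is mine, purely illustrative):

```python
def scale_in_place(values, factor):
    """Multiply every element of the list by factor, in place.
    The caller's list is modified directly -- no copy is made,
    so this stays cheap even for very large containers."""
    for i in range(len(values)):
        values[i] *= factor

readings = [1.0, 2.0, 3.0]
scale_in_place(readings, 10)
print(readings)  # the caller's own list was modified: [10.0, 20.0, 30.0]
```

In MATLAB's default pass-by-value semantics, the equivalent function would (conceptually) receive a copy and the caller's variable would be untouched unless the result were assigned back.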

Awesome stats, machine learning & information theory videos on YouTube

I'm still very much enjoying the Coursera / Stanford Probabilistic Graphical Models course but occasionally I need to turn to another source to help explain the concepts.  I've just re-discovered MathematicalMonk on YouTube.  He has over 200 videos on machine learning, information theory and stats.  The videos I've sampled so far have been excellent.  Very lucid.

Added list of academic writing

Just a very quick note to say I've started a list of my academic "publications".  It's pretty anemic at present.  But hopefully that'll change soon!

Summary of "green" features we've added to our house

Our house is a solid-walled house built around 1905.  Being end-of-terrace, it used to be very cold in winter.  We've gradually insulated it over the past three years.  In terms of thermal performance, the house should now perform roughly on a par with a new build.  The majority of the work has been insulating the walls.  I did the bedrooms, living room and dining room and we used builders to do the bathroom.  In total, the energy-saving measures now installed include:

  • 65-80mm of rigid-foam insulation on all external walls (mostly DIY; some done by builders during other work)
  • at least 270mm of glass-wool insulation in the loft (DIY)
  • insulated the suspended timber floors in the living room and dining room (DIY)
  • high-performance double glazing units fitted into wooden frames for the front of the house (made with a local sash window maker)
  • lots of draught proofing and a focus on airtightness during the DIY refurbishment
  • mechanical ventilation with heat recovery in the bathroom (it works very well)
  • fitted wet underfloor heating in the living room (DIY).  UFH is wonderful!
  • solar thermal (evacuated tube) fitted professionally (would have done it DIY if it weren't for the new regs)
  • light pipes to bring natural light into the kitchen and corridor (installed by builders)
  • home-made 450 litre rain water tank in back garden, with piping running under living room floor to bring rain water to front garden
  • thermostatic radiator valves on all radiators; new condensing boiler with walk-about thermostat (which is great)... plan to install room-by-room digital radiator controls

Overall it has been a lot of work and at times it's felt overwhelming.  But we're pretty much finished with the insulation and there's absolutely no question that the house is considerably easier to heat and more comfortable than it was.

Getting LaTeX and Lyx to use ACM SIG class file

Installing the ACM SIG LaTeX class file on Ubuntu using tex-live2011 and using it in Lyx.

First, download the ACM class file and let LaTeX know about it (modified from Ubuntu wiki):
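A sketch of the sort of commands involved -- the class filename (sig-alternate.cls) and the texmf paths are assumptions; check the ACM site and the Ubuntu wiki for the current details:

```shell
# Create a personal texmf tree, which TeX Live searches automatically
mkdir -p ~/texmf/tex/latex/acm

# Copy the class file you downloaded from the ACM site into it
# (sig-alternate.cls is an assumption -- use whatever file the
# ACM currently distributes)
cp sig-alternate.cls ~/texmf/tex/latex/acm/

# Rebuild the filename database so LaTeX can find the new class
texhash ~/texmf
```

After that, a Lyx document class (or a plain LaTeX \documentclass{sig-alternate}) should pick the file up without further configuration.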

MATLAB notes

Just some random notes about MATLAB.

Summer schools & workshops on smart energy / disaggregation

This is just a stub entry for now... I will flesh it out in coming months.  I aim to list any summer schools, workshops and conferences which are relevant to smart meter disaggregation.

Concrete example of floating point arithmetic behaving in unexpected ways

I've heard lots of people say that it's best to use a floating point number only when you really need to.  During my MSc we learnt how floating point numbers are encoded and did little pencil-and-paper exercises to demonstrate how decimal fractions are converted into surprisingly odd floating point representations.  I've read about computer arithmetic errors causing the failure of a Patriot missile.  But the following little problem that I've just bumped into seems to be a very clean, concrete way to demonstrate that floating point numbers are to be handled with care.   Here's the example: if I subtract 0.8 from 1, the remainder is 0.2, right?  So let's ask Matlab or C++.  Try evaluating the following:

(1 - 0.8) == 0.2

This expression simply subtracts 0.8 from 1 and then asks whether the answer is equal to 0.2, returning a boolean.  Rather surprisingly, it returns false.  Why?  Because 0.2 cannot be precisely represented in binary floating point: its binary expansion is 0.0011, with 0011 recurring forever.  In 32-bit floating point, 0.2 decimal is stored as 3E4CCCCD (hex representation).  Now if we convert that back from binary floating point to decimal, we get: 3E4CCCCD = 2.0000000298023223876953125E-1.  (The 64-bit doubles used by Matlab and C++ here have the same problem, just at higher precision.  You can learn more about floating point arithmetic on WikiPedia and by tinkering with this nifty floating point converter applet.)  The bottom line is: if the quantity you're trying to represent can easily be represented using integers, then it's probably best to do so.  E.g. if you're trying to represent monetary values in C++, and you know you'll only be interested in values of a specific precision (like 0.1 pence), then you could build a simple Money class which internally represents money as integers.
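A minimal sketch of that Money idea in Python (the class and field names are mine, purely illustrative), storing everything as an integer count of tenths of a pence so that arithmetic and comparison stay exact:

```python
class Money:
    """A monetary amount stored as an integer number of tenths
    of a pence, so addition, subtraction and equality are exact."""

    def __init__(self, tenths_of_pence):
        self.tenths = int(tenths_of_pence)

    def __add__(self, other):
        return Money(self.tenths + other.tenths)

    def __sub__(self, other):
        return Money(self.tenths - other.tenths)

    def __eq__(self, other):
        return self.tenths == other.tenths

    def __repr__(self):
        return f"Money({self.tenths} tenths of a pence)"

# 1 pound minus 80 pence is exactly 20 pence -- no rounding surprises
pound = Money(1000)
eighty_pence = Money(800)
print(pound - eighty_pence == Money(200))  # True
```

Because every operation is plain integer arithmetic, the `(1 - 0.8) == 0.2` surprise above simply cannot happen.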

There's lots of good discussion of (and links about) the limitations of floating point here: http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

Update 18/6/2012

I've just learnt that Python's decimal module copes with this correctly.  Plain floats show exactly the same problem:

>>> 1 - 0.8
0.19999999999999996
>>> (1 - 0.8) == 0.2
False

But decimal arithmetic is exact:

>>> from decimal import Decimal
>>> Decimal('1') - Decimal('0.8')
Decimal('0.2')
>>> Decimal('1') - Decimal('0.8') == Decimal('0.2')
True

Update 21/11/2013

This is a good explanation of the "leakiness" of FP: John D. Cook: Floating point numbers are a leaky abstraction.

Stanford's free online Probabilistic Graphical Models course

Just a very quick note to say I'm a week into Stanford's free online Probabilistic Graphical Models course.  It's really, really good and I'm learning loads (although it does require a fair amount of work).  The online course covers the same content as Stanford's postgraduate PGM course (it's not watered down like Stanford's free online Machine Learning course) and has interesting programming assignments.  Very juicy stuff, and it should substantially improve my ability to refine and implement some of my hand-wavy ideas.

This is the first online course I've taken and I'm very impressed.  It seems to be a near-perfect mix of the best bits of "real" lectures and the best bits of studying alone with a text book: it's engaging and "human" like a lecture, but you also have the option to pause and rewind (as when reading a text book) to think things through.
