Learning Machines: Encoder

For our first assignment we were asked to create an encoder – something that takes a string and reduces the amount of data needed to represent it. From Wikipedia:

For example, consider a screen containing plain black text on a solid white background. There will be many long runs of white pixels in the blank space, and many short runs of black pixels within the text. A hypothetical scan line, with B representing a black pixel and W representing white, might read as follows:

WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
With a run-length encoding (RLE) data compression algorithm applied to the above hypothetical scan line, it can be rendered as follows:

12W1B12W3B24W1B14W

This can be interpreted as a sequence of twelve Ws, one B, twelve Ws, three Bs, etc.
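The encoding described above can be sketched in a few lines of Python. This version uses itertools.groupby to collect the runs, which is not the hand-rolled comparison approach discussed later in the post:

```python
from itertools import groupby

def rle(s):
    """Run-length encode a string: 'WWWB' -> '3W1B'."""
    return "".join(f"{len(list(run))}{ch}" for ch, run in groupby(s))

# The scan line from the Wikipedia example:
line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
print(rle(line))  # 12W1B12W3B24W1B14W
```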


Having played with image manipulation in Python before, I thought I’d try to encode a black and white image. Using the Python Imaging Library (PIL), I’d start by getting the image’s pixel information (a set of lists within a larger list), map the pixels to B or W and put them into a new list, then compare each item in the list to count and write out runs of Bs and Ws.
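A sketch of that first step, assuming Pillow (the maintained fork of PIL) is installed; the path is a placeholder, and the image is assumed to contain only pure black and pure white pixels:

```python
from PIL import Image

def pixels_to_bw(path):
    """Map every pixel of an image to 'B' or 'W', one list per row.
    Assumes the image contains only pure black and pure white pixels."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    rows = []
    for y in range(height):
        rows.append(["B" if img.getpixel((x, y)) == (0, 0, 0) else "W"
                     for x in range(width)])
    return rows
```

Each row of symbols can then be fed to the run-counting step.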

I got to a point where I was getting runs, but my counting logic was slightly off (mostly, when to increase the count). Eve and I got together and compared code, collaborating on an encoder with assistance from Kat Sullivan. Eve’s code and mine had similar comparison and counting logic, but meeting with Kat proved useful because she showed us that rather than comparing each item to the one after it, we could compare it to the one before, since at the end of the list there would otherwise be nothing to compare it to.
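The fix Kat suggested can be sketched like this: starting the loop at the second item and comparing backwards means every item has something to compare against, so the end of the list needs no special case (only the final run has to be flushed after the loop):

```python
def encode_runs(symbols):
    """Count runs by comparing each item to the one before it."""
    if not symbols:
        return []
    runs = []
    count = 1
    for i in range(1, len(symbols)):
        if symbols[i] == symbols[i - 1]:
            count += 1          # same symbol as before: extend the run
        else:
            runs.append((count, symbols[i - 1]))  # run ended: record it
            count = 1
    runs.append((count, symbols[-1]))  # flush the final run
    return runs

print(encode_runs(list("WWWBBW")))  # [(3, 'W'), (2, 'B'), (1, 'W')]
```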

The version Eve and I worked on is here:

Taking what we learned, I went back to my image encoder to improve it. The first problem on my hands was that I was wrong to assume that my black and white image contained ONLY pure black and white pixels – in fact, some pixels were neither. Greyscaling was an option, but it would give me a range of values from 0 to 255, so mapping every distinct value to a letter of the alphabet was out of the question.
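One way around this (an assumption on my part, not necessarily what the final code does) is to greyscale the image and then threshold it: any value below a cutoff counts as black, everything else as white, so near-black and near-white pixels still map cleanly to B and W:

```python
from PIL import Image

def threshold_to_bw(path, cutoff=128):
    """Greyscale an image and map each 0-255 pixel value to 'B' or 'W'
    using a brightness cutoff."""
    img = Image.open(path).convert("L")  # "L" = 8-bit greyscale, 0 = black
    width, height = img.size
    pixels = list(img.getdata())         # flat list of values, row by row
    return ["".join("B" if pixels[y * width + x] < cutoff else "W"
                    for x in range(width))
            for y in range(height)]
```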

I adapted the code we worked on to use a specified image as input.

Code can be found at:

https://github.com/zoebachman/encoder


18. September 2016 by zoe.bachman.itp
Categories: Learning Machines
