Sunday, December 10, 2017

Scheduling machine learning jobs to run in sequence

This might save you a few minutes of research time: I sometimes need to set up a number of Keras (or TensorFlow) runs to execute in sequence, overnight or while I am away from work. I don't want the whole queue to stop if any single job fails.

I use Task Spooler, which is available in Ubuntu and other Linux distros and can be installed on macOS using "brew install task-spooler". Note that on Ubuntu the command is tsp, not ts.

You can schedule any shell command to run by prepending "ts". Examples:

cd run1
ts python variational_auto.py
cd ../run2
ts python lstm_text_model.py
ts # get a list of all queued, running, and finished processes
ts -c 3 # get the stdout for process 3
ts -t 3 # get the tail output for process 3
ts -C # clear the list of finished jobs
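
To queue several runs in one pass, a short loop works. This is a minimal sketch: the directory names run1 through run3 and the script name train.py are placeholders for your own layout. Each queued job is an independent process, so one failed job does not stop the rest of the queue:

# queue one training job per run directory; ts runs them in order
for d in run1 run2 run3; do
    ts sh -c "cd $d && python train.py"
done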

This simple setup is not really appropriate for hyperparameter tuning, but it is useful for setting up a series of runs.

Sunday, December 03, 2017

Forget the whole world and focus on your world

Some people dream of “making it big” by starting the next Facebook or Google. Poor people fantasize about becoming very wealthy. I think this focus is misplaced.

I prefer to live in and think about a much smaller world:

  • Being of service to family and friends and enjoying their company 
  • Being of value to coworkers and customers
  • Providing value and entertainment to people who read my books
  • Getting a lot of exercise and eating great food, in small portions
  • Reading great books and watching great movies

Yes, that is enough for me. I leave changing the world for other people.

Thursday, November 23, 2017

Easy Jupyter notebook setup on AWS GPU EC2 with machine learning AMI

The Amazon machine learning AMI (link may change in the future) is set up for CUDA/GPU support and comes with TensorFlow, Keras, MXNet, Caffe, Caffe2, PyTorch, Theano, CNTK, and Torch preinstalled.

I chose the least expensive EC2 instance type with a GPU, g2.2xlarge, and used the One Click Launch option (you will need to specify a pem key file for the AWS region where you are starting the instance) to have an instance running and available in about a minute. This GPU instance costs $0.65/hour, so remember to either stop it (if you want to reuse it later and don't mind paying a small cost for persistent local storage) or terminate it if you don't want to be charged for the 60 GB of SSD storage associated with the EC2 instance.
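
If you have the AWS command line tools configured on your laptop, you can stop and later restart the instance without opening the web console. A minimal sketch (the instance id below is a placeholder; substitute your own):

# stop the instance to pause hourly charges, restart it when needed
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0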

I am very comfortable working in SSH shells using Emacs, screen, etc. When an instance boots up, the Actions -> Connect menu shows you the temporary public address, which you can use to SSH in:

ssh -i "~/.ssh/key-pair.pem" [email protected]

I keep my pem files in ~/.ssh; you might store them in a different place. If you haven't used EC2 instances before and don't already have a pem access file, follow these directions.

Anaconda is installed, so Jupyter is also preinstalled and can be started from any directory on your EC2 instance using:

jupyter notebook

After some printout, you will see a local URI to access the Jupyter notebook that will look something like this:

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=f25c0cdf24b128c0c7ae2ce92bc0583934bc0c1293f83ccf
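
One optional refinement: I like to run the notebook server inside a screen session so that it survives SSH disconnects (a sketch, assuming screen is available on the instance; the --no-browser flag suppresses the attempt to open a browser on the headless server):

# start a named screen session, then launch the notebook inside it
screen -S jupyter
jupyter notebook --no-browser --port=8888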

In another terminal window, start another SSH session, but this time map local port 8888 to port 8888 on the EC2 instance:

ssh -L 8888:127.0.0.1:8888 -i ".ssh/key-pair.pem" ubuntu@EC2-PUBLIC-ADDRESS

Now on your laptop you can attach to the remote Jupyter instance using (your token will be different):
http://localhost:8888/?token=f25c0cdf24b128c0c7ae2ce92bc0583934bc0c1293f83ccf

Alternative to using SSH tunnel:

A nice alternative is to install and use sshuttle (on your laptop; no server-side installation is required). Assuming I have a domain name attached to the sometimes-running EC2 instance, I use the following aliases in my bash configuration file:

alias gpu='ssh -i "~/.ssh/key-pair.pem" ubuntu@MYDOMAIN.COM'
alias tun="sshuttle -r ubuntu@MYDOMAIN.COM 0/0 -e 'ssh -i \".ssh/key-pair.pem\"'"

Note: keeping an Elastic IP address attached to an EC2 instance that is usually not running will cost you about $3.40/month, but I find having a "permanent" IP address assigned to a domain name is convenient.

Goodies in the AWS machine learning AMI:

There are many examples installed for each of these frameworks. I usually use Keras and I was pleased to see the following examples ready to run:
addition_rnn.py
antirectifier.py
babi_memnn.py
babi_rnn.py
cifar10_cnn.py
conv_filter_visualization.py
conv_lstm.py
deep_dream.py
image_ocr.py
imdb_bidirectional_lstm.py
imdb_cnn.py
imdb_cnn_lstm.py
imdb_fasttext.py
imdb_lstm.py
lstm_benchmark.py
lstm_text_generation.py
mnist_acgan.py
mnist_cnn.py
mnist_hierarchical_rnn.py
mnist_irnn.py
mnist_mlp.py
mnist_net2net.py
mnist_siamese_graph.py
mnist_sklearn_wrapper.py
mnist_swwae.py
mnist_tfrecord.py
mnist_transfer_cnn.py
neural_doodle.py
neural_style_transfer.py
pretrained_word_embeddings.py
reuters_mlp.py
reuters_mlp_relu_vs_selu.py
stateful_lstm.py
variational_autoencoder.py
variational_autoencoder_deconv.py
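
To try one of these, first locate the Keras examples directory, since the exact path can vary by AMI version. A sketch, assuming the examples live somewhere under the home directory:

# find the first copy of an example, cd to its directory, and run it
cd "$(dirname "$(find ~ -name mnist_cnn.py 2>/dev/null | head -1)")"
python mnist_cnn.py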

There are also many examples for the other frameworks: TensorFlow, MXNet, Caffe, Caffe2, PyTorch, Theano, CNTK, and Torch.

Sunday, November 12, 2017

My blockchain side project to 'give something back'

I am very busy with my new machine learning job, but I always try to set aside some of my free time for occasional side projects that help me learn new technologies. My latest side interest is blockchain technology; specifically, I am interested in blockchain as a platform and environment for AI agents.

I liked Tim O’Reilly’s call to action for corporations and people to take a longer-term view and work on things of lasting value to society in his recent keynote speech: Our Skynet Moment

While I consider myself a talented practitioner who has been building machine learning and general AI applications since 1982, I don't feel that I work at the level of creating groundbreaking technologies myself. So, as far as 'giving something back' to society goes, it seems my best bet is to put some energy into distributed systems that push back against centralized control by corporations and governments: things that empower people.

Although it is really early days, I think the Hyperledger projects look very promising, and I like that the organization is run in a similar fashion to the Apache Foundation.

I would like to start slowly (I don't have much free time right now!) and will record any open source experiments I do at my new site hyperledgerAI.com. I may or may not finish a book project on any open source software that I write: hyperledgerAI: Writing Combined Blockchain and Artificial Intelligence Applications. I will start with small, well documented sample applications built on Hyperledger.

Thursday, September 28, 2017

New job and two deep dives into tech

I haven't written a public blog post in four months because I have been busy moving to another state and starting a new job at Capital One (Master Software Engineer; my role is tech lead and manager of a small machine learning team). Life has been really good: I am enjoying the challenges of the new job, and Carol and I have been enjoying the university town of Urbana/Champaign, Illinois.

I am also finishing up two course specializations at Coursera: Probabilistic Graphical Models and Deep Learning. The deep learning class series is mostly review for me; in contrast, I find the PGM class series very challenging (I am taking the PGM classes at a slow and relaxed pace; Coursera lets you split classes across multiple sessions).

I am reading two books that I can highly recommend. The first is François Chollet's "Deep Learning with Python", which serves as an idea book for advanced use of deep learning with Keras. François is the creator of the Keras framework, and his new book is a treasure trove of new ideas and techniques.

I am also (slowly) reading "Deep Learning" by Goodfellow, Bengio, and Courville. This book aims to be a self-contained reference for deep learning, including a very good prelude containing all the math you need to understand the rest of the book.

I have been taking pictures in the area around Urbana/Champaign. Yes, it is flat here but still beautiful and a nice place to live. I will post some pictures soon.

Tuesday, May 23, 2017

I updated my Natural Language Processing (NLP) library for Pharo Smalltalk

I have recently spent some time playing around in Pharo Smalltalk and in the process made some improvements to my NLP library: I changed the license to MIT and added summarization and sentence segmentation. Older code provides functionality for part-of-speech tagging and categorization.

Code, data, and directions are in my GitHub repository nlp_smalltalk.

My first experience with Smalltalk was in early 1983. The year before, my company had bought a Xerox 1108 Lisp Machine for me, and a Xerox technical salesperson offered me a one-month trial license for their Smalltalk system.

Pharo Smalltalk impresses me very much, both for its interactive programming environment and for the many fine libraries written for it.

I don't spend much time programming in the Pharo environment so I am very far from being an expert. That said, I find it a great programming environment for getting code working quickly.

Thursday, April 06, 2017

I am using Lisp more, and big changes to my consulting business

I haven't been using Lisp languages much in the last five or six years, since I started using Ruby and Haskell more often to experiment with new ideas and projects. Now that I am winding down my remote consulting business (more details below), I want to spend more time writing:

I am currently working on three book projects: "Practical Scheme Programming (Using Chez Scheme and Chicken Scheme)", "Ruby Programming Playbook", and a fourth edition update to "Loving Common Lisp, or the Savvy Programmer's Secret Weapon". All three books will be released under a Creative Commons no-commercial-reuse, no-modifications, share-with-attribution license, so copies of these eBooks can be shared with your friends.

I was very excited when the fantastic Chez Scheme system was open sourced, but some of the excitement was soon tempered by the time required to get much of my existing Scheme code running under Chez Scheme and R6RS. To be honest, much of the work for this book is simply for my own use, but I hope to help other Scheme developers by organizing my work on getting set up for Chez Scheme development, supplying a large number of useful code "recipes", and documenting techniques for using Chez Scheme. I used to mostly use the (also excellent) Gambit Scheme, created by Marc Feeley, for writing small utilities that I wanted to compile down to compact native code. Scheme works better than Common Lisp or Clojure for generating small, fast executables with fast startup times.

Another writing project I am working on now is updating and expanding my Loving Lisp book (it has sold well, and previous buyers get a free upgrade when I am done with the rewrite later this year). I am adding several new examples, many of which will simply be rewrites of Haskell, JavaScript, and Java programs that I have developed in recent years. Rewriting them in idiomatic Common Lisp will be fun.

I have a long history with Common Lisp, starting when I upgraded my Xerox 1108 Lisp Machine in 1983. I have written two successful commercial products in Common Lisp, used a derivative of Common Lisp to program a Connection Machine, and used CL on several large projects.

To brush up on Common Lisp, I am (once again) working through several books written by people with far better Common Lisp programming skills than I have: Edi Weitz, Peter Norvig, and Paul Graham. My Common Lisp code sometimes suffers from using too small a subset of the language, and I don't always write idiomatic code. It is definitely time for me to brush up on my skills as I rewrite my old book.

I have only written one Ruby book in the past (for Apress), but since I use Ruby so much for small "getting stuff done" tasks, I intend to finish a new Ruby book that I started writing about two years ago.

I hope to have all three of these writing projects complete in the next 18 months. When I published my last book "Introduction to Cognitive Computing" last month, I counted my titles and realized that I have written 24 books in the last 29 years. I love every aspect of writing, and I have met many awesome people because I am an author, so the effort is very worthwhile.

Re: big changes to my consulting business:

I have combined remote and onsite consulting for the last 18 years. I used to very much enjoy both, but in the last few years I have found that I don't enjoy working remotely as much as I used to, while I have very much enjoyed onsite gigs where I get to meet my customers rather than just talk on the phone. Working onsite is a very different experience. So, I have stopped accepting remote work except for longer-term projects that include some onsite "face time" with customers.

Sunday, March 05, 2017

Technology, antifragile businesses, and workflow

I have been enjoying Nassim Taleb's book 'Antifragile', from which I have learned (or come to better understand) how difficult, if not impossible, it is to predict the future, especially low-probability events. Taleb convincingly argues that it is possible and desirable to rate personal habits, health issues, businesses, governments, etc. as to where they fall on the fragile <--> robust <--> antifragile spectrum. Robust is good; antifragile is even better.

It is fragile, for example, to depend on the salary from one company to support your family while investing heavily in that company's stock. It is more robust to have a side business that earns extra money and to distribute long-term investments broadly. It is antifragile to own multiple businesses. Taleb argues, and I agree, that it is better to earn less but have safer, more distributed income streams. Personally, I have three businesses: software development consulting, writing books, and being a landlord for income properties. I am in the process of opening a fourth business: iOS and macOS apps for two very different use cases (4/26/2017 edit: I finished prototyping my nutrition/recipe iOS app, but I am putting further iOS development on hold for now).

I read a lot, both books and long essays on the web. For years I have categorized and collected useful articles and snippets of text and relied on text search to find what I need later. Since most of my reading is done on my iPad Pro, I have changed the way I manage research materials: I use the iOS app Notability to quickly mark up what I am reading with business ideas, application ideas, etc., and then export what I am reading into one of a hierarchy of folders on Google Drive. I favor Google Drive over iCloud or Dropbox because all PDFs on Google Drive are easily searchable. Using an Apple Pencil makes the Notability app more convenient. In any case, it is much more useful to capture my own thoughts along with the research materials.

This setup replaces a custom 'Evernote-like' system I built a few years ago: using a Firefox plugin I wrote, I could capture snippets of what I was reading on the web into a custom web app. It was a pain to maintain this system for my own use, and it was in no state to be developed into a product. Relying on a commercial product and on Google Drive for storage is much better.

I have drastically reduced the complexity of my working tools. I have shelved the multitude of Linux systems I own (it would embarrass me to admit how many), and now I just use an iPad Pro, a MacBook with an external Retina display, and a 60 GB RAM, 16-core VPS that I can spin up and down as I need it. Almost all of my development work is now in Ruby (using RubyMine), Java (using IntelliJ), and Haskell (using Emacs and Intero). Having a much simpler work environment cuts my overhead. I have also simplified publishing to the web, favoring Heroku, Google AppEngine, and serving static sites from Google Cloud Storage.

Saturday, January 14, 2017

Happy New Year 2017

Happy New Year everyone!

We live in interesting times. We are witnessing exponential growth in technologies along with social and economic change. I am going to share my personal views on these two topics and then conclude with my 2017 plans for leading a free and inspired life.

It is difficult for us humans to really understand exponential growth of the kind we are seeing in artificial intelligence and other technologies like genetic engineering. One personal way to come to grips with exponential growth is a thought experiment: compare the technological changes in the world between the times you were ten and twenty years old with the changes in the last ten years. Even a few years ago my cellphone did a fairly poor job of understanding my speech; now it understands me almost perfectly, and speech input is the way many of us interact with our mobile devices. In my field of machine learning and artificial intelligence, deep learning neural networks have revolutionized speech recognition, language modeling, image recognition, and predictive modeling. I expect that environmentally safe energy advances like solar power, electricity storage, and possibly viable fusion power will profoundly alter the world for the better in the next ten years. Also remember that with very inexpensive power, fresh water becomes less of a problem, at least near oceans, because of desalination.

The outcome of rapid social and economic change is much less clear. I believe that people should think for themselves when it comes to politics, and in general politics gets too much of our attention. My thoughts on how to live a free and inspired life are influenced by Catherine Austin Fitts, who suggests that we pay more attention to our own physical, mental, and spiritual health than to externalities like politics and the trappings of materialism that do little to improve our lives. Catherine promotes the idea of concentrating on adding value to our work, businesses, and society at large. I tend to divide events in my environment into "things I can affect" and "things I can't do anything about." In principle, I like to put almost all of my energy and creativity into things that I can affect.

In addition to enjoying the company of my family and friends my plans for 2017 include:

  • Spending close to zero time watching the "news" on TV. I believe that 20 or 30 minutes a week reading news on the web, preferably randomly chosen from multiple news agencies in many different countries, is sufficient to understand what I need to know about the world situation. Spending many hours a week in a "mental bubble" watching the same news service on the same TV station in just my own country seems like a colossal waste of time that can be better spent on other activities. In the Data Science sense, I "sample" news sources.
  • I hope to spend even more time writing in 2017. I published Haskell Tutorial and Cookbook at the end of last year and my two current writing projects are Introduction to Cognitive Computing and Haskell Cookbook, Volume 2. My wife Carol helps me with editing my books.
  • Cooking and the science of food is a core personal interest, and I hope to spend a fair amount of time applying AI and machine learning to my recipe site cookingspace.com. Currently I use the USDA Nutrition Database to estimate the amount of core nutrients in recipes and a machine learning model to predict what additional ingredients would taste good in any recipe. I am rewriting the core analysis code in Ruby (the latest version is in Clojure), I plan major web site updates, and I also plan to use the RubyMotion development tools to write apps that use the same analysis code on iOS, Android, and macOS.
  • Spending even more time hiking, kayaking, and at the gym.