Instead of writing an Eliza bot, I'm going to show you how to write an interactive interpreter similar to the one provided with Python. The easiest way would be simply to extend the existing implementation [1][2]. I'm not going to do that in this tutorial, however.
Initially I tried to write this tutorial by slowly accumulating working bits of code and then restructuring it as new demands came up. After a while I realized this approach was far from ideal. At one point I ended up doing some nasty-looking metaprogramming with Python's magic methods. That's when I decided it was time to try a better approach.
Rather than just diving into the code, it's possible to look at this problem in a slightly different way. We could come up with some sort of class design up front (see UML), or try a different kind of development approach and worry about design as we go.
As it is quite easy to specify the behavior of an interactive interpreter, I believe it's better to try the latter alternative in this case. I'm going to use the Test Driven Development (TDD) approach to achieve this.
If you have never heard of this technique before, it may seem a bit backwards at first. Instead of writing the code first and testing it later, you focus on the tests first and derive the code from them. Any code that does not map to some existing test can be considered "dead" and should not be there. In principle, the test coverage should be 100%.
So how should the tests be written, then? As I stated before, it's all about specifying behavior. TDD is an excellent technique for specifying an API. Note that it is not a design technique. It does, however, allow you to refactor your code: you can alter the structure of the code while retaining confidence in it. Essentially it helps you avoid those nasty regressions that so easily sneak into code.
Before writing any tests, let's set up the project!
Setting up the Project
Instead of using the unit testing framework provided with the Python distribution, I'm going to use py.test. If you don't have it set up already, please install it.
Next we need to set up the development environment. Let's do the following tasks:
- Figure out a nice name for the application. This is always the hardest part! I will call mine placidity (courtesy of a random word generator).
- Set up a file structure in the following way (replace placidity with the name of your application of course!):
- /placidity
- (Optional) /placidity/INSTALL
- (Optional) /placidity/README
- /placidity/placidity
- /placidity/placidity/__init__.py
- /placidity/placidity/interpreter.py
- /placidity/placidity/tests
- /placidity/placidity/tests/__init__.py
- /placidity/placidity/tests/test_interpreter.py
- (Optional) Set up your revision control system. I won't provide any specific instructions on how to do this. Please consult the provided link for further information.
Note that it is essential to set things up this way so that the py.test test runner can find your tests and the modules under test. Also note how the files have been named. It's no coincidence that we named the test file with the test_ prefix. py.test also supports a suffix-based convention, so you could have named the file interpreter_test.py as well.
Now that we have set up the structure needed by the project, let's set up the development environment. Fire up your favorite text editor (or IDE) and a terminal. Navigate to your project directory (i.e. /placidity) in the terminal and open up interpreter.py and test_interpreter.py in your text editor.
The First Development Cycle
I will start out by specifying behavior for some basic math. After all, it will be handy to use the interpreter as a calculator. It's time to write the first test:
test_interpreter.py:
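The listing itself isn't reproduced here, so here's a minimal sketch of such a first test, assuming an Interpreter class exposing an interpret method (the method name and the exact expression are assumptions; the import path is confirmed by the error output below):

```python
from placidity.interpreter import Interpreter

def test_addition():
    interpreter = Interpreter()
    # The interpreter should evaluate basic math given as a string.
    assert interpreter.interpret('1+1') == 2
```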
Now that we have written the first test, it's a good time to run "py.test" at the terminal to see the test results. It should print out something along these lines:
```
placidity\tests\test_interpreter.py E
==================================== ERRORS ====================================
ERROR during collection C:\Users\jutuveps\My Projects\placidity\placidity\tests\test_interpreter.py
>   from placidity.interpreter import Interpreter
E   ImportError: cannot import name Interpreter
placidity\tests\test_interpreter.py:1: ImportError
=========================== 1 error in 0.11 seconds ============================
```
Cool! We received an error. This means that we have to add the code needed for it to pass. If you have written Python before, feel free to write one on your own. Here's the one I came up with:
interpreter.py:
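A minimal sketch of an implementation that makes the test above pass, assuming the same interpret method as in the test sketch; in TDD fashion it does no more than the test demands:

```python
class Interpreter:
    def interpret(self, expression):
        # The simplest thing that could possibly work: hardcode the
        # answer the only existing test expects.
        return 2
```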
If you run py.test now, it should pass, meaning your implementation matches the specification. From a technical point of view it's not the greatest implementation, but it does make the test pass, so it's enough. You should not write more code than is absolutely necessary.
Note that instead of running the test runner manually, you can have it run automatically whenever files change by using the --looponfail (or just -f) parameter ("py.test --looponfail"). It's not perfect and sometimes you may need to reset it manually. It's amazing when it works, though. You can see the other parameters you may find useful with "py.test -h".
More Operations
So far we have managed to implement an interactive interpreter that happens to work in this one particular case. It might be more fun if it could do a bit more, so let's add some other operations next. If you want, you can do this one operation at a time: write a test, then write the implementation for it. Here is my code so far:
test_interpreter.py:
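A sketch of the tests at this point, again assuming the interpret method from before; the particular expressions are my own picks. Note how every test repeats the construction of the interpreter:

```python
from placidity.interpreter import Interpreter

def test_addition():
    interpreter = Interpreter()
    assert interpreter.interpret('1+1') == 2

def test_subtraction():
    interpreter = Interpreter()
    assert interpreter.interpret('5-3') == 2

def test_multiplication():
    interpreter = Interpreter()
    assert interpreter.interpret('3*4') == 12

def test_division():
    interpreter = Interpreter()
    assert interpreter.interpret('12/4') == 3
```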
interpreter.py:
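A sketch of the matching implementation, built around Python's eval as explained below:

```python
class Interpreter:
    def interpret(self, expression):
        # Let Python evaluate the expression for us. Note that eval
        # is risky with untrusted input; more on that below.
        return eval(expression)
```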
I cheated a bit in the implementation phase by using Python's eval. An astute reader might complain that eval is a potentially dangerous function. True, but let's worry about that later, okay? We can add some extra tests for the security aspect when needed.
The test file contains a lot of repetition that we can get rid of. Let's restructure it a bit:
test_interpreter.py:
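A sketch of the restructured tests, using py.test's class-based style with a setup_method hook:

```python
from placidity.interpreter import Interpreter

class TestInterpreter:
    def setup_method(self, method):
        # Called before each test method; gives every test a fresh
        # interpreter instance.
        self.interpreter = Interpreter()

    def test_addition(self):
        assert self.interpreter.interpret('1+1') == 2

    def test_subtraction(self):
        assert self.interpreter.interpret('5-3') == 2

    def test_multiplication(self):
        assert self.interpreter.interpret('3*4') == 12

    def test_division(self):
        assert self.interpreter.interpret('12/4') == 3
```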
It looks a lot more compact now. Essentially I changed the tests so that they are contained within a specific test class (note the naming!). In addition, there is a separate setup method that creates the interpreter instance needed by the tests. setup_method is called before each test. It has a counterpart, teardown_method, that is called after each test. That might be useful if you need to set up some connection and then tear it down after the test. In practice you might just mock those sorts of cases instead, though.
Summary
So far we have implemented an interactive interpreter that handles some basic math. It's still missing a lot of vital functionality. It might even be fun to let the user provide input of their own. But that's for a later part of the series.
You may find the source code of this part of the series here. The next part of the series is available here.