Along with the lecture notes, I've posted a couple of resources on resolution: the lecture notes (Chp 9, A in PDF & B in PDF) from Russell's class and some hints I've written.
Answers are posted for the CS250 midterm.
Grades as of today (incl. HW #4, but not the midterm or rough drafts yet) are posted.
A word on feedback: if you've got it, I want it. You can stop by my office, leave me voice mail, or send me an email. The evaluation I handed out worked well because most people had something to say, but don't wait for me to ask -- tell me now. Now for the specific feedback from the forms I handed out today. The short answer is: the feedback was all over the map. Too fast, too slow. Very relevant, irrelevant. Reading was really easy, reading was really hard. Here's a summary of the major points.
Lectures       | Difficulty | Relevance
Mean           |    2.5     |    3.3
Std. Deviation |    0.9     |    1.1
A brief comment on the feedback overall: I didn't see very many people say, "I could have done this better", referring to themselves. Most of the suggestions were about what I (as the instructor) could do differently. That's fine; I'm happy to address concerns. You play a role as well, though, and you need to think about that too. The quizzes are an example. A couple of people said they really weren't necessary and were a waste of time (I've heard this a few times elsewhere as well). However, I think the quizzes are helpful to me. As I've said several times in class, I try to cover the major points, and I grade the quizzes very liberally on a simple scale. They're not so much intended as performance measures of what you're learning overall, and that's reflected in the way they are weighted in the grading.
Let me be a bit more specific. The first question on Quiz 3 asked you to number the nodes as they're visited by DFID. If you did the reading of Chapter 3, DFID should have jumped out at you: it combines two ideas (iterative deepening and DFS) discussed at length in the chapter; neither depth-limited search nor DFS is particularly deep, and combining them should be straightforward. DFID is the best among blind searches. Its advantages are clear from Figure 3.18, which compares the many searches in the chapter. I didn't ask you to prove that uniform cost is optimal, or prove that DFID is efficient. I simply asked you to number the nodes, essentially just like Figure 3.16. You didn't need to remember anything obscure: you just had to know how DFS works and that you iterate over the depth limit. In grading the problem, I pretty much gave you credit if you labeled the nodes more than once. Very few people answered the first question correctly, however. Most people put one number in each box, showing that they have no understanding of the iteration. Some people wrote the numbers in depth-first order, others in breadth-first order, and others in a mix of the two.
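To see why each node should receive more than one number, here's a small sketch of DFID in Python (illustrative only -- the tree, function names, and depth limit here are my own invention, not the quiz tree or the AIMA code). Each iteration reruns depth-limited DFS from the root with a larger limit, so shallow nodes are visited again and again:

```python
from collections import defaultdict

def dfid_numbering(tree, root, max_depth):
    """Depth-first iterative deepening: run depth-limited DFS with
    limit 0, 1, 2, ... and record the order in which nodes are
    visited.  A node is visited once per iteration that reaches it,
    so most nodes end up with several numbers."""
    counter = 0
    labels = defaultdict(list)  # node -> list of visit numbers

    def dls(node, limit):
        nonlocal counter
        counter += 1
        labels[node].append(counter)
        if limit > 0:
            for child in tree.get(node, []):
                dls(child, limit - 1)

    for limit in range(max_depth + 1):
        dls(root, limit)
    return dict(labels)

# A tiny example tree: A has children B and C; B has children D and E.
tree = {'A': ['B', 'C'], 'B': ['D', 'E']}
print(dfid_numbering(tree, 'A', 2))
# -> {'A': [1, 2, 5], 'B': [3, 6], 'C': [4, 9], 'D': [7], 'E': [8]}
```

The root gets a number in every iteration; that repetition is exactly what the one-number-per-box answers were missing.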
The second question (on uniform cost search) essentially asked: if you start with a search that considers node cost and you want it to behave like a search that ignores node cost, how do you do that? Again, this was a pretty big deal in the chapter, and I think not a minor detail but an important concept. The average score on Quiz #3 (among people who took it) was 1.5, on a scale of 0-3. That tells me that either people didn't read Chapter 3, or they did and missed two of the fundamental points of the chapter. That doesn't mean it's necessarily your fault as a student, and it's important information for me to know. The easiest way to convince me that the quizzes are a waste of time is for everybody to ace them.
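The idea behind that question can be shown in a few lines. This is my own illustrative Python sketch (the graph and function names are invented for the example, not from the AIMA code): uniform-cost search expands the cheapest frontier node, but if you replace every step cost with the same constant, path cost becomes proportional to depth and the search behaves like breadth-first search.

```python
import heapq

def uniform_cost(graph, start, goal):
    """Uniform-cost search.  graph maps node -> list of
    (neighbor, step_cost).  Returns (path_cost, path)."""
    frontier = [(0, start, [start])]   # priority queue ordered by cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph[node]:
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# With real costs, UCS finds the cheapest path (via B, cost 6) ...
g = {'A': [('B', 5), ('C', 1)], 'B': [('D', 1)], 'C': [('D', 10)], 'D': []}
print(uniform_cost(g, 'A', 'D'))   # -> (6, ['A', 'B', 'D'])

# ... but with every step cost set to 1, cost = depth, so UCS
# returns the shallowest path, just like breadth-first search.
unit = {n: [(m, 1) for m, _ in edges] for n, edges in g.items()}
print(uniform_cost(unit, 'A', 'D'))   # cost 2: a two-step path
```

Making a cost-sensitive search ignore cost is just a matter of feeding it uniform costs.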
A few people still wish to use Allegro's IDE with Windows, and are a little alarmed at the cascade of warnings they receive when they load aima.lisp and invoke aima-compile(). Now that you've read all about packages, you should realize that the problem stems from this little change that Allegro makes when loading the IDE:
[changing package from "COMMON-LISP-USER" to "COMMON-GRAPHICS-USER"]
Allegro changes the package because the Allegro IDE environment defines a set of primitives for working with graphics, and does so in a separate package. When the names in the AIMA code collide with the Allegro-defined graphics names, you get a message. You can fix this problem by switching the package back to the one AIMA was designed for: COMMON-LISP-USER. To do so, use in-package():
> (in-package "COMMON-LISP-USER")
#&lt;The COMMON-LISP-USER package>
Now when you evaluate AIMA code you shouldn't see as many complaints -- just a few about the ineffectiveness of Allegro Presto. By the way, here's an image built from the IDE and the AIMA code for ACL 5.01.
There's a bug in the AIMA code for parsing custom specifications in grid environments. In the function parse-where(), the clause to handle xy-p() in the body of the cond is incorrect. The code reads:
((xy-p where) (parse-whats where whats env))
However, a quick glance at the definition of parse-whats() will tell you that the arguments should be passed in the order: environment, location and what -- not the order above. The corrected line is:
((xy-p where) (parse-whats env where whats))
Note that the images that I've created have the buggy code in them, so if you're using them you should redefine parse-where() using the corrected definition.
Thanks to Nick McCarthy for pointing out this problem.
I've created a file with notes on questions that arise in class. Some notes clarify (I hope) points made, while others point you to places for more detail.
The Allegro Common Lisp environment is now available in /usr/local/bin from the classes server. You can use the dumped Lisp image to automatically load in all the sample code.
Thanks to an enterprising student in CS250, CLisp is now installed on all the CS Department Linux machines. Most notably, this includes the 20 new Linux machines in the maclab. The program is in the normal place, so typing clisp at the prompt should work (assuming that /usr/local/bin is in your path).
The Allegro Common Lisp environment is available on CDs that you can borrow and install on your own machine. You can use the dumped Lisp image to automatically load in all the sample code. After you've downloaded the image to your machine, simply double-click the program icon to start Lisp. You can enter (test 'search) to verify that the sample code is loaded.
Welcome to CS250! In this course we'll cover the basics of artificial intelligence (AI) along with programming in Lisp. This course will likely be very different from other computer science courses you have taken. The course will be a blend of theory and practice. The theory will be wide-ranging -- we will talk about the nature of intelligence, search spaces and first-order logic. The practice will mostly be implementations of various ideas in Lisp. A couple important points to note:
Seems like a few folks been havin' trouble usin' that confounded MCL in the MacLab. To set things right again, I dun created a "special" MCL with all the AIMA code preloaded. If you're in the MacLab and get a hankerin' to use the AIMA code, try the followin':
(test 'search)
The HyperNews groups should be working now, so test 'em out if you've got comments. This might also be a good way to find collaborators for a final project -- post your project idea and see who else is interested.
I couldn't have put it better myself: "The specification set forth in this document is designed to promote the portability of Common Lisp programs among a variety of data processing systems. It is a language specification aimed at an audience of implementors and knowledgeable programmers. It is neither a tutorial nor an implementation guide."
Windows images (Allegro 5.01): With IDE, without IDE -- Updated as of 10/25/99
Macintosh: MCL 3.0
Code (All in one file)