HW1: Hellocaml

The detailed description of the project is available in the README.md file in the GitHub repository template, which you can access via the link on Canvas.

Overview

This project provides a refresher on OCaml programming, along with some warm-up exercises involving tree manipulation and recursive programming (both of which will be highly useful when building the compiler). It will also familiarize you with the basic workflow of the course projects, including the testing framework that we will use to (partially) automate grading.
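To give a flavor of the kind of code involved, here is a small, self-contained OCaml sketch of a recursive function over a tree type. The type and function names here are purely illustrative and are not taken from the assignment:

(* Illustrative only: a tiny binary tree and a recursive traversal. *)
type tree =
  | Leaf
  | Node of tree * int * tree

let rec sum (t : tree) : int =
  match t with
  | Leaf -> 0
  | Node (l, x, r) -> sum l + x + sum r

let _example = sum (Node (Leaf, 1, Node (Leaf, 2, Leaf)))  (* = 3 *)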

Before you begin

For help getting started with OCaml, see the toolchain web pages and the OCaml web site.

Please also take some time to skim the available resources on the course homepage; in particular, the book Introduction to Objective Caml is a very good reference for learning OCaml. In the problems below, when you see a note like “See IOC 5.2”, please refer to the corresponding section of that book.

Getting Started

Unlike future projects, most of the instructions for this project appear as comments in the source files under the code folder. To get started, run make, then open the hellocaml.ml file and follow the instructions in its comments.

Building the Project

It is recommended that you compile your projects from the command line, using make. We have included a Makefile that provides several make targets that can help you with the homework:

make       --  builds main.native using dune
make test  --  runs the test suite
make clean --  cleans your project directory
make utop  --  starts a utop for the project

For example, using make we can build the project and run the tests all in one go:

> make test
dune build --profile release @install
Info: Creating file dune-project with this contents:
| (lang dune 2.6)
| (name project)
./main.native --test
Running test Student-Provided Tests For Problem 1-3
Running test Problem1-1
Running test Problem1-2
Running test Problem1-3
...

Running and Testing Projects from the Command Line

After compiling the project, you can run it from the command line by executing either the native or the bytecode version.

The projects in this course are designed to have a single, top-level entry point in the file main.ml. Running make compiles this file to the executable main.native, which is linked from the root of the project directory.

This program provides a test harness that can be used from the command line with a variety of switches and command-line arguments, just like any other compiler. You can always check which command-line switches are available by using the -help or --help flags. For example, HW1 supports only one interesting command-line option, --test:

> ./main.native -help
Main test harness

  --test run the test suite, ignoring other inputs
  -help  Display this list of options
  --help  Display this list of options
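As an aside, a harness with this interface is commonly built using OCaml's standard Arg module. The sketch below is an illustrative guess at that wiring, not the actual contents of main.ml:

(* Hypothetical sketch of the harness setup; the real main.ml may differ. *)
let run_tests = ref false

let spec =
  [ ("--test", Arg.Set run_tests, " run the test suite, ignoring other inputs") ]

let () =
  Arg.parse spec (fun _ -> ()) "Main test harness";
  if !run_tests then
    print_endline "would run the test suite here"  (* placeholder *)

Note that Arg.parse generates the -help and --help entries automatically, which is why they appear in the usage output above.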

All of our projects will support the --test option, which simply runs the project’s unit tests, prints a summary of the results, and then exits. It might produce output something like this (a bogus sample) to give you some idea of how much of the project you’ve completed:

> ./main.native --test

Test1:
  case1: failed - not equal
  case2: failed - assert fail
  case3: failed - test threw an unknown exception
Test4:
  OK
Test2 (3/10 points):
  case1: failed - not equal
  case2: failed - not equal
  case3: passed
Test3 (??/20 points):
  Hidden
Test5 (10/10 points):
  OK
---------------------------------------------------
Passed: 5/10
Failed: 5/10
Score: 13/20 (given)
       ??/20 (hidden)
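If you are wondering what named test cases like these look like under the hood, one common pattern is a list of named thunks that raise on failure, as in the self-contained sketch below. This is not the course's actual testing framework (see the provided source files for its real types); it only illustrates the idea:

(* Hypothetical mini-runner: each case is a named thunk that raises on failure. *)
let cases =
  [ ("case1", fun () -> assert (1 + 2 = 3))
  ; ("case2", fun () -> assert (List.rev [1; 2] = [2; 1]))
  ]

let () =
  List.iter
    (fun (name, case) ->
      try case (); Printf.printf "  %s: passed\n" name
      with _ -> Printf.printf "  %s: failed\n" name)
    cases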

Once the compiler projects reach the stage where we can generate good assembly output, the main function will support more interesting command-line options and be able to process input files in a way that should be familiar if you’ve ever used gcc or another compiler.
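As a rough illustration of where this is headed, a gcc-style interface can be layered onto the same Arg-based setup by collecting anonymous arguments as input files. The snippet below is an assumption about that eventual shape, not the project's actual code:

(* Hypothetical: collect non-flag arguments as input files, gcc-style. *)
let input_files = ref []

let () =
  Arg.parse
    [ ("--test", Arg.Unit (fun () -> ()), " run the test suite") ]
    (fun file -> input_files := file :: !input_files)
    "usage: main.native [options] <files>";
  List.iter (fun f -> Printf.printf "would compile %s\n" f) (List.rev !input_files)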

Grading

Submit your solution to this assignment by creating a tagged release on GitHub and providing a link to it on Canvas.

Projects that do not compile will receive no credit!

Your grade for this project will be based on:

  • 64 points for the test cases that are visible to you
  • 25 points for the hidden test cases
  • 11 points for the tests you provide (manually graded)