Protocol Design – FAQ

Last change

We really try to keep this page up-to-date, and use the following markers:

Questions

General

First Sheet

Second Sheet

Answers

How can I access the machines remotely?

Just use ssh to log into any of the course machines. They are named adder, boa, catsnake, cobra, copperhead, keelback, kingsnake, mudsnake, oilsnake, python, seakrait, seasnake, and treesnake; the domain is net.t-labs.tu-berlin.de

An example of a full hostname would be: treesnake.net.t-labs.tu-berlin.de
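For example, if your course login were prak_jane (a made-up name following the prak_ naming scheme used below), you could log in with:

  ssh prak_jane@treesnake.net.t-labs.tu-berlin.de

Any of the other machine names works the same way.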

I want to make my home directory readable to my partner, so that we can share code

We are using AFS for our home directories. Most notably, this means that most of the standard Unix permission bits on files are ignored. Instead, you can change permissions for entire directories using the fs command. Example: if your partner has the login prak_john, you can give him read access to your home directory by typing cd $HOME ; fs setacl . prak_john read
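As a minimal sketch (prak_john stands for your partner's actual login):

  cd $HOME
  fs listacl .                  # show the current access control list of your home directory
  fs setacl . prak_john read    # grant prak_john read access ("read" is AFS shorthand for the rl bits)

Note that AFS access control lists apply per directory, so you may need to repeat the fs setacl call for existing subdirectories you want to share.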

What are the requirements for the submissions?

The solutions

How do I submit a solution?

Example: suppose all the files of your solution to assignment 1 are in the directory sheet1. In this case you just call labcourse-submit.sh sheet1 1 . If you forgot something or want to correct your submission, just run the same command again; only the last submission counts. If you are running out of time before the deadline, you might want to submit at least one solution early enough, and try to squeeze in a better one just before the deadline.
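A minimal sketch of that workflow (the directory name sheet1 is just the example from above):

  ls sheet1/                     # double-check that everything you want to hand in is there
  labcourse-submit.sh sheet1 1   # submit the directory sheet1 as the solution to assignment 1
  # ... fix or add files ...
  labcourse-submit.sh sheet1 1   # resubmit; only this last submission counts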

How do the two-person teams work?

What is a file like this: name.gz?

It is a compressed file that has been produced with the gzip command. To dump its contents to standard output, use one of these commands:
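  zcat name.gz          # decompress to standard output
  gunzip -c name.gz     # equivalent; -c keeps the original file and writes to stdout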

It is usually not advisable to uncompress such files into an uncompressed copy on disk, since the uncompressed data can grow really large.

Perl allows you to directly open such files using

open(DATEIHANDLE, "zcat name.gz|")

which will uncompress the file on the fly.
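A minimal sketch of reading a gzipped file line by line this way (the filehandle and file names are just placeholders):

  # read name.gz line by line without creating an uncompressed copy on disk
  open(DATEIHANDLE, "zcat name.gz|") or die "cannot start zcat: $!";
  while (<DATEIHANDLE>) {
      chomp;                # remove the trailing newline
      print "got: $_\n";    # process the line; here we just echo it
  }
  close(DATEIHANDLE);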

How to deal with data that is dumped to standard output?

There is no need to write programs all the time. Sometimes it is possible to get important information with just one line of shell code.

You can use so-called pipes. A pipe connects the standard output of one program to the standard input of another program. Many Unix tools are able to process data from standard input and return the results on standard output. If you want to view the contents of a compressed file, you can pipe the output of the decompression tool to a pager such as less: zcat file.gz | less

You can even chain multiple pipes, which can be very powerful. One example:

zcat name.gz | grep "http" | sort -n -r | uniq | head >output.txt
  1. This reads a compressed file from disk and writes the uncompressed contents to standard output (zcat),

  2. passes through only those lines that contain http (grep),

  3. sorts (sort) these lines by their numeric value (in the first column) (-n) in reverse order (-r),

  4. removes duplicate lines (uniq; since the input is sorted, all duplicates are adjacent),

  5. returns only the first few lines of this possibly still very long list (head)

  6. and redirects this output into a file (>output.txt).

Note that this is still just a single line passed to the shell, yet it solves a fairly sophisticated task without any programming.

How to process such a file using perl?

The following combined Perl/shell one-liner counts how often each HTTP return code appears in the logfile.

zcat logfile.gz | perl -e 'while(<>){s/\s+/ /g;@l=split;@f=split("/",$l[3]);$p{$f[1]}++;}foreach $i (sort keys %p) {print "$i:$p{$i}\n";}'
  

Obviously, this one-liner is barely readable (and hence does not comply with the submission rules). Here it is again in commented form:

zcat logfile.gz   uncompresses the logfile to stdout
|   connects zcat's stdout to perl's stdin
perl -e   runs perl with the quoted (very long) argument as the script to execute
while() {}   while loop
<>   reads the next line from stdin into $_
while(<>) {}   iterates over all lines of stdin
s/\s+/ /g;   globally replaces runs of whitespace with a single space
@l=split;   splits $_ on whitespace and stores the individual words in @l
@f=split ("/", $l[3]);   splits the fourth word ($l[3]) on "/" and stores the parts in @f
$p{$f[1]}   uses the second entry of @f as a key into the hash %p
$p{$f[1]}++;   increments the value stored under that key by one
keys %p   the list of all keys of the hash %p
sort keys %p   that list of keys, sorted
foreach $i (sort keys %p) {}   iterates over all keys of the hash, in sorted order
print "$i:$p{$i}\n";   prints the key $i and the count stored under it to stdout

As you can see, Perl can be used on the fly, but such one-liners tend to be heavily obfuscated, so proper documentation is required in your submission. More advanced options for invoking perl can be found in man perlrun
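For comparison, here is a sketch of the same logic as a stand-alone, commented script (the file name count_codes.pl and the use of STDIN are just one possible setup; the field positions match the one-liner above):

  #!/usr/bin/perl
  # count_codes.pl -- count how often each value of the fourth
  # whitespace-separated field, split on "/", appears on stdin.
  use strict;

  my %count;
  while (my $line = <STDIN>) {
      $line =~ s/\s+/ /g;                   # collapse whitespace
      my @words  = split ' ', $line;        # split on whitespace
      my @fields = split '/', $words[3];    # split the fourth word on "/"
      $count{$fields[1]}++;                 # count its second part
  }
  foreach my $key (sort keys %count) {
      print "$key:$count{$key}\n";
  }

Run it as: zcat logfile.gz | perl count_codes.pl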

What should the server respond if the requested file exists, but is not a regular file?

The server should reply with the code 200 only if the file is a regular file, and with a 55x code otherwise.
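A minimal sketch of that check in Perl (status_for_existing_path is a hypothetical helper, and 550 is only a placeholder; use whichever 55x code the assignment sheet specifies):

  #!/usr/bin/perl
  use strict;

  # 200 only for regular files; a 55x code for everything else that
  # exists (directories, devices, sockets, and so on). The caller is
  # assumed to have already checked that the path exists at all.
  sub status_for_existing_path {
      my ($path) = @_;
      return -f $path ? 200 : 550;    # 550 is a placeholder for the 55x code from the sheet
  }

  my $path = shift @ARGV or die "usage: $0 <path>\n";
  print status_for_existing_path($path), "\n";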