11. **3D model of [fractal](fractal.md)**: Make a program that outputs a 3D model of either the Sierpinski triangle or the Koch snowflake fractal. The output shall be in some simple 3D format like obj or Collada. The model can be primitive, i.e. it can be just a flat shape made of triangles which don't have to really be connected, but the program must allow specifying the number of iterations of the fractal (during invocation, e.g. as a CLI flag). Check that the model is correct by opening it in some 3D editor such as Blender. (A small sketch of one possible approach is shown below this list.)
12. **[steganography](steganography.md)**: Make a program that hides text strings in either pictures, sounds or other text. The program must be a nice [CLI](cli.md) utility that performs both encoding and decoding -- it will allow the user to specify the string to hide (this string can be simplified to take less space, e.g. it may be converted to all caps, special characters may be removed etc.) and the data in which to embed it. The size of the string that can be encoded will of course be limited by how much space there is in the data, so you can reject or shorten the string if that's the case. The string must NOT be hidden in metadata (i.e. exif tags, file header, after the data, ...), it must be encoded in the useful data itself, i.e. in pixels of the picture, samples of the sound or characters of the text, but it mustn't be apparent that there is something hidden in the data. Use some simple technique, for example in images and sound you can often change the least significant bits without it being noticed (this idea is sketched below the list), in text you can insert typos, hyphens, replace some periods with semicolons etc. Get creative.
13. **[sudoku](sudoku.md) solver**: Create a program to which the user somehow passes a sudoku puzzle (in a file, through a CLI flag, interactively... the choice is yours, but passing a new puzzle mustn't require program recompilation) and the program attempts to solve it. It must first employ some basic reasoning, at the very least it has to repeatedly try the elimination method, i.e. marking a set of possible values in each empty square and then reducing these sets by crossing out values that can't be in that square because the same value is in its column/row/minisquare -- wherever only one value remains in the set, it is filled in as final; this has to be repeated until no more progress is being made (a sketch of this elimination step is shown below the list). If you want, you can employ other techniques as well. After this if the puzzle is still not solved, the program will resort to [brute force](brute_force.md) which has to eventually lead to a solution (even if it would take too long). If the program finds that the puzzle is unsolvable, it has to report it.
14. **[football](football.md) simulator**: Imagine you're moving to live in the woods in a cabin that has no TV or Internet, but it will have a simple solar-powered computer. You are a fan of football and like to watch the matches, so you want to write yourself a program that will simulate a game of football for you to watch. Of course it doesn't have to compete with FIFA games, it can be pretty simplified, but it must have some kind of true 2D graphics (not just ASCII art), done for example with SDL, SFML, Allegro, Xlib or anything similar. It doesn't have to be super realistic, players can be drawn even just as circles (thumbs up for some kind of stick figures), but you must implement the basic rules, including offside, players must have different attributes (name, speed, accuracy etc.), they must be able to foul etc. There must also be referees who may sometimes make a bad judgement. Of course the program must show the score, time etc., as well as some basic statistics (number of shots on the goal etc.). The program must allow some flexibility and settings, for example letting 5 very skilled players play against 10 noob players etc. -- this may be implemented with command line flags so that you don't have to implement a graphical menu. If you really hate football you may choose to implement a similar game, for example ice hockey or American football, but it really has to be comparable to this exercise, don't make it easier for yourself.
15. **language recognizer**: Make a program that will be able to learn and then recognize the language of text it is given (in theory it should work for any kind of language, be it human or computer language). Specifically the program will first get *N* files, each one representing a different language (e.g. 5 books in different human languages), then it will take some other text and say to which of the initial *N* files it is linguistically most similar. For simplicity assume only plain ASCII files on input (you can just use some Unicode to ASCII utility on all input files). Use some simple [machine learning](machine_learning.md) technique such as some variant of k-NN. It will suffice if for each training example you construct a vector of some nice features, for example {average word length, vowel/consonant ratio, relative frequency of letter A, relative frequency of letter E, ...}, give each component some weight and then just find the nearest neighbour to the tested sample in this feature space (if you want to be more fancy, split the input files into parts so you get more training samples, then try k-NN with some convenient k). You shouldn't (and CANNOT) use neural networks, and of course you CANNOT use any machine learning library ;) You don't have to achieve great accuracy but your program should likely be able to quite reliably tell e.g. German from C++. (The nearest neighbour idea is sketched below the list.)
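For exercise 11 (3D model of fractal), here is a minimal sketch of one possible approach in C -- recursively subdivide a triangle, keep the three corner triangles and write them out as flat obj geometry. The base triangle coordinates and the default iteration count are just made up for the illustration, this is a hint, not the required solution.

```c
/* sierpinski_obj.c -- minimal sketch: writes a flat Sierpinski triangle as obj.
   Usage: ./sierpinski_obj N > sierpinski.obj   (N = number of iterations)    */
#include <stdio.h>
#include <stdlib.h>

static int vertexCount = 0; /* obj vertex indices start at 1 */

/* print one flat triangle (z = 0) as three vertices and one face */
void emitTriangle(double ax, double ay, double bx, double by, double cx, double cy)
{
  printf("v %f %f 0\n", ax, ay);
  printf("v %f %f 0\n", bx, by);
  printf("v %f %f 0\n", cx, cy);
  printf("f %d %d %d\n", vertexCount + 1, vertexCount + 2, vertexCount + 3);
  vertexCount += 3;
}

/* recursively subdivide: keep the three corner triangles, drop the middle one */
void sierpinski(double ax, double ay, double bx, double by, double cx, double cy, int depth)
{
  if (depth <= 0)
  {
    emitTriangle(ax, ay, bx, by, cx, cy);
    return;
  }

  double abx = (ax + bx) / 2, aby = (ay + by) / 2,
         bcx = (bx + cx) / 2, bcy = (by + cy) / 2,
         cax = (cx + ax) / 2, cay = (cy + ay) / 2;

  sierpinski(ax, ay, abx, aby, cax, cay, depth - 1);
  sierpinski(abx, aby, bx, by, bcx, bcy, depth - 1);
  sierpinski(cax, cay, bcx, bcy, cx, cy, depth - 1);
}

int main(int argc, char **argv)
{
  int iterations = argc > 1 ? atoi(argv[1]) : 3; /* iteration count as CLI argument */

  sierpinski(0, 0, 1, 0, 0.5, 0.866, iterations); /* arbitrary base triangle */
  return 0;
}
```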
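For exercise 12 (steganography), a minimal sketch of the least significant bit trick, operating on a plain byte buffer -- the fake pixel data here is made up, a real program would read actual picture or sound samples and wrap this in a CLI.

```c
/* lsb_steg.c -- minimal sketch of the least significant bit idea on a raw
   byte buffer (e.g. pixel or sample values); a real program would add file
   I/O and a command line interface around this. */
#include <stdio.h>
#include <string.h>

/* hide NUL-terminated string in LSBs of data (one bit of text per data byte),
   return 0 on success, 1 if the string doesn't fit */
int encode(unsigned char *data, unsigned long dataLen, const char *text)
{
  unsigned long bits = 8 * (strlen(text) + 1); /* include terminating 0 */

  if (bits > dataLen)
    return 1;

  for (unsigned long i = 0; i < bits; ++i)
  {
    int bit = (text[i / 8] >> (i % 8)) & 1;
    data[i] = (data[i] & ~1) | bit; /* rewrite only the lowest bit */
  }

  return 0;
}

/* read the hidden string back out, up to maxLen - 1 characters */
void decode(const unsigned char *data, char *text, unsigned long maxLen)
{
  for (unsigned long c = 0; c < maxLen - 1; ++c)
  {
    text[c] = 0;

    for (int b = 0; b < 8; ++b)
      text[c] |= (data[8 * c + b] & 1) << b;

    if (!text[c])
      return;
  }

  text[maxLen - 1] = 0;
}

int main(void)
{
  unsigned char fakePixels[512]; /* stands in for real picture/sound data */
  char recovered[64];

  for (int i = 0; i < 512; ++i)
    fakePixels[i] = i * 7 + 3; /* arbitrary "image" content */

  encode(fakePixels, 512, "HELLO");
  decode(fakePixels, recovered, 64);
  puts(recovered); /* prints HELLO */
  return 0;
}
```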
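For exercise 13 (sudoku solver), a minimal sketch of the repeated elimination step only -- the hardcoded puzzle is just for the demonstration and the brute force fallback isn't shown.

```c
/* sudoku_elim.c -- minimal sketch of the repeated elimination step; a full
   solver would fall back to brute force (backtracking) when this gets stuck. */
#include <stdio.h>

int grid[9][9] = { /* 0 = empty square; puzzle hardcoded only for the demo */
  {5,3,0, 0,7,0, 0,0,0},
  {6,0,0, 1,9,5, 0,0,0},
  {0,9,8, 0,0,0, 0,6,0},
  {8,0,0, 0,6,0, 0,0,3},
  {4,0,0, 8,0,3, 0,0,1},
  {7,0,0, 0,2,0, 0,0,6},
  {0,6,0, 0,0,0, 2,8,0},
  {0,0,0, 4,1,9, 0,0,5},
  {0,0,0, 0,8,0, 0,7,9}};

/* one pass of elimination; returns how many squares were filled in */
int eliminate(void)
{
  int filled = 0;

  for (int r = 0; r < 9; ++r)
    for (int c = 0; c < 9; ++c)
    {
      if (grid[r][c])
        continue;

      int possible[10] = {0,1,1,1,1,1,1,1,1,1}; /* candidate set for this square */

      for (int i = 0; i < 9; ++i)
      {
        possible[grid[r][i]] = 0;                                 /* same row */
        possible[grid[i][c]] = 0;                                 /* same column */
        possible[grid[r / 3 * 3 + i / 3][c / 3 * 3 + i % 3]] = 0; /* same minisquare */
      }

      int count = 0, last = 0;

      for (int v = 1; v <= 9; ++v)
        if (possible[v])
        {
          count++;
          last = v;
        }

      if (count == 1) /* only one value remains => fill it in as final */
      {
        grid[r][c] = last;
        filled++;
      }
    }

  return filled;
}

int main(void)
{
  while (eliminate()) /* repeat until no more progress is being made */
    ;

  for (int r = 0; r < 9; ++r)
  {
    for (int c = 0; c < 9; ++c)
      printf("%d ", grid[r][c]);

    putchar('\n');
  }

  return 0;
}
```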
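For exercise 15 (language recognizer), a minimal sketch of the feature vector plus nearest neighbour idea, with tiny hardcoded strings standing in for the *N* training files -- the chosen features and their scaling are arbitrary, just to show the principle.

```c
/* lang_knn.c -- minimal sketch of the feature vector + nearest neighbour idea;
   a real program would compute the vectors from the N training files instead
   of the short hardcoded samples used here. Compile with -lm for sqrt. */
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <math.h>

#define FEATURES 3

/* features: average word length, vowel/letter ratio, ';' '{' '}' density */
void featureVector(const char *text, double v[FEATURES])
{
  unsigned long letters = 0, vowels = 0, words = 1, special = 0, len = strlen(text);

  for (unsigned long i = 0; i < len; ++i)
  {
    char c = tolower(text[i]);

    if (c >= 'a' && c <= 'z')
    {
      letters++;
      vowels += (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u');
    }
    else if (c == ' ')
      words++;

    special += (c == ';' || c == '{' || c == '}');
  }

  v[0] = letters / (double) words / 10.0; /* scaled so features have similar weight */
  v[1] = letters ? vowels / (double) letters : 0;
  v[2] = len ? 10.0 * special / (double) len : 0;
}

double distance(const double a[FEATURES], const double b[FEATURES])
{
  double d = 0;

  for (int i = 0; i < FEATURES; ++i)
    d += (a[i] - b[i]) * (a[i] - b[i]);

  return sqrt(d);
}

int main(void)
{
  const char *samples[2] = { /* stand-ins for the N training files */
    "the quick brown fox jumps over the lazy dog and runs away over the hill",
    "int main(void) { for (int i = 0; i < 10; ++i) { printf(\"%d\", i); } return 0; }"};
  const char *tested = "while (x < 100) { x = x * 2; count++; }";

  double sampleV[2][FEATURES], testedV[FEATURES];
  int best = 0;

  featureVector(tested, testedV);

  for (int i = 0; i < 2; ++i)
  {
    featureVector(samples[i], sampleV[i]);

    if (distance(testedV, sampleV[i]) < distance(testedV, sampleV[best]))
      best = i; /* nearest neighbour in the feature space wins */
  }

  printf("most similar to sample %d\n", best); /* sample 0 is English, sample 1 is C */
  return 0;
}
```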
### Level 3: Hard, *Ultra Violence*