
CS297/298 Project News Feed
Work done since April 3rd
(Posted on Tue, 17 Apr 2007 12:10:01 PDT.)
1. Finished the hierarchical clustering implementation:
-H, --hier: build hierarchical clusters
./gnuchess -H -e -p
2. Finished implementing all the other extensions:
All extensions:
1. -T, --ntree: build and look up n-ary trees
./gnuchess -T -e -p
2. -H, --hier: build hierarchical clusters
./gnuchess -H -e -p
3. -b, --bookpvs: look up the book at each depth of PVS
./gnuchess -b -e -p
4. -t, --treepvs: look up the n-ary tree at each depth of PVS
./gnuchess -t -e -p
5. -u, --updatescore: update the score during PVS if a book lookup succeeds
./gnuchess -u -e -p
(Looks up the book at each depth of PVS and updates the PVS score if found.)
6. -U, --tupdatescore: update the score during PVS if an n-ary tree lookup succeeds
./gnuchess -U -e -p
(Looks up the n-ary tree at each depth of PVS and updates the score if found.)
TODO:
1. Run various statistics.
2. Write the report.
3. Set a presentation date.

Minutes from April 3rd
(Posted on Sun, 08 Apr 2007 00:01:04 PDT.)
Discussed my version of clustering of game boards.
Went over the logic I implemented.
We decided to do a proper hierarchical clustering implementation as part of the final submission.
I have to do the following four things:
1. Write the report.
2. Collect statistics for the tree structure derived from human games and from games played by the PVS algorithm.
3. Finish implementing my algorithm.
4. Finish the hierarchical clustering implementation.

Work done since March 13th
(Posted on Tue, 03 Apr 2007 09:48:25 PDT.)
Completed:
Built hierarchical clusters which are used during the move lookup.
Distance metric: the distance metric used is the "number of moves needed to convert one board into another board".
The cluster entries are stored in an n-ary tree format. The children are sorted so that the
leftmost child is always the best move –
it has the best ratio = (wins - losses)/(wins + losses).
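The child ordering can be done without floating point by cross-multiplying the two ratios. A minimal sketch; the struct and function names are mine, not the project's actual code:

```c
#include <stdlib.h>

/* Hypothetical child record; stands in for the post's struct cboard. */
struct child { int wins; int losses; };

/* Compare (wins - losses)/(wins + losses) without floats:
 * cross-multiply, since both denominators are non-negative. */
static int ratio_cmp(const void *pa, const void *pb)
{
    const struct child *a = pa, *b = pb;
    long lhs = (long)(a->wins - a->losses) * (b->wins + b->losses);
    long rhs = (long)(b->wins - b->losses) * (a->wins + a->losses);
    if (lhs > rhs) return -1;   /* better ratio sorts first (leftmost) */
    if (lhs < rhs) return 1;
    return 0;
}

/* Sort children so that index 0 (the leftmost child) is the best move. */
void sort_children(struct child *c, size_t n)
{
    qsort(c, n, sizeof *c, ratio_cmp);
}
```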
Some findings from sample runs:
1. 6150 games were processed and 12 hierarchical clusters were built.
2. The maximum distance between two elements in a cluster is around 20.
3. The first clusters contain a lot of elements. This is because, when we want to add a
new entry, we add it to the first cluster that can accept it.
Hierarchical cluster lookup algorithm:
1. The user has made a move; the computer (black) needs to make its next move.
2. If this is the first black move of the game, check whether our clusters contain an entry
for the current hash key. If there is one, its leftmost child is the best move, so use
the move from the leftmost child of this entry.
Remember this entry as the "last black move".
3. For every later black move, all we need to do is check the children of the
"last black move" entry for a match against the white move the user just made.
If none matches, we do not have that entry in our cluster.
If one matches, use that white entry's leftmost black child.
This makes move lookups very efficient: we know exactly
which cluster and which entry to look up, and we simply
pick the leftmost child.
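The steps above can be sketched as a small C routine; `struct node` and the function name are hypothetical stand-ins for the post's `struct cboard`:

```c
#include <stddef.h>

#define MAX_MOVES_PER_TYPE 16

/* Simplified n-ary tree node, modeled loosely on struct cboard. */
struct node {
    int move;                                /* move that led to this board */
    int nchildren;
    struct node *child[MAX_MOVES_PER_TYPE];  /* sorted: child[0] is best */
};

/* After white plays `white_move`, step down from the remembered
 * "last black move" entry and take the leftmost (best) black reply.
 * Returns NULL when the line has left the clustered book. */
struct node *next_black_entry(struct node *last_black, int white_move)
{
    int i;
    if (last_black == NULL)
        return NULL;
    for (i = 0; i < last_black->nchildren; i++) {
        struct node *w = last_black->child[i];
        if (w->move == white_move)
            return w->nchildren > 0 ? w->child[0] : NULL;
    }
    return NULL;   /* white's move is not in our cluster */
}
```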
Statistical report on using hierarchical clusters: in progress.
TODO:
Run the hierarchical clusters along with my PVS enhancement.
Assign a weight factor based on the PVS move's location in the cluster.
At each depth, look up moves belonging to a cluster and try to steer the game toward a game already seen.

Minutes from March/13/2007
(Posted on Tue, 03 Apr 2007 09:47:26 PDT.)
Finished building n-ary trees from the game database.
The tree output was shown with the ratios of the nodes.
During the meeting, further logic was discussed:
1. To build hierarchical trees.
2. To find correlation between node ratios and PVS scores.
3. To use the above correlation value to compare PVS tree with book trees and to use that to prune the PVS tree.

Clustering of Book moves
(Posted on Tue, 06 Mar 2007 11:21:52 PST.)
In this enhancement, we will build the book entries as a set of different
clusters. Each cluster will be an n-ary tree. This makes lookups
a bit faster, since we will know the next move to make from our tree;
there is no need to generate legal moves at runtime to do the
lookups. This will be further enhanced to compute distances between
clusters and to migrate the current game to a game in the cluster by choosing
the appropriate moves.
Data Structures:
/*
 * This many next moves (children) for each entry in the tree.
 */
#define MAX_MOVES_PER_TYPE 16

/*
 * Each entry in the tree.
 * (This will be expanded or optimized later on, once we start calculating
 * ratios, sorting them, and finding distances between clusters.)
 */
struct cboard {
    HashType b_hkey;             /* Hash key after this move */
    char b_cmove[SANSZ];         /* Character move format */
    int b_move;                  /* The move that led to this new board */
    short b_side;                /* Side that played this move */
    uint16_t b_wins;             /* Number of wins on making this move */
    short b_index;               /* Where the next child will go */
    short b_cluster;             /* My cluster number */
    struct cboard *b_parent;     /* My parent */
    struct cboard *n_bp[MAX_MOVES_PER_TYPE]; /* The children! */
};

/*
 * This is where the book entries are stored.
 * A new field "cluster" is added to indicate which cluster
 * this key (hash key) is in.
 */
static struct hashtype {
    uint16_t wins;
    uint16_t losses;
    uint16_t draws;
    HashType key;
    int cluster;                 /* New field indicating the cluster */
} *bookpos;
Algorithm:
1. Build the book lookup array (bookpos) as usual.
2. Start second pass reading the book to build the clusters.
3. Read a move from the book.
case 1: The current hash key is in bookpos and not in any cluster.
If this is a new game we are reading, start building a new cluster;
this entry will be the root of the cluster. Update the bookpos[] array
to record that this move (hash key) is now in this cluster, and fill in
the cluster entry with the information from the structure above (wins,
move, side, etc.).
case 2: The current hash key is in bookpos and not in any cluster.
If this is not a new game (not the first move), add this move to an
existing cluster. To do this, remember the previous hash key (PrevHashKey)
from when the last cluster entry was updated, and add this new entry as a
child of PrevHashKey. If there is no room for further children, skip this
move (at most 16 children are stored).
case 3: The current hash key is in bookpos and already in some cluster.
Skip this move, since the entry has already been added and its
wins/losses/draws counts were already updated in one of the cases above.
4. Continue reading book moves until the end: create a new cluster wherever
necessary (up to the limit), add to an existing cluster (up to the limit), or
skip the move if it was already added.
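The three cases reduce to a small decision function; a sketch with made-up names, not the actual implementation:

```c
enum cluster_action { START_NEW_CLUSTER, ADD_TO_PREV, SKIP_MOVE };

/* Decide what to do with the next book move, following the three cases
 * above. `in_cluster` means the bookpos entry is already clustered,
 * `first_move` means this move starts a new game, and `prev_full` means
 * the PrevHashKey entry already has MAX_MOVES_PER_TYPE children. */
enum cluster_action classify(int in_cluster, int first_move, int prev_full)
{
    if (in_cluster)
        return SKIP_MOVE;            /* case 3: already clustered */
    if (first_move)
        return START_NEW_CLUSTER;    /* case 1: new game, new root */
    if (prev_full)
        return SKIP_MOVE;            /* no room among the 16 children */
    return ADD_TO_PREV;              /* case 2: child of PrevHashKey */
}
```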
DONE:
The current demonstration uses only a binary tree.
This will be enhanced to work correctly with an n-ary tree.
TODO:
1. Do n-ary tree traversals
2. Print the output in tree format
3. Sort the children based upon the wins and losses ratio
4. Run autoplay to collect statistics

Minutes from Feb 27th
(Posted on Tue, 06 Mar 2007 09:13:18 PST.)
Discussed the statistical report.
Worked on designing the data structures for the cluster implementation and on learning about trees/graphs and hierarchical clustering.
Next Week:
Try to finish the cluster implementation.

Statistical Report
(Posted on Tue, 27 Feb 2007 12:11:16 PST.)
1. GNUchess Vs GNUchess:
No book moves looked up by anyone
Total games: 200
White wins: 185 (92%)
Black wins: 12 (6%)
Draws: 3 (1%)
Both white and black seem to have made the same moves over and over
again.
For at least the first 30 moves, many games looked similar. By that time,
white had already gained a big advantage and won most of the games.
2. Only white doing book lookups
Total games: 200
White wins: 93 (46%)
Black wins: 54 (27%)
Draws: 53 (26%)
The result here differs from the first run: because of the
book moves, the games deviated from one another.
3. Only black doing book lookups
Total games: 200
White wins: 79 (39%)
Black wins: 60 (30%)
Draws: 61 (30%)
Here we can see that black did much better: it won more games
and drew more games compared to the first two data sets above.
4. Only black making book moves and looking up book even after
the usual 10 book moves
Total games: 200
White wins: 80 (40%)
Black wins: 41 (20%)
Draws: 79 (40%)
Total moves: 25000 (approx white = 12500 and black = 12500)
Total book moves used: 1086 (approx 4.3 %)
Book Move: 1. Used: 200
Book Move: 2. Used: 200
Book Move: 3. Used: 188
Book Move: 4. Used: 178
Book Move: 5. Used: 109
Book Move: 6. Used: 90
Book Move: 7. Used: 83
Book Move: 8. Used: 19
Book Move: 9. Used: 6
Book Move: 10. Used: 2
Book Move: 11. Used: 2
Book Move: 12. Used: 2
Book Move: 13. Used: 0
Book Move: 14. Used: 1
Book Move: 15. Used: 0
Book Move: 16. Used: 0
Book Move: 17. Used: 0
Book Move: 18. Used: 0
Book Move: 19. Used: 0
Book Move: 20. Used: 0
It is very clear that once we are past 10 moves for white and black,
book entries seldom match. That seems to be the reason why gnuchess
chose a default maximum of 10 book-move lookups per game.
5. White: No changes, no book. Black: Book+PVS extension with weight
Total games: 100
White wins: 39 (39%)
Black wins: 22 (22%)
Draws: 39 (39%)
Number of times book moves were found during PVS but not initially: 22
There is not much difference between runs 4 and 5 above, since the
number of book moves found during PVS is very small. Most of the
games used either the book or PVS for the most part, yielding similar
results.
6. Black: No changes, no book.
White: Book+PVS extension with weight
White wins: 76
Black wins: 29
This shows that when the enhanced engine plays as white, the win numbers are quite different.

Minutes from Feb 20th
(Posted on Tue, 27 Feb 2007 09:31:43 PST.)
1. My PVS modification was run against the stock GNU Chess program.
Out of 105 games:
white won 76 times and black won 29 times.
In general, the engine with the enhancements usually made moves faster,
since it was using both PVS and the book. The other engine
played completely on its own using only PVS.
1. White usually has an advantage since it is on the "offensive".
2. After making a few moves (around 15 or so), both engines just use PVS, since book matches usually can no longer be found. This happens even with a large database of more than 100,000 games. Since there are so many possible game combinations, once you digress from the book into your own game, it is really hard to find a match in the book again.
2. New logic for finding the distance between two game boards was discussed:
1. For each move in the book, find all of its next moves (max 16?). Repeat finding next moves
down to depth 5, and store that information.
Now the children have to be ordered from the largest to the smallest. For that, compare each child with all of its siblings.
Assume you are computing for node C with n children:
child Ci comes before its sibling Cj (for i < j) when
(Wi - Li)/(Wi + Li) > (Wj - Lj)/(Wj + Lj)
where the first ratio is Cri.
Similarly, calculate the ratios for all the children and order them accordingly.
2. Now, to find the distance between two boards C and D, where D has m children (with r = 5):
d(C, D, r) = sum over i = 1 .. max(n, m) of [ d(Ci, Di, r-1) + |Cri - Dri| ]
d(C, D, 0) = sum over i = 1 .. max(n, m) of |Cri - Dri|
3. Based on the distances found above, cluster the game boards hierarchically.
Assignment for next week:
1. Run more statistical reports.
2. Implement the above clustering algorithm.

Minutes from Feb 13th
(Posted on Tue, 20 Feb 2007 20:44:41 PST.)
Finished the enhancement to the PVS algorithm.
In this enhancement, if the current board is not found in
the game database, then while PVS is generating moves it will
consult the book. If a move is found in the book, a
"weightage" is applied to the score achieved at that point. This weight
takes into account the elapsed time, the wins, losses and draws, and the depth (ply).
score = score * weight_factor
weight_factor = (time weight factor) +
(wins/losses/draws weight factor) +
(depth weight factor)
The weight factor would look like this:
weight = ((8192 * (int)(SearchTime - ElapsedTime)) / (int)SearchTime) +
(bookpos[j].wins - bookpos[j].losses + bookpos[j].draws / 2) +
(10 * (MAX_DEPTH - Idepth) / MAX_DEPTH);
A large weight is given to "time". If a good move is found in one second
when PVS normally takes 5 seconds, then it is a big gain. If the move
is also found in the initial stages (smaller depth) it also gets a higher
weightage.
The idea is that a better score due to the weight factor will help
PVS pick this move early enough without spending too much time.
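Restated as a standalone C function; variable names mirror the snippet above, and MAX_DEPTH = 64 is my assumption, not a value from the post:

```c
#define MAX_DEPTH 64   /* assumption; not taken from the post */

/* Weight factor from the entry above, with the minus signs restored.
 * A larger time saving, a better win/loss record, and a shallower
 * depth all increase the weight. */
int weight_factor(int SearchTime, int ElapsedTime,
                  int wins, int losses, int draws, int Idepth)
{
    return (8192 * (SearchTime - ElapsedTime)) / SearchTime
         + (wins - losses + draws / 2)
         + (10 * (MAX_DEPTH - Idepth)) / MAX_DEPTH;
}
```

Note how the 8192 multiplier dominates: finding a book move early (small ElapsedTime) outweighs the win/loss and depth terms, which matches the post's point that "a large weight is given to time."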
Next Week's Assignment:
- Run the changes against GNU Chess and generate a statistical report.
- Start implementing the clustering algorithm.

Clustering and Conversion Notes
(Posted on Tue, 06 Feb 2007 12:38:54 PST.)
Go through the list of games and divide them into different "clusters".
A cluster will contain all the games that are very "close" to each other.
By close, we mean a game can be converted into another game in the cluster
using only a few moves (possibly one move, or up to five moves).
bookpos is normally used to store all the entries from the book.
static struct hashtype {
    uint16_t wins;
    uint16_t losses;
    uint16_t draws;
    HashType key;
} *bookpos;
bookpos will be extended to contain the "cluster" that this particular
entry is in.
static struct hashtype {
    uint16_t wins;
    uint16_t losses;
    uint16_t draws;
    HashType key;
    char cluster[MAX_CLUSTERNAME];
} *bookpos;
Each entry in the cluster will have the following:
#define MAX_MOVES_PER_TYPE 16
#define MAX_MOVES 4              /* up to 4-move conversions */
typedef struct amove {
    int move;
    int side;
    int hashkey;      /* Hash key after making this move */
    int num_wins;
} amove_t;
struct cluster {
    HashType key;
    /*
     * Information about the "closest" neighbors.
     * Up to 16 different board conversions are stored here.
     */
    int num_moves;    /* number of moves in the board conversion */
    amove_t moves[MAX_MOVES_PER_TYPE][MAX_MOVES];
};
For example, if we are storing "two move" conversions, the array would
look like this:
{
    {
        { 3300, 1, 0x1234, 1 },
        { 3400, 0, 0x2345, 1 }
    },
    {
        { 3301, 1, 0x4567, 1 },
        { 3401, 0, 0x7890, 1 }
    },
    ...
    /* up to 16 entries */
};
Given a board B1, if one move leads to a winning board B2, then
only "one move" conversions will be stored. There may be several other
"one move" conversions that lead to other winning boards; store
only such moves and ignore all "two move" and "three move"
conversions. So, for a given board, only the minimum number of
moves required for a conversion is stored.
The reason is that a conversion with fewer moves has
a better chance of actually reaching the stored game.
For example, if a "5 move conversion" is chosen and the user
plays a different second move, then you cannot get
to that board. But with a single move, you are already on a
stored game. You can then choose the move that had the maximum wins.

White (the user) has made a move, and it is now black's (the computer's)
turn to make a move.
After white makes a move, there is a HashKey for the current board.
Check if this HashKey is in the bookpos[] array.
If this HashKey is found, we have a book entry for
the current game. We need to find the next black move with
the most wins: find the "cluster" from the bookpos[] array
for this HashKey.
In this cluster, find all the "neighbors" of this HashKey
and pick the one with the maximum wins. This will always be a
single-move board conversion, since the current board was
already found.
If the HashKey is not found, the current board is not
present in our stored games. The aim is to find the closest
stored game to the current board. Keep generating legal moves and
check whether any of them matches a stored HashKey. If a match
is found, we have found a board.
Go to the cluster where this board is found and pick the best
possible board from it. The best possible board is the one
with the minimum number of conversion moves and the
highest number of wins.
When the HashKey is not found, several legal moves are generated
to convert into an existing board. Assume 5 moves are required
to do the conversion: B1 W1 B2 W2 B3. Make move B1, and assume
white replies with W1. Now we should be able to play B2; we
should store this and remember that we are working toward a
particular board, rather than generating all the legal
moves again when it is not necessary.
1. If the current board is already present, pick the next
best move from the cluster of "close" games.
2. If the current board is not present, generate the legal
moves needed to convert into an existing board, choose
the best board from the "close" games in that cluster,
and remember the moves so they can be used on the next turn.

Minutes from Nov 28th meeting
(Posted on Tue, 05 Dec 2006 12:45:44 PST.)
Completed since last meeting:
1. GNU Chess vs. GNU Chess environment setup and demo
Used the autoplay chess program (http://www.vanheusden.com/autoplay/),
which connects two xboard/winboard-protocol-compliant chess engines and lets
them play against each other.
GNU Chess is an xboard-compliant chess engine. To be able to
use the autoplay program, GNU Chess has to be started in engine mode with
the xboard option.
At runtime the engine displays its moves in coordinate notation, which
uses only the squares the pieces were on to denote movements.
For example: 1. e2e4 e7e6...
Description of how autoplay works:
autoplay creates a new process for each gnuchess engine, doing a fork/exec for each chess program. Two pipes are created per process.
For example: White_gnuchess.exe: fd1_r and fd1_w
Black_gnuchess.exe: fd2_r and fd2_w
autoplay reads from fd1_r (a move from the white chess engine)
and writes it to fd2_w (the black chess engine's input);
it reads from fd2_r (a move from the black chess engine)
and writes it to fd1_w (the white chess engine's input).
The autoplay process does a select(fd1_r, fd2_r) and reads from whichever file descriptor data has arrived on. Then it writes the data to the other engine.
I added code to autoplay.exe to store all the moves made by the white and black players, so that I can replay the moves on xboard.
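One iteration of the select() relay described above might look like this; the fd naming follows the post (fd1_r/fd1_w for white, fd2_r/fd2_w for black), the fork/exec setup is omitted, and the function name is mine:

```c
#include <unistd.h>
#include <sys/select.h>

/* Wait until either engine has written a move, then copy it to the
 * other engine's input. Returns bytes forwarded, or -1 on error/EOF. */
ssize_t relay_once(int fd1_r, int fd1_w, int fd2_r, int fd2_w)
{
    char buf[256];
    fd_set rfds;
    ssize_t n;
    int maxfd = (fd1_r > fd2_r ? fd1_r : fd2_r) + 1;

    FD_ZERO(&rfds);
    FD_SET(fd1_r, &rfds);
    FD_SET(fd2_r, &rfds);
    if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
        return -1;
    if (FD_ISSET(fd1_r, &rfds)) {          /* white moved: tell black */
        n = read(fd1_r, buf, sizeof buf);
        return n > 0 ? write(fd2_w, buf, n) : -1;
    }
    n = read(fd2_r, buf, sizeof buf);      /* black moved: tell white */
    return n > 0 ? write(fd1_w, buf, n) : -1;
}
```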
2. Worked on extension to PVS algorithm.
During my implementation I came up with the following idea for an extension to the book-move lookup.
Currently the bookpos array stores the following:
bookpos[i].key = HashKey;
bookpos[i].wins;
bookpos[i].losses;
bookpos[i].draws;
Instead, I can store:
bookpos[i].key = HashKey;
bookpos[i].wins;
bookpos[i].losses;
bookpos[i].draws;
bookpos[i].nextmoves[8]  (up to 8 entries, as an example)
nextmoves will contain (move, wins) pairs:
(0x88bd7241, [move1, 1], [move2, 3], [move4, 2])
When the user (white) makes a move, we only need to store black's winning
positions.
Whenever the HashKey for the current board position is found, store the next set of winning black moves in the same bookpos array during startup. Then, when the game starts, book lookups will be faster: we find the matching hash key,
look at bookpos[i].nextmoves[8], and pick the best one.
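A sketch of the extended entry and the "pick the best" step; the field and function names are illustrative, not GNU Chess code:

```c
#include <stdint.h>

#define NEXTMOVES 8

/* Cached continuation: a move and the number of wins it led to. */
struct nextmove { int move; uint16_t wins; };

/* Extended book entry sketched in the post: alongside the win counts,
 * cache up to 8 candidate next moves with their win counts. */
struct bookpos_ext {
    uint64_t key;                /* HashKey */
    uint16_t wins, losses, draws;
    int nmoves;
    struct nextmove nextmoves[NEXTMOVES];
};

/* Pick the cached next move with the most wins; returns 0 when the
 * entry has no cached continuations. */
int best_next_move(const struct bookpos_ext *e)
{
    int i, best = 0;
    uint16_t bw = 0;
    for (i = 0; i < e->nmoves; i++) {
        if (e->nextmoves[i].wins > bw) {
            bw = e->nextmoves[i].wins;
            best = e->nextmoves[i].move;
        }
    }
    return best;
}
```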
To be done by next meeting (Dec 5th):
1. Complete the PVS extension to look up a book move at every search depth.
2. Complete the book-lookup extension to minimize bookpos searches.

Minutes from Nov/14/2006
(Posted on Thu, 16 Nov 2006 08:19:02 PST.)
Assignment: come up with the logic for clustering game boards and its usage.
Finished: in order to manipulate the game database, I had to research how a game database in .PGN format is converted into the .DAT format used by the GNU Chess program.
Here are my findings:
The book code uses lex and yacc to parse the book.pgn file.
PGNReadFromFile() calls yylex().
The book is parsed and BookBuilder() is called for each entry.
bookpos[*] is populated with:
bookpos[i].key = HashKey;
bookpos[i].wins++;
bookpos[i].losses++;
bookpos[i].draws++;
Once the maximum number of entries has been read from the book, they are written to disk as book.dat.
Each record in book.dat contains the hash key, wins, losses and draws.
/*
* This is the record as it will be written in the binary
* file in network byte order. HashType is uint64_t. To
* avoid endianness and padding issues, we do not read or
* write structs but put the values in an unsigned char array.
*/
static unsigned char buf[2+2+2+8];
BookBuilderClose() is called, which calls book_to_buf() in a loop to do that; this writes the contents to book.dat on disk.
From then on, whenever the gnuchess program runs, it just opens book.dat and no parsing is required.
book.dat is loaded into memory when gnuchess starts, and bookpos[*] is repopulated by calling buf_to_book() in a loop.
BookQuery() is called when a book move is searched for to make the next move. It finds the move by a hash lookup, and based on user configuration (such as best move or random move) and the number of wins in that position, it chooses the next move.
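What book_to_buf()/buf_to_book() must do can be sketched as explicit big-endian packing of the 2+2+2+8 byte record; this is a sketch of the layout described above, not GNU Chess's actual code, and the function names are mine:

```c
#include <stdint.h>

/* Record layout: wins, losses, draws (2 bytes each) then the 8-byte
 * hash key, all in network byte order (big-endian). */
static void put16(unsigned char *p, uint16_t v)
{
    p[0] = (unsigned char)(v >> 8);
    p[1] = (unsigned char)(v & 0xff);
}

static void put64(unsigned char *p, uint64_t v)
{
    int i;
    for (i = 0; i < 8; i++)
        p[i] = (unsigned char)(v >> (56 - 8 * i));
}

/* Pack one book record into the 14-byte on-disk buffer. */
void rec_to_buf(unsigned char buf[14], uint16_t wins, uint16_t losses,
                uint16_t draws, uint64_t key)
{
    put16(buf, wins);
    put16(buf + 2, losses);
    put16(buf + 4, draws);
    put64(buf + 6, key);
}

/* Read the hash key back out of a packed record. */
uint64_t buf_key(const unsigned char buf[14])
{
    uint64_t k = 0;
    int i;
    for (i = 0; i < 8; i++)
        k = (k << 8) | buf[6 + i];
    return k;
}
```

Writing byte by byte like this sidesteps both endianness and struct padding, which is exactly why the code comment above says structs are never read or written directly.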
Meeting minutes: discussed a few possible ideas on how to use the clustered game boards to find the next best move during the game.
Assignment to be completed by the next meeting (Nov 28th):
1. Set up a GNU Chess program versus GNU Chess program environment.
2. Modify the PVS algorithm so that at each search depth, moves from book.dat are consulted.
3. Research on Clustering of game boards and its usage.

Minutes from Nov/6/2006
(Posted on Fri, 10 Nov 2006 11:04:48 PST.)
1. Submitted the two deliverables.
2. Explained how the PVS algorithm performs better than plain alpha-beta cutoffs.
Showed the PVS program I wrote to understand PVS cutoffs.
3. At the next meeting I have to present the logic for clustering of board games.

Minutes from Oct/31/2006
(Posted on Wed, 08 Nov 2006 09:59:46 PST.)
1. Discussed the two deliverables:
a. Deliverable 1: Chess Game Databases and GNU Chess Program
b. Deliverable 2: GNU Chess Program's PVS Algorithm
The submission date was moved to Nov 6th because there was not enough information on the PVS algorithm.
I have to show why PVS is better than alpha-beta pruning at the next meeting.

Minutes from Oct/17/2006
(Posted on Tue, 31 Oct 2006 09:48:23 PST.)
1. Worked on pretty-printing the PVS search tree.
2. Was able to explain how alpha-beta pruning is done in a chess program.
3. Was able to explain the output of the PVS algorithm with some examples.
Below is my pretty-printed output, showing tree depths, scores, and alpha/beta values together with the principal variation (PV) node value.
1. Move 1
SANmv : d4
SearchRoot score = 10. Idepth = 1, RootAlpha = 152, RootBeta = 2, RootPV = 3299
SearchRoot score = 61. Idepth = 2, RootAlpha = 85, RootBeta = 65, RootPV = 3299
SearchRoot score = 10. Idepth = 3, RootAlpha = 136, RootBeta = 14, RootPV = 3299
SearchRoot score = 60. Idepth = 4, RootAlpha = 85, RootBeta = 65, RootPV = 3299
SearchRoot score = 10. Idepth = 5, RootAlpha = 135, RootBeta = 15, RootPV = 3299
SearchRoot score = 60. Idepth = 6, RootAlpha = 85, RootBeta = 65, RootPV = 3299
SearchRoot score = 10. Idepth = 7, RootAlpha = 135, RootBeta = 15, RootPV = 3299
SearchRoot score = 34. Idepth = 8, RootAlpha = 85, RootBeta = 65, RootPV = 3299
SearchRoot score = 23. Idepth = 9, RootAlpha = 109, RootBeta = 41, RootPV = 3299
SearchRoot score = 32768. Idepth = 10, RootAlpha = 98, RootBeta = 52, RootPV = 3299
Found good score. DONE.
score = 32768, Idepth = 10, RootAlpha = 98, RootBeta = 52 RootPV = 3299, SANmv : d5
SANmv : d5
COMPUTER move is : d5
Notes:
Initial RootPV = 3299.
The RootPV did not change until depth 10, since the computer did not
find a better move than what it thought initially. So it chose d5.
2. Move 2
SANmv : Qd3
SearchRoot score = 10. Idepth = 1, RootAlpha = 116, RootBeta = 34, RootPV = 4013
SearchRoot score = 32. Idepth = 2, RootAlpha = 65, RootBeta = 85, RootPV = 4013
SearchRoot score = 13. Idepth = 3, RootAlpha = 107, RootBeta = 43, RootPV = 4013
SearchRoot score = 24. Idepth = 4, RootAlpha = 62, RootBeta = 88, RootPV = 4013
SearchRoot score = 12. Idepth = 5, RootAlpha = 99, RootBeta = 51, RootPV = 4013
SearchRoot score = 20. Idepth = 6, RootAlpha = 63, RootBeta = 87, RootPV = 3690
SearchRoot score = 20. Idepth = 7, RootAlpha = 95, RootBeta = 55, RootPV = 3690
SearchRoot score = 24. Idepth = 8, RootAlpha = 95, RootBeta = 55, RootPV = 3690
SearchRoot score = 32768. Idepth = 9, RootAlpha = 99, RootBeta = 51, RootPV = 3690
Found good score. DONE.
score = 32768, Idepth = 9, RootAlpha = 99, RootBeta = 51 RootPV = 3690, SANmv : Nc6
SANmv : Nc6
COMPUTER move is : Nc6
Notes:
Initial RootPV = 4013. The RootPV changed to 3690 partway through, since the
computer found a better move than the initial one (changed from Nf6 to Nc6).
3. Move 3
SANmv : Qxh7
SearchRoot score = 1012. Idepth = 1, RootAlpha = 112, RootBeta = 38, RootPV = 167927
FAIL HIGH: SearchRoot score = 1104. Idepth = 1, RootAlpha = 38, RootBeta = 32767, RootPV = 167927
SearchRoot score = 1053. Idepth = 2, RootAlpha = 1029, RootBeta = 1179, RootPV = 167927
SearchRoot score = 1103. Idepth = 3, RootAlpha = 978, RootBeta = 1128, RootPV = 167927
SearchRoot score = 1049. Idepth = 4, RootAlpha = 1028, RootBeta = 1178, RootPV = 167927
SearchRoot score = 1125. Idepth = 5, RootAlpha = 974, RootBeta = 1124, RootPV = 167927
FAIL HIGH: SearchRoot score = 1133. Idepth = 5, RootAlpha = 1124, RootBeta = 32767, RootPV = 167927
SearchRoot score = 1099. Idepth = 6, RootAlpha = 1058, RootBeta = 1208, RootPV = 167927
SearchRoot score = 1126. Idepth = 7, RootAlpha = 1024, RootBeta = 1174, RootPV = 167927
SearchRoot score = 1130. Idepth = 8, RootAlpha = 1051, RootBeta = 1201, RootPV = 167927
SearchRoot score = 1149. Idepth = 9, RootAlpha = 1055, RootBeta = 1205, RootPV = 167927
SearchRoot score = 32768. Idepth = 10, RootAlpha = 1074, RootBeta = 1224, RootPV = 167927
Found good score. DONE.
score = 32768, Idepth = 10, RootAlpha = 1074, RootBeta = 1224 RootPV = 167927, SANmv : Rxh7
SANmv : Rxh7
COMPUTER move is : Rxh7
Notes:
In this case FAIL HIGH was hit: a score > Beta was returned. The alpha-beta window
had to be readjusted (RootAlpha = 38, RootBeta = 32767) and the search redone in a
regular alpha-beta manner. However, the final move did not change.
4. Move 4
SANmv : Kd1
SearchRoot score = 1252. Idepth = 1, RootAlpha = 1045, RootBeta = 1195, RootPV = 35483
FAIL HIGH: SearchRoot score = 1252. Idepth = 1, RootAlpha = 1195, RootBeta = 32767, RootPV = 35483
SearchRoot score = 1123. Idepth = 2, RootAlpha = 1177, RootBeta = 1327, RootPV = 35483
FAIL LOW: SearchRoot score = 1182. Idepth = 2, RootAlpha = -32767, RootBeta = 1177, RootPV = 35483
SearchRoot score = 1202. Idepth = 3, RootAlpha = 1107, RootBeta = 1257, RootPV = 35483
SearchRoot score = 1194. Idepth = 4, RootAlpha = 1127, RootBeta = 1277, RootPV = 35483
SearchRoot score = 1244. Idepth = 5, RootAlpha = 1119, RootBeta = 1269, RootPV = 35483
SearchRoot score = 1191. Idepth = 6, RootAlpha = 1169, RootBeta = 1319, RootPV = 35483
SearchRoot score = 1228. Idepth = 7, RootAlpha = 1116, RootBeta = 1266, RootPV = 35483
SearchRoot score = 1199. Idepth = 8, RootAlpha = 1153, RootBeta = 1303, RootPV = 35483
SearchRoot score = 32768. Idepth = 9, RootAlpha = 1124, RootBeta = 1274, RootPV = 35483
Found good score. DONE.
score = 32768, Idepth = 9, RootAlpha = 1124, RootBeta = 1274 RootPV = 35483, SANmv : Nxd4
SANmv : Nxd4
COMPUTER move is : Nxd4
Notes:
In this case the FAIL LOW case was hit: a score < Alpha was returned. The alpha-beta window
had to be readjusted (RootAlpha = -32767, RootBeta = 1177) and the search redone in a
regular alpha-beta manner. However, the final move did not change.
After a few more moves…
11. Move 11
SANmv : Ke3
SearchRoot score = 32767. Idepth = 1, RootAlpha = 1506, RootBeta = 1656, RootPV = 35661
Found good score. DONE.
score = 32767, Idepth = 1, RootAlpha = 1506, RootBeta = 1656 RootPV = 35661, SANmv : Qxf2#
SANmv : Qxf2#
Notes:
The computer got a very good score at Idepth = 1; it did not have to
search beyond that.
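The FAIL HIGH / FAIL LOW behavior seen in these logs amounts to aspiration-window re-searching: search with a narrow (alpha, beta) window around the last score, and if the score falls outside it, widen the failed side to infinity and search again. A sketch of just the window handling (the struct and function are mine, not GNU Chess's SearchRoot):

```c
#define INF 32767   /* matches the RootBeta = 32767 seen in the logs */

struct window { int alpha; int beta; };

/* Given the window used and the score returned, produce the window
 * for the re-search. Inside the window, nothing changes. */
struct window rewindow(struct window w, int score)
{
    if (score >= w.beta) {          /* FAIL HIGH: raise the ceiling */
        w.alpha = w.beta;
        w.beta = INF;
    } else if (score <= w.alpha) {  /* FAIL LOW: drop the floor */
        w.beta = w.alpha;
        w.alpha = -INF;
    }
    return w;
}
```

This reproduces the logged windows: the Move 3 fail high turned (alpha, 38) into (38, 32767), and the Move 4 fail low turned (1177, beta) into (-32767, 1177).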

Minutes from Oct/03/2006
(Posted on Fri, 06 Oct 2006 17:44:53 PDT.)
Assignment: finish compiling and modifying the GNU Chess program.
Was able to demo the chess program.
More time was spent understanding the data structures used by GNU Chess and figuring out how it uses the games database and its search algorithms.
Work done with GNU Chess Program:
1. Installed the gnu chess program on a linux machine
configure
make
Ran the gnuchess program successfully.
2. Installed the xboard program on a linux machine
configure
make
Ran the xboard program successfully.
3. Installed the book.pgn used by gnuchess program
Ran the gnuchess program to use the moves from the book.
4. Understood the following:
The chess moves in general
The basic gnu chess datastructures
The concepts of a bitboard (bitmap) and how at any point of the
game we can find out which piece is on which square.
BitBoard[white][pawn]
BitBoard[white][knight]
BitBoard[white][bishop]
BitBoard[white][rook]
BitBoard[white][queen]
BitBoard[white][king]
BitBoard[black][pawn]
BitBoard[black][knight]
BitBoard[black][bishop]
BitBoard[black][rook]
BitBoard[black][queen]
BitBoard[black][king]
Each BitBoard is 64 bits, with one bit per square; a bit is turned on based on where the piece is:
bit 0 = a1, bit 1 = b1, bit 2 = c1, ..., bit 63 = h8
The piece indices run from pawn = 1 to king = 6:
enum Piece { empty, pawn, knight, bishop, rook, queen, king, bpawn };
BitBoard b[2][7]; /* piece/pawn positions by side (0=white, 1=black)
and then by piece (1=pawn..6=king). For example,
b[white][knight] has bits set for every board
position occupied by a White Knight. */
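With that numbering (bit 0 = a1 ... bit 63 = h8, i.e. square = rank * 8 + file), setting and testing squares is simple bit arithmetic; the helper names are mine:

```c
#include <stdint.h>

typedef uint64_t BitBoard;

/* file 0=a..7=h, rank 0..7; square index matches the bit layout above. */
static int square(int file, int rank)
{
    return rank * 8 + file;
}

/* Turn on the bit for a square. */
static void set_piece(BitBoard *b, int sq)
{
    *b |= (uint64_t)1 << sq;
}

/* Test whether a square's bit is set. */
static int has_piece(BitBoard b, int sq)
{
    return (int)((b >> sq) & 1);
}
```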
5. Read the algorithm for how the book is used by gnuchess:
The book is available in book.pgn format.
Convert it to book.dat by running the gnuchess program.
If the book is configured, gnuchess consults the book for a move;
if it finds an appropriate matching move, it uses it.
It reads the book from the file into a memory buffer.
/*
* This is the record as it will be written in the binary
* file in network byte order. HashType is uint64_t.
*/
wins, losses, draws, key
static unsigned char buf[2+2+2+8];
It generates all the moves and filters out illegal ones.
Then it consults the book for a DIGEST_MATCH on the specific key (move).
There may be more than one match; the move it picks
should have at least some number of wins.
How it picks the move is configurable: best, worst, random, and so on.
An example of how it consults the book:
Thinking...
Looking for opening book in book.dat...
Read opening book (book.dat)...
Loading book from book.dat.
151 hash collisions... Opening database: 438 book positions.
In this position, there is one book move:
Nf6(75/3/1/0)
Nf6(76)
Actual book entry:

[Event "?"]
[Site "Monaco"]
[Date "1966.??.??"]
[Round "2"]
[White "Adams, Michael"]
[Black "Piket"]
[Result "0-1"]
[ECO "E14"]
1.d4 Nf6 2.c4 e6 3.Nf3 b6 4.e3 Bb7 5.Nc3 d5 6.cxd5 exd5 7.Bb5+ c6 8.Bd3
Be7 9.O-O O-O 10.b3 Nbd7 11.Bb2 Bd6 12.Qc2 Rc8 13.Rfe1 Re8 14.e4 dxe4 15.
Nxe4 Nxe4 16.Bxe4 Nf6 17.Bf5 Rc7 18.Rxe8+ Qxe8 19.Ne5 c5 20.Qd3 Re7 21.Re1
g6 22.Bh3 Nd5 23.Rf1 Nf4 24.Qd2 Nxh3+ 25.gxh3 h5 26.Nc4 Qd7 27.d5 f6 28.
Nxd6 Qxd6 29.Rd1 g5 30.Qd3 Kf7 31.Qf3 Kg6 32.Qd3+ f5 33.Qc3 Bxd5 34.Qd2
Rd7 35.h4 gxh4 36.Qe3 Qe6 37.Qc3 f4 38.Qd3+ Kh6 39.h3 f3 40.Qd2+ Kh7 41.
Qd3+ Qg6+ 42.Kh2 Qxd3 43.Rxd3 Kg6 44.Rd2 Kf5 45.Rd3 a5 46.a3 Kf4 47.Bc1+
Ke4 48.Re3+ Kf5 49.Rd3 Ke5 50.Bb2+ Kf4 51.Re3 Be4 52.Bc1 Kf5 53.b4 Rd1 54.
Rc3 axb4 55.axb4 cxb4 56.Rc8 b3 57.Rc3 Rxc1 58.Rxc1 b2 59.Re1 b1=Q 0-1

[ W L D [ d4 Nf6] ] = key 75 bookpos

Minutes from Sep/19/2006
(Posted on Fri, 06 Oct 2006 17:39:35 PDT.)
Assignment: come up with a distance metric for clustering board games.
The goal is to create K clusters from a directed acyclic graph of all the board games that are close to each other.
A topological ordering of vertices should be maintained in the graph, such that if there is a path from Vi to Vj, then Vj appears after Vi in the ordering. Of course, the ordering is not necessarily unique.
The graph can be represented as an adjacency list.
Each item in the list represents a move, in the form of a bitmap of the whole board.
For example, the list stores information such as: move M1 is seen in games G1 and G2,
and move M3 is followed by either move M4 or move M8 and is seen in games G4 and G2.
Preliminary idea for clustering board games (further research will be done):
A modified hierarchical clustering algorithm might be best suited to form clusters of game boards.
Clustering process:
1. Start by assigning each board game to its own cluster.
2. Merge the closest pair of board games into one cluster, so that after the merge we have one less cluster.
3. Compute the distance between the new cluster and each of the old clusters.
4. Repeat steps 2 and 3 until all the boards are clustered into a single cluster.
5. Once the hierarchical cluster is formed, it is possible to derive K clusters from this tree by cutting the K-1 longest links.
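The merging steps can be sketched on toy 1-D data with single-link distance (the real implementation would use the board-conversion distance instead); the function and names are mine:

```c
#include <math.h>

/* Agglomerative (bottom-up) clustering over 1-D points. `label[i]`
 * maps each point to a cluster id; stops when `target_k` clusters
 * remain. Returns the number of merges performed. */
int agglomerate(const double *pt, int n, int label[], int target_k)
{
    int i, j, merges = 0, k = n;
    for (i = 0; i < n; i++)
        label[i] = i;                 /* step 1: one cluster per point */
    while (k > target_k) {
        int bi = -1, bj = -1;
        double best = INFINITY;
        /* steps 2-3: find the closest pair in distinct clusters
         * (single-link: distance between closest members) */
        for (i = 0; i < n; i++)
            for (j = i + 1; j < n; j++)
                if (label[i] != label[j] && fabs(pt[i] - pt[j]) < best) {
                    best = fabs(pt[i] - pt[j]);
                    bi = label[i];
                    bj = label[j];
                }
        for (i = 0; i < n; i++)       /* merge cluster bj into bi */
            if (label[i] == bj)
                label[i] = bi;
        merges++;
        k--;                          /* step 4: repeat */
    }
    return merges;
}
```

Running it down to target_k = 1 builds the full hierarchy; stopping at K (or cutting the K-1 longest merges afterward) yields the K clusters mentioned in step 5.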
Distance metric:
Distance between two game boards = the number of moves needed to convert one board into the other.
Process to find the distance:
1. Save each move in a game as a bitmap of the whole board.
2. For each move taken in the second board, find the number of moves (cost/distance) needed in the first board to reach that move.
3. The sum of all the moves = the distance between the two boards.
Further research is needed on the distance metric.

Minutes from Sep/12/2006 meeting with Prof. Pollett
(Posted on Tue, 19 Sep 2006 14:29:24 PDT.)
Discussion of clustering algorithms:
1. Discussed the K-means clustering algorithm (exclusive clustering):
1. Start with an initial number (K) of clusters with some centroids.
2. Group objects based on their distance to the closest centroid.
3. Once all the objects have been assigned, recalculate the positions of the K centroids.
4. Repeat steps 2 and 3 until the centroids no longer move. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.
2. Discussed the fuzzy C-means clustering algorithm:
Fuzzy C-means (FCM) is a method of clustering which allows one piece of data to belong to two or more clusters.
3. Discussed the hierarchical clustering algorithm:
This results in a tree of clusters of objects.
4. The K-means clustering algorithm is the simplest one. For it, we need to come up with a distance function to measure the distance between two game positions. A drawback of this algorithm is that we need to specify the number of clusters ahead of time.
5. Next meeting: discuss the distance algorithm.
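Steps 1-4 of K-means can be sketched on toy 1-D data; this is an illustration of the loop, not a proposal for the actual board metric, and the names are mine:

```c
#include <math.h>

/* One-dimensional K-means: assign each point to its nearest centroid
 * (step 2), recompute centroids (step 3), repeat until the assignments
 * stop changing (step 4). Caller initializes assign[] to -1. */
void kmeans1d(const double *pt, int n, double *cent, int k, int assign[])
{
    int changed = 1, i, c;
    while (changed) {
        changed = 0;
        for (i = 0; i < n; i++) {            /* step 2: nearest centroid */
            int best = 0;
            for (c = 1; c < k; c++)
                if (fabs(pt[i] - cent[c]) < fabs(pt[i] - cent[best]))
                    best = c;
            if (assign[i] != best) {
                assign[i] = best;
                changed = 1;
            }
        }
        for (c = 0; c < k; c++) {            /* step 3: recompute centroid */
            double sum = 0;
            int cnt = 0;
            for (i = 0; i < n; i++)
                if (assign[i] == c) {
                    sum += pt[i];
                    cnt++;
                }
            if (cnt)
                cent[c] = sum / cnt;         /* empty cluster keeps its centroid */
        }
    }
}
```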

