
Monday, February 8, 2010

Result for Neural Network Task 1

The results obtained from the MLP networks with different numbers of hidden neurons are shown in the table below. After 20,000 iterations of training, the best result was the first network, using 10 hidden neurons, with a score of 42%, even though its RMS error was the highest.




A screenshot of its RMS error from NeuralWorks is shown in Figure 3.1 below.
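
As a rough point of reference for the kind of experiment described above, here is a minimal sketch of the same idea in scikit-learn rather than NeuralWorks: a back-propagation MLP trained several times with different hidden-layer sizes. The data arrays and dimensions below are placeholders, since the Task 1 character data is not reproduced in this post.

```python
# Illustrative only: an analogous experiment in scikit-learn, not the original
# NeuralWorks setup.  The bit patterns and labels below are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 24)).astype(float)   # placeholder character bit patterns
y = rng.integers(0, 10, size=200)                       # placeholder class labels

for hidden in (10, 20, 30):                             # compare different hidden-layer sizes
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=20000, random_state=0)
    net.fit(X, y)
    print(hidden, "hidden neurons -> training score:", net.score(X, y))
```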


Sunday, February 7, 2010

Knowledge-Based System Task Four


The first part of this task was to run an agent-based system, as shown in the figure above, using flex agents. Every agent was connected up and registered with the facilitator agent (dvlafac). The whole idea of this agent-based system is to allow the client agent (myagent) to extract information about a particular vehicle from the individual registration offices through the facilitator. The client agent can also add information to a registration office. Four KQML messages were sent out by myagent to test the system. They are shown in the table below with the message codes and performatives used.
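
As an illustration of what one of these KQML messages might look like, here is a small Python sketch that simply formats a message string. The performative, field names and query content are assumptions for illustration, not the actual four messages sent in the coursework.

```python
# Hedged sketch: formatting a KQML-style message from myagent to the
# facilitator (dvlafac).  The performative and content shown are illustrative
# assumptions, not the coursework's actual messages.
def kqml(performative, sender, receiver, content):
    """Build a simple KQML message string."""
    return (f"({performative} :sender {sender} :receiver {receiver} "
            f":language prolog :content \"{content}\")")

# e.g. ask for the details held on a particular (hypothetical) registration mark
print(kqml("ask-one", "myagent", "dvlafac", "vehicle('SF05 XYZ', Office, Details)"))
```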



The second part of this task was to write relations for the newoffice agent to perform validity checking on a registration mark. There are two main relations for performing these checks: validate(Regmark), which performs the validation, and validhere(Regmark), which determines whether the vehicle was registered in Glasgow. The code from Task KBS1 and Task KBS2 was converted into relations and reused in this task. The code in the file GET_MARK.KSL was modified so that the questions and write statements were all taken out. The program was tested in the console and ran successfully. The agent was then started and connected to the network, and some messages were sent by myagent to test the system.
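
The sketch below restates the intent of the two relations in Python, just to make the split between the two checks clear. The regular expression and the Glasgow memory-tag test are my assumptions about the format being checked, not a copy of the KSL code.

```python
# Illustrative Python analogue of the two relations described above; the real
# checks live in the flex/KSL code, so treat these as sketches of the intent.
import re

def validate(regmark: str) -> bool:
    """Rough format check: two letters, two digits, then three letters
    (assumed seven-character mark, e.g. 'AB12CDE')."""
    return re.fullmatch(r"[A-Z]{2}[0-9]{2}[A-Z]{3}", regmark) is not None

def validhere(regmark: str) -> bool:
    """Does a valid mark belong to the Glasgow office?  The leading 'S'
    memory-tag test here is a placeholder assumption."""
    return validate(regmark) and regmark.startswith("S")

print(validate("SB05ABC"), validhere("SB05ABC"))   # example usage
```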














Tuesday, January 26, 2010

Knowledge-Based System Task Three

In this task, I have suggested some alternatives for characters that were misidentified in task NN3. The table below shows some of the number characters that were incorrectly identified as alphabetic characters.


Rules have been written in such a way that if any of the letters in the above table is entered in char3 or char4 (to simulate misidentification), the letter is changed to the corresponding number. I have set the variable mark to become altered if a character has been changed. This then fires the rule altered_mark, and the amended registration mark is printed out to the console.
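
A small Python sketch of the same substitution idea is below. The actual letter-to-digit pairs come from the table above, which is not reproduced here, so the mapping in the code is only a hypothetical example.

```python
# Sketch of the char3/char4 correction described above.  The substitution
# pairs are hypothetical examples, not the table used in the coursework.
SUBSTITUTIONS = {"O": "0", "I": "1", "S": "5", "B": "8"}   # assumed pairs

def correct_mark(mark: str) -> str:
    """Replace misidentified letters in char3 and char4 with digits."""
    chars, altered = list(mark), False
    for i in (2, 3):                       # char3 and char4 (0-based indices)
        if chars[i] in SUBSTITUTIONS:
            chars[i] = SUBSTITUTIONS[chars[i]]
            altered = True
    if altered:                            # analogue of the altered_mark rule firing
        print("Amended registration mark:", "".join(chars))
    return "".join(chars)

correct_mark("ABO8CDE")                    # e.g. 'O' in char3 becomes '0'
```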
Finally, the code for all three tasks was integrated and added to the file GET_MARK.KSL. An overview of how the program works under forward chaining is shown in the flowchart below.



Knowledge-Based System Task Two

In the second task, some rules were written to interpret where and when a vehicle was registered, given a valid registration mark. The first two characters of the mark form the local memory tag, which identifies the region and the DVLA office where the vehicle was registered. The first rule, at_England, identifies from char1 the vehicles registered in England: apart from C and S, any other uppercase letter entered in char1 (other than the invalid characters) indicates that the vehicle was registered in England. If C or S was entered in char1, the vehicle was registered in either Wales or Scotland, and the following eight rules were written to identify the regions and the local offices. Next, rules were written to interpret the age identifier, that is, the third and fourth characters of the registration mark. The rule mar_aug was written to identify the age identifiers from 02 to 49, so char3 must be between 0 and 4. I have written a function get_val(Char) for converting a digit character into its number value. For age identifier 50, a separate rule mar_aug_2050 was written so that char3 does not overlap with the age identifiers from 51 to 59. The rules sep_feb0, sep_feb1 and sep_feb2 were written to identify the age identifiers from 51 to 58, 59, and 60 to 99 respectively.
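
To make the age-identifier logic concrete, here is a Python sketch of the same decoding. get_val mirrors the digit-conversion function mentioned above; the year ranges are my reading of the standard DVLA scheme behind the rules mar_aug, mar_aug_2050 and sep_feb0 to sep_feb2, not text copied from the KSL.

```python
# Sketch of the age-identifier interpretation described above (assumptions
# noted in the lead-in): digits char3/char4 are decoded into a period.
def get_val(ch: str) -> int:
    """Convert a digit character such as '7' into the number 7."""
    return ord(ch) - ord("0")

def age_identifier(char3: str, char4: str) -> str:
    n = 10 * get_val(char3) + get_val(char4)
    if 2 <= n <= 49:          # mar_aug: identifiers 02-49
        return f"March to August {2000 + n}"
    if n == 50:               # mar_aug_2050: kept separate so it does not overlap 51-59
        return "March to August 2050"
    if 51 <= n <= 99:         # sep_feb0/1/2: identifiers 51-58, 59 and 60-99
        return f"September {1950 + n} to February {1951 + n}"
    return "not a valid age identifier (00 and 01 excluded)"

print(age_identifier("0", "5"), "|", age_identifier("5", "5"))
```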

Monday, January 25, 2010

Knowledge-Based System Task One

After recognizing characters using neural networks, the next step was to use a KBS to validate the vehicle registration marks. In this task, a rule-based program was written to determine whether or not the set of characters entered constitutes a valid vehicle registration mark. I have added my flex code to the file GET_MARK.KSL. A total of fifteen rules were written in this task. The first five rules validate that all letters entered are uppercase. For example, the rule range_char1 states that if char1 is outside the range A to Z, a message is printed out indicating that char1 must be an uppercase letter. I have set a variable code to become invalid in every rule that detects an invalid condition. This results in the error message 'INVALID CODE' being printed, as the last rule in this task (error_code) will then fire. The rule invalid_char1 checks for letters that are not allowed in char1, and the same was done for char2, char5, char6 and char7. As for char3 and char4, each must be a digit between 0 and 9. The combination of char3 and char4 must not be 00 or 01, because the years 2000 and 2001 do not appear in the age identifier table for the months March to August.
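
The sketch below gives a Python flavour of what those fifteen rules check. The specific letters disallowed in each position are defined in the flex rules and are not listed in this post, so the INVALID_LETTERS sets here are placeholders.

```python
# Python sketch of the validation described above.  The disallowed-letter sets
# are placeholders; the real sets live in the invalid_char* flex rules.
import string

INVALID_LETTERS = {1: set("IQZ"), 2: set("IQZ"), 5: set("IQ"), 6: set("IQ"), 7: set("IQ")}  # assumed

def valid_mark(mark: str) -> bool:
    """mark holds the seven characters char1..char7, e.g. 'AB12CDE'."""
    if len(mark) != 7:
        print("INVALID CODE")
        return False
    code_valid = True
    for pos in (1, 2, 5, 6, 7):                      # the letter positions
        ch = mark[pos - 1]
        if ch not in string.ascii_uppercase:         # range_char* rules
            print(f"char{pos} must be an uppercase letter")
            code_valid = False
        elif ch in INVALID_LETTERS[pos]:             # invalid_char* rules
            print(f"char{pos} may not be '{ch}'")
            code_valid = False
    if not (mark[2].isdigit() and mark[3].isdigit()) or mark[2:4] in ("00", "01"):
        print("char3 and char4 must be a digit pair other than 00 or 01")
        code_valid = False
    if not code_valid:
        print("INVALID CODE")                        # analogue of the error_code rule
    return code_valid

valid_mark("ab00CDE")                                # example: several rules fire
```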

Sunday, January 24, 2010

Neural Network Task Four

The fourth task involves using a Hopfield network to recognize digit characters. I have created a training file containing only two digit characters, 7 and 8, from the data file IDEALTRN.NNA. These two characters were chosen because they have quite a large Hamming distance; with a large Hamming distance between stored patterns, the network is more likely to perform well. I have also created a testing file containing five examples of each character 7 and 8 from the data file IDEALTST.NNA. A Hopfield network was then created using 48 input neurons, 48 hidden neurons and 48 output neurons, as there are 48 bits in each medium-resolution character. The input and output layers act as buffers, and only the hidden layer does the processing. The network was trained and recalled for one complete pass through the file. The output values were printed in the result file and compared with the input values contained in the training file to see how many outputs were correct. All results were recorded.
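
For reference, here is a minimal Hopfield-style store-and-recall sketch in Python. The two 48-bit patterns are random stand-ins for the ideal '7' and '8' characters, and the synchronous update rule is one simple choice; NeuralWorks' own recall schedule may differ.

```python
# Minimal Hopfield sketch: Hebbian storage of two 48-bit patterns and a
# synchronous recall step.  The patterns are random stand-ins for '7' and '8'.
import numpy as np

rng = np.random.default_rng(1)
patterns = np.where(rng.integers(0, 2, size=(2, 48)) == 1, 1, -1)   # bipolar +/-1

W = sum(np.outer(p, p) for p in patterns) / len(patterns)           # Hebbian weights
np.fill_diagonal(W, 0)                                               # no self-connections

def recall(x, steps=10):
    """Repeatedly threshold the weighted sums, ideally settling on a stored pattern."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

noisy = patterns[0].copy()
noisy[:5] *= -1                            # flip a few bits to simulate recognition noise
print("bits matching stored '7':", int((recall(noisy) == patterns[0]).sum()), "of 48")
```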

I have done the same experiment as above using another pair of digit characters, 1 and 3, with a smaller Hamming distance. It was found that this gave poorer results.

Neural Network Task Three

The third task was to test the RBF network with the full set of characters using medium-resolution data, that is, 0 to 9, A to Z and a space character, giving a total of 37 characters. An RBF network was created with 24 input neurons, 50 prototype neurons and 37 output neurons. This network takes longer to train, learning for 80,000 iterations before the RMS error converges. The network was then improved by increasing the number of prototype neurons. Four different numbers of prototype neurons (50, 70, 100 and 180) were used to run the network, and all results were recorded. As there are more output classes, more prototype neurons are needed for clustering.
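
As a rough analogue of the network described, the sketch below builds an RBF-style classifier in Python: k-means prototypes, Gaussian basis activations, and a linear output layer. The dimensions follow the text (24 inputs, 37 classes), but the data, the basis width and the use of scikit-learn are my assumptions rather than the NeuralWorks setup.

```python
# Hedged RBF sketch: prototype neurons found by k-means, Gaussian activations,
# and a linear output layer.  The training data here is a random placeholder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((370, 24))                       # placeholder 24-feature character data
y = np.repeat(np.arange(37), 10)                # 37 character classes, 10 examples each

n_prototypes = 50                               # try 50, 70, 100 and 180 as in the text
km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(X)

def rbf_features(X, centres, width=1.0):
    """Gaussian activation of each prototype neuron for each input."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf_features(X, km.cluster_centers_)
out = LogisticRegression(max_iter=1000).fit(Phi, y)    # linear output layer
print("training accuracy:", out.score(Phi, y))
```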