Abstract
Inducing concept descriptions in first-order logic is an inherently complex task; heuristics are therefore needed to keep the problem to a manageable size. In this paper we explore the effect of alternative search strategies, including the use of information gain and of a-priori knowledge, on the quality of the acquired relations, understood as the ability to reconstruct the rule used to generate the examples. To this end, an artificial domain has been created in which the experimental conditions can be kept under control, the 'solution' of the learning problem is known, and a perfect theory is available. Another aspect investigated is the impact of more complex description languages, such as those including numerical quantifiers. The results show that the information gain criterion is too greedy to be useful when the concepts have a complex internal structure; this drawback, however, is more or less shared by any purely statistical evaluation criterion. Adding parts of the available domain theory increases the performance level obtained. Similar results have previously been obtained on a number of real applications and on test cases taken from standard machine learning databases.
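The abstract does not spell out which information-gain criterion is meant; a common instance in first-order rule learning is the FOIL-style weighted gain used to score candidate literals. The sketch below is a minimal illustration under that assumption only; the function and parameter names are hypothetical and not taken from the paper.

```python
import math

def foil_gain(p0: int, n0: int, p1: int, n1: int) -> float:
    """FOIL-style weighted information gain (an assumed, common instance
    of the information-gain criterion; not necessarily the paper's exact form).

    p0, n0: positive / negative example tuples covered before adding a literal.
    p1, n1: positive / negative example tuples covered after adding it.
    """
    if p1 == 0:
        return 0.0  # a literal covering no positive tuples gives no gain
    info_before = -math.log2(p0 / (p0 + n0))
    info_after = -math.log2(p1 / (p1 + n1))
    # weight the per-tuple information improvement by the positives retained
    return p1 * (info_before - info_after)

# Example: a literal that keeps 8 of 10 positives and excludes 20 of 30 negatives
print(foil_gain(p0=10, n0=30, p1=8, n1=10))
```

Because the score is driven purely by coverage counts, a greedy search guided by it can discard literals that are individually uninformative but necessary for concepts with a complex internal structure, which is the behaviour the paper reports.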
| Original language | English |
| --- | --- |
| Pages (from-to) | 221-232 |
| Number of pages | 12 |
| Journal | Fundamenta Informaticae |
| Volume | 18 |
| Issue number | 2-4 |
| Publication status | Published - Feb 1993 |
| Published externally | Yes |