ACKNOWLEDGEMENTS
This bachelor's thesis was completed over a period of three months. During
this time, many people made substantial contributions in one way or another,
and I would like to mention them here.
First and foremost, I would especially like to thank my research advisor, Dr. Ha
Quang Thuy, for his invaluable guidance and the tremendous motivation he provided
at every step of this work. His enthusiastic support and untiring interest in the
subject are deeply appreciated. I have gained immensely from his deep technical
insight and thoroughness in problem solving.
Some portions of this thesis have been previously published in the Conference of
Junior Scientists 2002 of Vietnam National University, Hanoi, and I owe thanks to
Dr. Do Van Thanh, M.Sc. Pham Tho Hoan, B.Sc. Phan Xuan Hieu for their valuable
contributions as the co-authors of that paper.
My thanks also go to all of my lecturers at the Faculty of Technology of Vietnam
National University, Hanoi, who provided me with indispensable scientific knowledge
throughout four school years. Special thanks to the following individuals, and many
others not mentioned by name, for their teaching: M.Sc. Le Quang Hieu,
M.Sc. Nguyen Quang Vinh, M.Sc. Nguyen Dinh Viet, M.Sc. Pham Hong Thai, Dr.
Nguyen Tue, M.Sc. Nguyen Nam Hai, M.Sc. Dao Kien Quoc, M.Sc. Le Anh
Cuong, Assoc. Prof. Trinh Nhat Tien, Dr. Dinh Manh Tuong, M.Sc. Vu Ba Duy,
Assoc. Prof. Nguyen Quoc Toan, M.Sc. Ngo Le Minh, Assoc. Prof. Ngo Quoc Tao.
Without the knowledge they equipped me with, this thesis would never have taken shape.
I am particularly grateful to my family for being a source of strength and
encouragement, for giving me the best possible education, and for instilling in me
a thirst for learning.
Last but not least, I thank my girlfriend, Nguyen Thi Thu Thuy, who sacrificed time
and energy so that this work could be completed. I appreciate it, and hope that the
effort has been worthwhile.
ABSTRACT
Over the last several years, the problem of efficiently generating large numbers of
association rules has been an active research topic in the data mining community.
Many different algorithms have been developed with promising results. There are
two current approaches to the association rule mining problem. The first is to mine
the frequent itemsets regardless of their coefficients. The second is to assign
weights to the items to reflect their importance to the users. However, both
rely on the use of a minimum support threshold, which can be confusing. In practice,
we may want to mine the best rules to our knowledge instead of those that satisfy a
certain threshold, especially when that threshold is given by an equation. To overcome
this problem, we introduce the concept of adjustable interestingness and propose a
novel approach to mining association rules based on it. Our algorithm works only
with the most interesting rules, thus significantly reducing the search space by
skipping many uninteresting itemsets and pruning, at an early stage, those that
cannot generate interesting itemsets. Therefore, the total time needed for the
mining is substantially decreased.
TABLE OF CONTENTS
Acknowledgements
Abstract
Table of contents
List of tables and figures
CHAPTER 1: Introduction
1.1. What is data mining?
1.2. Data mining versus query tools
1.3. Mining association rules
1.4. Outline of the thesis
CHAPTER 2: Mining association rules with weighted items
2.1. Introduction
2.2. Problem definition
CHAPTER 3: Mining association rules with adjustable interestingness
3.1. Interestingness and interesting itemsets
3.2. Interestingness constraints
3.3. Motivation behind interesting itemsets and adjustable interestingness
CHAPTER 4: Algorithm for mining association rules with adjustable interestingness (MARAI)
4.1. Motivation
4.2. Preliminaries
4.3. Basic properties of itemset-tidset pairs
4.4. MARAI: Algorithm design and implementation
4.5. Experimental evaluation
CHAPTER 5: Conclusion
References
Appendix
LIST OF TABLES AND FIGURES
Table 1. Database of a stationery store
Table 2. Transactions of a stationery store
Table 3. Itemsets sorted into descending order of their interestingness
Table 4. Itemsets sorted into descending order of their interestingness
Table 5. All interesting itemsets
Table 6. Database characteristics
Figure 1. Example database and frequent itemsets
Figure 2. Example database
Figure 3. The MARAI algorithm
Figure 4. Search process using adjustable interestingness
Figure 5. Performance of the MARAI algorithm on Cosmetic
Figure 6. Performance of the MARAI algorithm on Census
CHAPTER 1
INTRODUCTION
In this chapter, we introduce the concept of data mining and explain why it is
regarded as such an important development. We also present the background of
mining association rules.
1.1. What is data mining?
There is confusion about the exact distinction between the terms 'data mining' and
'knowledge discovery in databases' (KDD). At the first international KDD conference
in Montreal in 1995, it was proposed that the term 'KDD' be used to describe
the whole process of extracting knowledge from data. An official definition of
KDD is: 'the non-trivial extraction of implicit, previously unknown and potentially
useful knowledge from data' [2]. The knowledge that is discovered must be new,
not obvious, and humans must be able to use it for a particular purpose. It was also
proposed that the term 'data mining' be used exclusively for the discovery
stage of the KDD process. The KDD process comprises selection, preprocessing,
transformation, data mining, and interpretation or evaluation. Data mining has
received the most attention, as it is the most significant and most time-consuming
of these steps.
The sudden rise of interest in data mining can partly be explained by the following
factors [2]:
1. In the 1980s, all major organizations built infrastructural databases containing
data about their clients, competitors, and products. These databases form a potential
gold-mine; they contain gigabytes of data with much 'hidden' information that
cannot easily be traced using SQL (Structured Query Language). Data mining
algorithms can find interesting regularities in databases, whereas SQL is just a query
language; it only helps to find data under constraints of what we already know.
2. As the use of networks continues to grow, it will become increasingly easy to
connect databases. Thus, connecting a client's file to a file with demographic data
may lead to unexpected views of the spending patterns of certain population
groups.
3. Over the past few years, machine-learning techniques have expanded enormously.
Neural networks, genetic algorithms and other simple, generally applicable
learning techniques often make it easier to find interesting connections in
databases.
4. The client/server revolution gives the individual knowledge worker access to
central information systems from a terminal on his or her desk.
1.2. Data mining versus query tools
What is the difference between data mining and a normal query environment?
What can a data mining tool do that SQL cannot?
It is important to realize that data mining tools are complementary to query tools.
A data mining tool does not replace a query tool but gives a lot of additional
possibilities [2]. Suppose that we have a large file containing millions of records
that describe customers' purchases in a supermarket. There is a wealth of potentially
useful knowledge that can be found by issuing normal queries, such as 'Who bought
butter and bread last week?', 'Is the profit of this month more than that of last
month?' and so on. There is, however, knowledge hidden in the databases that is
much harder to find using SQL. Examples would be the answers to questions such
as 'What products were often purchased together?', or 'What are the subsequent
purchases after buying a gas cooker?'. Of course, these questions could be
answered using SQL, but proceeding in such a way could take days or months,
while a data mining algorithm could find the answers automatically in
a much shorter time, sometimes even in minutes or a couple of hours. It is said that
if we know exactly what we are looking for, use SQL; but if we know only vaguely
what we are looking for, turn to data mining.
1.3. Mining association rules
There are various kinds of methods to mine information from a database, such
as mining association rules, multi-level data generalization and summarization,
classification, and clustering [4]. The most common is mining association rules.

The problem of mining association rules in databases was first introduced in 1993
by Agrawal [1]. An example of such a rule might be that "90% of customers who
purchase bread and butter also purchase milk and coffee". Since its introduction,
Association Rules Mining (ARM) [1] has become one of the core data mining tasks.
ARM is an undirected or unsupervised data mining technique that works on massive
data and produces clear and understandable results. ARM is aimed at finding
regularities in data.
The following is a formal statement of the problem [1]: Let I = {i1, i2, ..., im} be a set
of literals, called items. A set of items is also called an itemset. An itemset with k
items is called a k-itemset. Let D be a set of transactions, where each transaction
T is a set of items such that T ⊆ I. Associated with each transaction is a unique
identifier, called its TID. We say that a transaction T contains X, a set of some
items in I, if X ⊆ T. The support of an itemset X, denoted σ(X, D), is the
number of transactions in D in which it occurs as a subset. An itemset is frequent or
large if its support is more than a user-specified minimum support (min_sup) value.

An association rule is an implication of the form X ⇒ Y, where X ⊆ I, Y ⊆ I and
X ∩ Y = ∅. X is called the antecedent of the rule, and Y is called the consequent.
The rule X ⇒ Y has support s in the transaction set D if s% of transactions
in D contain both X and Y. That is, the support of the association rule
X ⇒ Y is the probability that X ∪ Y occurs in the set of transactions in the
database D; it is denoted by support(X ∪ Y). The rule X ⇒ Y holds in the transaction
set D with confidence c if c% of transactions in D that contain X also contain Y.
The confidence of the association rule X ⇒ Y is the probability that a transaction
contains Y given that the transaction contains X; mathematically, it is
support(X ∪ Y) / support(X).
Example 1.1. Consider a set of items I = {A, B, C, D, E, F}. Let D be the following
set of four transactions:
TID   Items bought        Frequent pattern   Support
10    A, B, C             {A}                75%
20    A, C                {B}                50%
30    A, D                {C}                50%
40    B, E, F             {A, C}             50%

(Minimum support 50%, minimum confidence 50%)
Figure 1. Example database and frequent itemsets
For the rule A ⇒ C:
support = support({A} ∪ {C}) = 50%
confidence = support({A} ∪ {C}) / support({A}) = 66%
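To make these definitions concrete, the following is a minimal Python sketch of the support and confidence computations; it is our own illustration, not part of the original thesis, and the transaction list reproduces the Figure 1 database.

    def support(itemset, transactions):
        # fraction of transactions containing `itemset` as a subset
        hits = sum(1 for t in transactions if itemset <= t)
        return hits / len(transactions)

    def confidence(antecedent, consequent, transactions):
        # support(X u Y) / support(X) for the rule X => Y
        return (support(antecedent | consequent, transactions)
                / support(antecedent, transactions))

    transactions = [{'A', 'B', 'C'}, {'A', 'C'}, {'A', 'D'}, {'B', 'E', 'F'}]
    print(support({'A', 'C'}, transactions))       # 0.5
    print(confidence({'A'}, {'C'}, transactions))  # 0.666...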
The problem of discovering all association rules can be decomposed into two sub-
problems [1]:
1. Find all sets of items (itemsets) that have transaction support above minimum
support. The support of an itemset is the number of transactions that contain it.
Recall that an itemset is frequent or large if its support is more than a user-specified
minimum support (min_sup) value.
Example 1.2. From the above database, we obtain four frequent itemsets {A}, {B},
{C} and {A, C} with supports of 75%, 50%, 50% and 50% respectively.
2. Use the large itemsets to generate the desired rules. Here is a straightforward
algorithm for this task. For every large itemset l, find all non-empty subsets of l. For
every such subset a, output a rule of the form a ⇒ (l − a) if the ratio of support(l)
to support(a) is at least minconf. We need to consider subsets of l to generate rules
with multiple consequents.
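As an illustration, here is a hedged Python sketch of this step, reusing the support function and transactions from the previous sketch; the helper name rules_from_itemset is ours.

    from itertools import combinations

    def rules_from_itemset(l, transactions, minconf):
        # emit a => (l - a) for every non-empty proper subset a of l
        l = frozenset(l)
        rules = []
        for k in range(1, len(l)):
            for a in combinations(l, k):
                a = frozenset(a)
                conf = support(l, transactions) / support(a, transactions)
                if conf >= minconf:
                    rules.append((set(a), set(l - a), conf))
        return rules

    # With the Figure 1 transactions, {A, C} yields A => C (66%) and C => A (100%).
    print(rules_from_itemset({'A', 'C'}, transactions, 0.5))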
Example 1.3. From the frequent itemset {A, C} found in Example 1.2, we can
generate two rules whose confidences are greater than or equal to the minconf value.

Itemset {A, C}:
rule A ⇒ C: confidence = support({A} ∪ {C}) / support({A}) = 50% / 75% = 66%
rule C ⇒ A: confidence = support({A} ∪ {C}) / support({C}) = 50% / 50% = 100%
As the problem of generating rules from the itemsets in step 2 is straightforward
[1], we will not mention it again in this thesis.
1.4. Outline of the thesis
The remainder of this thesis is organized as follows. In chapter 2, we state the
definition of mining association rules with weighted items. The main aim of this
chapter is to provide a background for the weight-based problems on which we base
our approach. In chapter 3, we describe the main idea of the thesis. A new term,
adjustable interestingness, is also introduced there. After the extensive discussion
of mining association rules with adjustable interestingness in chapter 3, we devote
chapter 4 to the algorithm for it. Experiments on real databases are also described.
Finally, we conclude the thesis with a summary and a discussion of future work.
CHAPTER 2
MINING ASSOCIATION RULES WITH
WEIGHTED ITEMS
In the last chapter, we discussed mining association rules for the unweighted case.
In the following, we introduce the conceptual framework of weights and apply it to
mining association rules. The concept of weight will be used in the coming
chapters.
2.1. Introduction
There have been two approaches to the association rule mining problem. The first
is to mine the frequent itemsets regardless of their coefficients [1, 7]. The second
is to assign weights to the items to reflect their importance to the users.
Some previous works focused on mining frequent itemsets with weighted items [6]
and different supports [5].

The association rules mentioned in the previous chapter are called 'unweighted'
association rules [6], as the items are treated uniformly.
Example 2.1. The following rule is an unweighted binary association rule from [1]:

(Bread = Yes) ⇒ (Ham = Yes)
with support = 60% and confidence = 80%

The rule states that the probability of buying bread and ham together in the set of
transactions is 0.6, and the confidence states that the probability of buying ham,
given that the customer buys bread, is 0.8.
The above rule is an unweighted case. However, in the following cases it is better
to consider the importance of the items or attributes.

For example, the rule

(Income = High) ⇒ (School level = High)

is, in human interpretation, probably more interesting than

(Price = High) ⇒ (Sales = Low)

even if the support of the latter rule is much higher than that of the former.

By using weights, the importance of the attributes or items can be reflected, and
we can mine the rules with interestingness. For example, we can add weights to
the sales transactions where the items are under promotion or carry higher profits.
The unweighted association rules would stay the same as long as the database did
not change; thus they cannot provide a flexible way for the users to adjust the
priority of the rules. Therefore, mining association rules for weighted items was
presented in [6] to resolve this problem.
2.2. Problem definition
Similar to section 1.3, we consider a database with a set of transactions D and a set
of attributes or items I, where each transaction is assigned a transaction identifier
TID. Based on the definitions in section 1.3, the weights and weighted association
rules are defined as follows [6]:

Definition 1. An item weight w, where 0 ≤ w ≤ 1, defines the importance of the
item: 0 indicates the least important item, and 1 denotes the most important item.
For example, if the weight of an itemset X is 0.95, the itemset is important in the
set of transactions D; a weight of 0.1 indicates a less important set.
Definition 2. A weighted association rule (or association rule with weighted items)
has the form X ⇒ Y, where X ⊆ I, Y ⊆ I, X ∩ Y = ∅, and the items in X and Y
are given weights.
Definition 3. The weighted support of the binary weighted rule X ⇒ Y is the
support adjusted by the weights, or mathematically,

wsupport(X, Y) = ( ∑_{j ∈ X ∪ Y} w_j ) × support(X, Y)

where the weights of the items {i1, i2, ..., im} are {w1, w2, ..., wm} respectively.
In order to find the interesting rules, two thresholds, the minimum weighted support
(wminsup) and the minimum confidence (minconf), must be specified.

Definition 4. An itemset X is called a large weighted itemset if the weighted
support of the itemset X is greater than or equal to the weighted support threshold,
or mathematically, wsupport(X) ≥ wminsup.

Definition 5. A weighted association rule X ⇒ Y is called an interesting rule if
the confidence of the itemset (X ∪ Y) is greater than or equal to a minimum
confidence threshold, and (X ∪ Y) is a large weighted itemset.
Product ID   Item       Average Profit   Weight   ...
1            Eraser     100              0.1      ...
2            Ball-pen   200              0.2      ...
3            Notebook   300              0.3      ...
4            Pencil     500              0.5      ...
5            Pen        1000             1        ...

Table 1. Database of a stationery store
TID   Product IDs      TID   Product IDs
1     1, 4             2     1, 4, 5
3     2, 3, 5          4     3, 5
5     1, 2, 4, 5       6     1, 3, 4
7     3                8     2, 5

Table 2. Transactions of a stationery store
Example 2.2. Suppose the database of a stationery store is as shown in Table 1.
Each item includes its name, average profit and given weight. Table 2 gives the
transaction database; for each transaction, there is a transaction identifier (TID)
and the IDs of the items bought. Suppose there are only 5 items and 8 transactions
in total in the transaction database.

Regardless of the weights, if the value of minsup is set to 0.4, {1, 4} will be a
large itemset, since its support is 50%. However, {1, 4, 5} is not a large itemset, as
it appears only twice in the database.

But if we take the weights of items into account, and the given value of wminsup
is 0.4, {1, 4} will not be a large weighted itemset, since

(0.1 + 0.5) × 4/8 = 0.3 < 0.4

On the contrary, {1, 4, 5} will be a large weighted itemset, since

(0.1 + 0.5 + 1) × 2/8 = 0.4 ≥ 0.4

By the same argument, {5} and {1, 2, 5} will be large weighted itemsets.
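The following Python sketch, an illustration of Definition 3 with names of our own choosing, reproduces these computations from Tables 1 and 2.

    def wsupport(itemset, transactions, weights):
        # (sum of the item weights) * support, per Definition 3
        total_weight = sum(weights[j] for j in itemset)
        hits = sum(1 for t in transactions if itemset <= t)
        return total_weight * hits / len(transactions)

    weights = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.5, 5: 1.0}      # Table 1
    store = [{1, 4}, {1, 4, 5}, {2, 3, 5}, {3, 5},
             {1, 2, 4, 5}, {1, 3, 4}, {3}, {2, 5}]          # Table 2
    print(wsupport({1, 4}, store, weights))     # 0.3 -> not large for wminsup = 0.4
    print(wsupport({1, 4, 5}, store, weights))  # 0.4 -> large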
Although itemset {1, 4} has a greater support than {1, 2, 5}, the latter makes a
greater profit than the former. In this case, we say that itemset {1, 2, 5} is more
interesting than itemset {1, 4}, or that the interestingness of itemset {1, 2, 5} is
greater than that of itemset {1, 4}.
CHAPTER 3
MINING ASSOCIATION RULES WITH ADJUSTABLE INTERESTINGNESS
In this chapter, we introduce a new concept, adjustable interestingness. Furthermore,
a novel approach to mining association rules based on adjustable interestingness is
introduced.
3.1. Interestingness and interesting itemsets
Based on the definitions of weighted itemsets in the previous chapter, we introduce
the definitions of interestingness and interesting itemsets.

Definition 1. The interestingness of an itemset X, denoted interest(X), is the
product of the total weight of its items and the fraction of transactions in which it
occurs as a subset, or mathematically,

interest(X) = ( ∑_{j ∈ X} w_j ) × support(X)

In order to find the interesting itemsets, a threshold, the minimum interestingness
(min_int), must be specified.

Definition 2. An itemset X is called an interesting itemset if the interestingness of
the itemset X is greater than or equal to the interestingness threshold, or
mathematically, interest(X) ≥ min_int.
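A small Python sketch of Definitions 1 and 2, reusing the weights and store data from the previous sketch, purely for illustration:

    def interest(itemset, transactions, weights):
        # interest(X) = (sum of item weights) * support(X), per Definition 1
        total_weight = sum(weights[j] for j in itemset)
        hits = sum(1 for t in transactions if itemset <= t)
        return total_weight * hits / len(transactions)

    print(interest({5}, store, weights))     # 0.625
    print(interest({2, 5}, store, weights))  # 0.45

    # Definition 2: keep only itemsets whose interestingness reaches min_int
    candidates = [{5}, {2, 5}, {1, 4}, {1}]
    interesting = [s for s in candidates
                   if interest(s, store, weights) >= 0.3]  # [{5}, {2, 5}, {1, 4}]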
Example 3.1. From the database in Tables 1 and 2, we can calculate the
interestingness of the itemsets as in the following table. The itemsets are sorted
into descending order of their interestingness.
Itemset     W*    S*      I*
{5}         1     62.5%   0.625
{2, 5}      1.2   37.5%   0.45
{1, 4, 5}   1.6   25%     0.4
{4, 5}      1.5   25%     0.375
{3, 5}      1.3   25%     0.325
{1, 4}      0.6   50%     0.3
{1, 5}      1.1   25%     0.275
{4}         0.5   50%     0.25
{2, 4, 5}   1.7   12.5%   0.2125
{2, 3, 5}   1.5   12.5%   0.1875
{1, 2, 5}   1.3   12.5%   0.1625
{3}         0.3   50%     0.15
{1, 3, 4}   0.9   12.5%   0.1125
{1, 2, 4}   0.8   12.5%   0.1
{3, 4}      0.8   12.5%   0.1
{2, 4}      0.7   12.5%   0.0875
{2}         0.2   37.5%   0.075
{2, 3}      0.5   12.5%   0.0625
{1}         0.1   50%     0.05
{1, 3}      0.4   12.5%   0.05
{1, 2}      0.3   12.5%   0.0375

* W, S and I are acronyms for Weight, Support and Interestingness, respectively.

Table 3. Itemsets sorted into descending order of their interestingness
If the value of min_int is 0.3, we obtain six interesting itemsets: {5},
{2, 5}, {1, 4, 5}, {4, 5}, {3, 5}, {1, 4}. Of these six interesting itemsets, five
contain item 5, which represents pens. This shows that the interestingness of an
itemset is made up of both its weight and its support.
3.2. Interestingness constraints
By sorting the itemsets into descending order of their interestingness, we have two
different ways to mine the most interesting itemsets. The first is to set a threshold
for minimum interestingness, or min_int. In example 3.1, when the min_int value is
set to 0.3, there are six most interesting itemsets found in the database; that is,
there are only six itemsets whose interestingness is greater than or equal to 0.3.
Since the number of itemsets found is unpredictable, this approach may become
cumbersome as min_int is lowered towards 0.
In this thesis, we present an alternative way to mine the most interesting itemsets,
in which min_int is adjusted throughout the mining process. The term constraint
is defined as the number of itemsets we desire to mine, and it must be specified.
From example 3.1, if the constraint value is set to 5, we mine the five most
interesting itemsets, whose interestingness is 0.325 or over; the min_int value is
therefore adjusted to 0.325 afterwards. Similarly, if the constraint is 10, min_int is
adjusted to 0.1875, since the interestingness of the ten most interesting itemsets is
greater than or equal to 0.1875. It is clear that the greater the constraint, the
smaller the value to which min_int is adjusted.
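The constraint-driven adjustment can be sketched with a min-heap, as below; this is our own illustrative rendering of the idea (reusing the interest function above), not the thesis algorithm itself.

    import heapq

    def top_k_interesting(candidates, transactions, weights, constraint):
        # keep the `constraint` most interesting itemsets seen so far;
        # min_int is always the interestingness of the weakest one kept
        heap, min_int = [], 0.0
        for itemset in candidates:
            score = interest(itemset, transactions, weights)
            if score < min_int:
                continue                  # pruned by the adjusted threshold
            heapq.heappush(heap, (score, sorted(itemset)))
            if len(heap) > constraint:
                heapq.heappop(heap)       # drop the least interesting itemset
                min_int = heap[0][0]      # min_int adjusts upward
        return [set(s) for _, s in sorted(heap, reverse=True)], min_int

Fed the 21 itemsets of Table 3 with constraint = 5, min_int ends at 0.325, as in the discussion above.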
3.3. Motivation behind interesting itemsets and adjustable
interestingness
By using the interestingness of an itemset, we can strike a balance between the two
measures, weight and support. If supports are separated from weights, we can
only find itemsets having sufficient support; however, this may ignore some
interesting knowledge. Special items and special groups of items may be specified
individually and given higher priority. For example, there may be few customers
buying pens, but the profit the pens make is much more than that of other products.
As a matter of course, the store clerk will want to put the pens under promotion
rather than other products. For this reason, the weight, which is a measure of the
importance of an item, is applied.

The interestingness of an item can be computed as the product of weight and
support. Interestingness, in some cases, can be seen as 'the potential usefulness of
the knowledge', but such a notion is difficult to pin down. It is clear that most
end-users are not statisticians; they thus have trouble setting the threshold for
min_int. Posing the query 'Show me the twenty most interesting itemsets' is
definitely more comprehensible than 'Please list itemsets whose interestingness is
greater than or equal to 0.5'.
Furthermore, it is impractical to generate the entire set of interesting itemsets. Our
purpose is to mine only the most interesting ones. Hence, we introduce a new
concept, adjustable interestingness, in this thesis.
Related work

Our past work [5] addressed the problem of mining association rules with different
supports, observing that most proposed algorithms employ the same minimum
support, minsup, to generate itemsets. In some situations this may not be
appropriate: there may be itemsets with supports smaller than the minsup value
that can nevertheless generate more useful rules. By setting a minimum support for
each item, we generate closed sets using a triple minsup-itemset-tidset and then
restrict the number of itemsets to be found, so the search space is fairly reduced.
CHAPTER 4
ALGORITHM FOR MINING ASSOCIATION
RULES WITH ADJUSTABLE INTERESTINGNESS
(MARAI)
The main idea of this thesis, adjustable interestingness, was introduced in the
previous chapter. With it, the meaning of support changes, and the
CHARM algorithm cannot be applied directly. In this chapter, we propose the
MARAI algorithm as a solution. A thorough experimental evaluation indicates that
our algorithm works effectively on large databases.
4.1. Motivation
It may seem that the CHARM algorithm [7] can be adopted for the interestingness-
constraint case. However, the meaning of the support, now called interestingness,
has changed. Therefore, it is not necessarily true that all subsets of a large
weighted itemset are large weighted itemsets.

Example 4.1. Take the database and the set of transactions from example 2.2. Among
all possible itemsets, there are only three large weighted itemsets:
{1, 4, 5}, {5}, and {1, 2, 5}. However, {1, 5} is not a large weighted itemset, even
though it is a subset of both itemset {1, 4, 5} and itemset {1, 2, 5}.
In this situation, a new algorithm, called the MARAI algorithm, is proposed to solve
the above problem. The framework of our proposed algorithm for mining association
rules with adjustable interestingness is similar to the CHARM algorithm, but the
detailed steps contain some significant differences. To begin with, we also mine
only the closed sets [7]. Closed sets are lossless in the sense that they uniquely
determine the set of all frequent itemsets and their exact frequencies. The set of all
closed frequent itemsets can be orders of magnitude smaller than the set of all
frequent itemsets, especially on dense databases. Before introducing the new
algorithm, we will reiterate some concepts presented in previous chapters and
describe the problem setting and preliminaries.
4.2. Preliminaries
In this section, we describe the conceptual framework of closed sets [7]. Let I be a
set of items, and D a database of transactions. Each transaction has a unique
identifier (tid) and contains a set of items. Let T be the set of all tids. A set X ⊆ I is
called an itemset, and a set Y ⊆ T is called a tidset. For convenience, we write an
itemset {A, C, W} as ACW, and a tidset {2, 4, 5} as 245. For an itemset X, we
denote the set of all tids that contain X as a subset by t(X). For a tidset Y, we
denote the set of items appearing in all the tids of Y by i(Y). The notation X × t(X)
refers to an itemset-tidset pair, or IT-pair [7].
DISTINCT BOOK ITEMS                  DATABASE
Item ID   Weight   Description       TID   Itemset
A         0.2      Jane Austen       1     A C T W
C         0.2      Agatha Christie   2     C D W
D         0.3      Conan Doyle       3     A C T W
T         0.4      Mark Twain        4     A C D W
W         0.1      Wodehouse         5     A C D T W
                                     6     C D T

Figure 2. Example database
Consider the database shown in Figure 2. There are five different items, I = {A, C,
D, T, W}, and six transactions, T = {1, 2, 3, 4, 5, 6}. The table on the left shows the
information about the items in a book store: the identification of the items, the
author names, and the given weight of each item. The table on the right shows the
transaction database; for each transaction, there is a transaction identifier and the
set of items that the transaction contains. Suppose there are only 5 items and 6
transactions in total in the transaction database.
The corresponding tidset of ACW, denoted t(ACW), is 1345, since the 4 transactions
1, 3, 4, 5 contain ACW as a subset. The corresponding itemset of 245,
denoted i(245), is CDW, as the set of items {C, D, W} is common to all the tids 2,
4 and 5.

It is worth mentioning that t(X) = ∩_{x ∈ X} t(x), and i(Y) = ∩_{y ∈ Y} i(y).

Example 4.2. t(ACW) = t(A) ∩ t(C) ∩ t(W) = 1345 ∩ 123456 ∩ 12345 = 1345,
and i(245) = i(2) ∩ i(4) ∩ i(5) = CDW ∩ ACDW ∩ ACDTW = CDW.
The support of an itemset X, denoted σ(X), is the number of transactions in D
in which it occurs as a subset [1], i.e., σ(X) = |t(X)| [7]. The weight of an itemset
X, denoted weight(X), is the total weight of the items that the itemset X contains,
i.e., weight(X) = ∑_{j ∈ X} w_j [6].
We use the notation ω(X) to refer to the interestingness of the itemset X. As
described in the previous chapter, interest(X) = ( ∑_{j ∈ X} w_j ) × support(X). We
thus have

ω(X) = σ(X) × weight(X)

Example 4.3. σ(ACW) = 4/6 ≈ 67%, weight(ACW) = 0.2 + 0.2 + 0.1 = 0.5, and
ω(ACW) = 67% × 0.5 ≈ 0.33.
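These operators are easy to express directly; the Python sketch below, our own encoding of the Figure 2 database in a dict, implements t, i and ω as defined above.

    db = {1: set('ACTW'), 2: set('CDW'), 3: set('ACTW'),
          4: set('ACDW'), 5: set('ACDTW'), 6: set('CDT')}   # Figure 2
    w = {'A': 0.2, 'C': 0.2, 'D': 0.3, 'T': 0.4, 'W': 0.1}

    def t(X):
        # tids of all transactions containing X as a subset
        return {tid for tid, items in db.items() if set(X) <= items}

    def i(Y):
        # items common to every tid in the tidset Y
        return set.intersection(*(db[tid] for tid in Y))

    def omega(X):
        # interestingness: sigma(X) * weight(X)
        return (len(t(X)) / len(db)) * sum(w[x] for x in X)

    print(sorted(t('ACW')))               # [1, 3, 4, 5]
    print(''.join(sorted(i({2, 4, 5}))))  # CDW
    print(round(omega('ACW'), 2))         # 0.33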
The table below shows all 31 itemsets, sorted into descending order of their
interestingness.
Itemset   W*    S*     I*
ACTW      0.9   50%    0.45
CT        0.6   67%    0.4
ACT       0.8   50%    0.4
CTW       0.7   50%    0.35
ATW       0.7   50%    0.35
CD        0.5   67%    0.33
ACW       0.5   67%    0.33
CDT       0.9   33%    0.3
AT        0.6   50%    0.3
CDW       0.6   50%    0.3
AC        0.4   67%    0.27
T         0.4   67%    0.27
ACDW      0.8   33%    0.27
TW        0.5   50%    0.25
CW        0.3   83%    0.25
DT        0.7   33%    0.23
ACD       0.7   33%    0.23
ADW       0.6   33%    0.2
C         0.2   100%   0.2
ACDTW     1.2   17%    0.2
AW        0.3   67%    0.2
D         0.3   67%    0.2
DW        0.4   50%    0.2
ACDT      1.1   17%    0.18
ADTW      1     17%    0.17
CDTW      1     17%    0.17
AD        0.5   33%    0.17
ADT       0.9   17%    0.15
DTW       0.8   17%    0.13
A         0.2   67%    0.13
W         0.1   83%    0.08

* W, S and I are acronyms for Weight, Support and Interestingness, respectively.

Table 4. Itemsets sorted into descending order of their interestingness
An itemset is interesting if its interestingness is greater than or equal to a user-
specified minimum interestingness (min_int) value, i.e., if ω(X) ≥ min_int. An
interesting itemset is called closed if there exists no proper superset Y ⊃ X with
σ(X) = σ(Y). The term closed as used in this thesis is similar to the term closed
defined in [3, 7]. The set of interesting closed itemsets is a subset of the
corresponding set of interesting itemsets. This subset is necessary and sufficient to
cover all of the information about the interesting itemsets.
Example 4.4. Let min_int be 0.2. There are 23 interesting itemsets, as follows:

Support    Itemsets
100% (6)   C
83% (5)    CW
67% (4)    CT, CD, ACW, AC, T, AW, D
50% (3)    ACTW, ACT, CTW, ATW, AT, CDW, TW, DW
33% (2)    CDT, ACDW, DT, ACD, ADW
17% (1)    ACDTW

Table 5. All interesting itemsets
We obtain 10 closed itemsets; these are: C, CW, CT, CD,
ACW, ACTW, CDW, CDT, ACDW and ACDTW. As the example shows, if F
denotes the set of interesting itemsets, and C the set of closed ones, then we have
C ⊆ F ⊆ I. Generally, the set C can be orders of magnitude smaller than the
set F, which itself is orders of magnitude smaller than the set of all itemsets I
(especially for dense databases).
4.3. Basic properties of itemset-tidset pairs
The closure of an itemset X, denoted c(X), is the smallest closed set that contains X
[7]. Recall that i(Y) is the set of items common to all the tids in the tidset Y,
while t(X) is the set of tids common to all the items in X. The closure of an itemset X
can be computed by mapping t(X) to its image in the itemset space, i.e.,
c(X) = i ∘ t(X) = i(t(X)) [7]. An itemset X is closed if and only if X = c(X).
The support of an itemset X is also equal to the support of its closure, i.e.,
σ(X) = σ(c(X)).
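Reusing t and i from the sketch above, the closure operator and the closedness test become one-liners; again, this is an illustration, not the thesis' own code.

    def c(X):
        # closure: smallest closed set containing X, computed as i(t(X))
        return i(t(X))

    print(''.join(sorted(c('ACW'))))  # ACW -> ACW is closed
    print(''.join(sorted(c('AC'))))   # ACW -> AC is not closed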
Example 4.5. Since c(ACW) = i(t(ACW)) = i(1345) = ACW, the itemset ACW is
closed.

For any two IT-pairs Xi × t(Xi) and Xj × t(Xj), if Xi ⊆ Xj then t(Xi) ⊇ t(Xj).

Example 4.6. For ACW ⊆ ACTW, t(ACW) = 1345 ⊇ 135 = t(ACTW).
Let Xi × t(Xi) and Xj × t(Xj) be any two IT-pairs. We have four properties of IT-pairs.

Property 1. If t(Xi) = t(Xj), then c(Xi) = c(Xj) = c(Xi ∪ Xj) [7]. It follows that
ω(Xi ∪ Xj) ≥ ω(Xi) and ω(Xi ∪ Xj) ≥ ω(Xj).

If t(Xi) = t(Xj), then obviously i(t(Xi)) = i(t(Xj)), i.e., c(Xi) = c(Xj). Further,
t(Xi) = t(Xj) implies that t(Xi ∪ Xj) = t(Xi) ∩ t(Xj) = t(Xi). We thus have
i(t(Xi ∪ Xj)) = i(t(Xi)), giving us c(Xi ∪ Xj) = c(Xi).

From t(Xi ∪ Xj) = t(Xi) = t(Xj), we have σ(Xi ∪ Xj) = σ(Xi) = σ(Xj). Further,
weight(Xi ∪ Xj) ≥ weight(Xi) and weight(Xi ∪ Xj) ≥ weight(Xj), hence
ω(Xi ∪ Xj) ≥ ω(Xi) and ω(Xi ∪ Xj) ≥ ω(Xj). Note that ω(X) = σ(X) × weight(X).

Example 4.7. Since t(AC) = t(AW) = 1345 and c(AC) = c(AW) = c(ACW) = ACW,
we conclude that ω(AC) = 0.27 ≤ ω(ACW) = 0.33 ≥ ω(AW) = 0.2.

This property implies that we can replace every occurrence of Xi with Xi ∪ Xj,
and we can remove the element Xj from further consideration, since its closure is
identical to the closure of Xi ∪ Xj but it is not as interesting as Xi ∪ Xj.

Property 2. If t(Xi) ⊂ t(Xj), then c(Xi) ≠ c(Xj), but c(Xi) = c(Xi ∪ Xj), and thereby
ω(Xi) ≤ ω(Xi ∪ Xj).

If t(Xi) ⊂ t(Xj), then t(Xi ∪ Xj) = t(Xi) ∩ t(Xj) = t(Xi) ≠ t(Xj), giving us
c(Xi ∪ Xj) = c(Xi) ≠ c(Xj) and ω(Xi) ≤ ω(Xi ∪ Xj).
Example 4.8. From the above database, t(ACD) = 45 ⊂ 1345 = t(ACW). We have
c(ACD) = c(ACDW) = ACDW and ω(ACD) = 0.23 ≤ ω(ACDW) = 0.27.

We will use this observation to replace every occurrence of Xi with Xi ∪ Xj, since
they have identical closures and the interestingness of Xi is less than that of
Xi ∪ Xj. However, since c(Xi) ≠ c(Xj), we cannot remove the itemset Xj, as it may
generate itemsets more interesting than the itemset Xi ∪ Xj.

Property 3. If t(Xi) ⊃ t(Xj), then c(Xi) ≠ c(Xj), but c(Xj) = c(Xi ∪ Xj), giving
ω(Xj) ≤ ω(Xi ∪ Xj).

The argument is similar to Property 2 above.

Property 4. If t(Xi) ≠ t(Xj), then c(Xi) ≠ c(Xj) ≠ c(Xi ∪ Xj) [7]. It follows that
ω(Xi) ≠ ω(Xj) ≠ ω(Xi ∪ Xj).

If t(Xi) ≠ t(Xj), then clearly t(Xi ∪ Xj) = t(Xi) ∩ t(Xj) ≠ t(Xi) ≠ t(Xj), giving
us c(Xi ∪ Xj) ≠ c(Xi) ≠ c(Xj), and ω(Xi ∪ Xj) ≠ ω(Xi) ≠ ω(Xj).

This property means that neither itemset Xi nor itemset Xj can be eliminated; both
lead to different closures and can generate itemsets with different interestingness.
4.4. MARAI: Algorithm design and implementation
In this section, we present MARAI, an algorithm for mining association rules
with adjustable interestingness. The pseudo-code for MARAI appears in Figure 3.
The algorithm starts by setting the min_int value to 0 and initializing the prefix class
[P] to the single itemsets and their tidsets in Lines 1 and 2. The main computation
is performed in MARAI-EXTEND, which returns the set of interesting closed
itemsets C. Based on the adjustable minimum interestingness, it is possible to
generate a summary of the set of interesting closed itemsets; there is then no need
to explicitly specify the minimum support, minsup.
MARAI-EXTEND is responsible for considering each combination of IT-pairs
appearing in the prefix class [P]. For each IT-pair Xi × t(Xi) (Line 5), it combines it
with every other IT-pair Xj × t(Xj) whose support is not less than its own (Line 7).
Each Xi generates a new prefix class [Pi], which is initially empty (Line 6). At Line
8, the two IT-pairs are combined to produce a new pair X × Y, where X = Xi ∪ Xj
and Y = t(Xi) ∩ t(Xj). Line 9 tests which of the four IT-pair properties can be
applied by calling MARAI-PROPERTY. Once all properties have been processed, we
recursively explore the new class [Pi] in a depth-first manner (Line 10). X, an
extension of Xi, is then checked to see whether it is an interesting itemset and not
already subsumed in the closed set C (Line 12). If the interestingness of X is greater
than the min_int value, we insert the itemset X into the set of closed itemsets C
(Line 13). Nevertheless, we only mine the most interesting itemsets, i.e., the number
of itemsets found must not exceed the value of constraint. Thus, if the number of
itemsets in C exceeds the constraint value (Line 14), we eliminate the least
interesting itemset (Line 15) so that the number of itemsets always equals
constraint. The minimum interestingness, min_int, is then set to the interestingness
of the least interesting remaining itemset, i.e., the minimum interestingness has been
adjusted (Line 16).

At this stage, any closed itemset containing Xi has already been generated. We then
return to Line 5 to process the next IT-pair in [P].

The result is that we obtain as many as constraint of the most interesting itemsets.
MARAI(D, constraint):
 1. min_int = 0
 2. [P] = { Xi × t(Xi) : Xi ∈ I ∧ ω(Xi) ≥ min_int }
 3. MARAI-EXTEND([P], C = ∅)
 4. return C  // all interesting closed itemsets

MARAI-EXTEND([P], C):
 5. for each Xi × t(Xi) in [P]
 6.     [Pi] = ∅ and X = Xi
 7.     for each Xj × t(Xj) in [P], with σ(Xj) ≥ σ(Xi)
 8.         X = Xi ∪ Xj and Y = t(Xi) ∩ t(Xj)
 9.         MARAI-PROPERTY([P], [Pi])
10.     if [Pi] ≠ ∅ then MARAI-EXTEND([Pi], C)
11.     delete [Pi]
12.     if ω(X) > min_int and X is not subsumed then
13.         C = C ∪ X
14.         if |C| > constraint then
15.             Remove Xj from C  // ω(Xj) ≤ ω(Xi), 1 ≤ i ≤ constraint
16.             min_int = min{ ω(Xi) | 1 ≤ i ≤ constraint }

MARAI-PROPERTY([P], [Pi]):
17. if ω(X) ≥ min_int then
18.     if t(Xi) = t(Xj) then       // Property 1
19.         Remove Xj from [P]
20.         Replace all Xi with X
21.     else if t(Xi) ⊂ t(Xj) then  // Property 2
22.         Replace all Xi with X
23.     else if t(Xi) ⊃ t(Xj) then  // Property 3
24.         Remove Xj from [P]
25.         Add X × Y to [Pi]
26.     else if t(Xi) ≠ t(Xj) then  // Property 4
27.         Add X × Y to [Pi]
Figure 3. The MARAI algorithm
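The pseudo-code above relies on the IT-pair properties for efficiency. As a correctness baseline only, not the MARAI search itself, the following brute-force Python sketch reproduces its output contract on the Figure 2 database, reusing t, c, omega and w from the earlier sketches: enumerate the closed itemsets, keep the constraint most interesting ones, and report the adjusted min_int.

    from itertools import combinations

    def closed_itemsets():
        items = sorted(w)                    # ['A', 'C', 'D', 'T', 'W']
        out = []
        for k in range(1, len(items) + 1):
            for X in combinations(items, k):
                if t(X) and set(X) == c(X):  # occurs at least once and is closed
                    out.append(''.join(X))
        return out

    def marai_reference(constraint):
        ranked = sorted(closed_itemsets(), key=omega, reverse=True)
        kept = ranked[:constraint]
        return kept, omega(kept[-1])         # itemsets and the adjusted min_int

    kept, min_int = marai_reference(5)
    print(kept, round(min_int, 2))  # five most interesting closed itemsets, 0.3

On this database, it returns the same five itemsets as the walkthrough in Example 4.9 below, up to the tie between CDT and CDW at ω = 0.3.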
[Figure 4 depicts the IT-tree explored by MARAI on the example database.
Root: {}. Level 1: A×1345, C×123456, D×2456, T×1356, W×12345.
Under A (extended to AC×1345, then ACW×1345): ACD×45, ACT×135, then
ACDW×45, ACTW×135, and ACDTW×5. Under C: CD×2456, CT×1356,
CW×12345, with CDT×56, CDW×245, CTW×135 and CDTW×5 below.]

Figure 4. Search process using adjustable interestingness
Example 4.9. Figure 4 shows how MARAI works on our example database. To
begin with, let constraint be 5. We use the pseudo-code in Figure 3 to illustrate the
computation. We initialize the root class as [P] = {A×1345, C×123456, D×2456,
T×1356, W×12345} in Line 2. At Line 5, we first process the node A×1345; it will
be combined with the remaining elements in Line 7.

Xi = A×1345
  Xj = C×123456 → Replace A with AC   // Prop. 2
    → Xi = AC×1345
  Xj = D×2456: Add ACD to [P1]: P1 = {ACD}   // Prop. 4
  Xj = T×1356: Add ACT to [P1]: P1 = {ACD, ACT}   // Prop. 4
  Xj = W×12345: Replace AC with ACW   // Prop. 2
    → Xi = ACW×1345, P1 = {ACDW, ACTW}

We next make a recursive call to MARAI-EXTEND with P1 = {ACDW, ACTW}.

Xi = ACDW×45
  Xj = ACTW×135: Add ACDTW to [P11]: P11 = {ACDTW}   // Prop. 4
MARAI then makes a recursive call to process the class P11 = {ACDTW}. Since
there is only one element, we jump to Line 13, where ACDTW is added to the
interesting closed set C. When we return, ACDW is complete, so ACDW itself is
added to C. We next look at the element ACTW in [P1]. Since it is the last element,
we move to Line 13 and add ACTW to C.
When we process 123456C×=iX , we find that )()( DC tt ⊃ , )()( TC tt ⊃ , and
)()( WC tt ⊃ . Since property 2 applies, we remove D, T, W and add CD, CT,
CW to [ 2P ]
We next make a recursive call to MARAI-EXTEND with class } , ,{2 CWCTCD=P
2456CD×=iX
1356CT×=jX : Add CDT to [ 22P ]: }{22 CDT=P //Prop. 4
12345CW×=jX : Add CDW to [ 22P ]: } ,{22 CDWCDT=P //Prop. 4
A recursive call of MARAI-EXTEND is taken with class } ,{22 CDWCDT=P .
56CDT×=iX
245CDW×=jX : Add CDTW to [ 22P ]: }{22 CDTW=P
Since CDTW×5 is subsumed by ACDTW×5 in C, we discard it. The CDT branch is
fully done, so CDT is added to C.

We then process the element CDW in the class [P22]. Since it is the last element in
[P22], CDW is added to C. We initially set the constraint value to 5, i.e., we only
desire to mine the five most interesting itemsets. Nevertheless, C now contains six
itemsets, so we must prune the least interesting one. Hence, ACDTW is removed
from C, since its interestingness is less than or equal to that of any other itemset in
C. C now has only 5 itemsets: ACDW, ACTW, ACW, CDT, CDW, with
interestingness values of 0.27, 0.45, 0.33, 0.3 and 0.3, respectively. The minimum
interestingness, min_int, is consequently increased from 0 to 0.27.
Since it cannot be extended further, CD is inserted into C. Simultaneously, ACDW
is eliminated from C, and the min_int value is set to 0.3.

We next look at the element CT of the class [P2].

Xi = CT×1356
  Xj = CW×12345: Add CTW to [P23]: P23 = {CTW}   // Prop. 4

CTW×135 is subsumed by ACTW×135 in C, so it is not added to C. Since its
branch has ended, we then add CT to C, deleting CDW at the same time.

The last element, CW, of the class [P2] is processed next. Since the interestingness
of CW is 0.25, less than the value of min_int, it cannot be a generator and is thus
pruned. At this step, no new recursion is made, and the final interesting closed set
C consists of the five bold, uncrossed IT-pairs shown in Figure 4.
The example above shows that MARAI significantly reduces the search space by
skipping many uninteresting itemsets and pruning, at an early stage, those that
cannot generate interesting itemsets. Therefore, the total time needed for the
mining is less than in the non-constraint case.
4.5. Experimental Evaluation
A performance study was carried out for the MARAI algorithm. A series of
experiments was conducted on a 300MHz Pentium PC with 192MB of memory,
running Windows XP. The timing is measured as CPU time calculated from the
built-in timing functions.
Database   # Items   Avg. Length   # Records   Level Searched
Cosmetic   67        29            10123       8
Census     98        15            3140        11

Table 6. Database characteristics
Table 6 shows the characteristics of the real databases used in our experiments: the
number of items, the average transaction length and the number of records in each
database. The table additionally shows the maximum level of search that MARAI
performed to discover the most interesting rules when the constraint value is set to
100.
To initialize the experimental setup, we use a dense database, namely Cosmetic,
which we obtained from a cosmetics retailing store. In generating the weights, we
assume that the weight of an item corresponds to the price of its product. For the
synthetic items, such as product categories, we generate the weights according to
our evaluation of their value to the salespeople. We next map the quantitative
attributes to binary type, each of which is mapped to a range of a set of
consecutive integers [3]. For example, suppose an attribute is the age of customers;
we would expect rules related to five ranges of age: under 20, from 20 to 29, from
30 to 39, from 40 to 49, and 50 or over. After discretization [3], the database
consists of 67 items with more than 10,000 transactions.
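As an illustration of this discretization step (the range labels below are ours), each quantitative value is replaced by one binary range item:

    def age_item(age):
        # map a quantitative age to one of the five binary range items
        if age < 20: return 'age<20'
        if age < 30: return 'age20-29'
        if age < 40: return 'age30-39'
        if age < 50: return 'age40-49'
        return 'age>=50'

    print(age_item(34))  # 'age30-39'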
[Figure 5 plots total time (sec) on the y-axis against the constraint value (0 to 100)
on the x-axis for the Cosmetic database.]

Figure 5. Performance of the MARAI algorithm on Cosmetic
Figure 5 shows how MARAI performs on Cosmetic with increasing constraint
values; all other parameters were kept constant. In this figure, the time shown on
the y-axis is given in seconds. There is a tradeoff between the constraint value
presented on the x-axis and the running time. As can be seen, with constraint = 5,
MARAI takes 19 seconds to produce the five most interesting itemsets. More time
is needed for greater constraints; it takes approximately 3 minutes when the
constraint value is set to 100. However, at the point where many itemsets can
satisfy the threshold, the execution time remains constant. From the figure, the
execution time rises linearly with the constraint value, implying that the complexity
of the algorithm is O(constraint). The interesting rules are listed in more detail in
the Appendix.
[Figure 6 plots total time (sec) against the constraint value (0 to 100) for the
Census database.]

Figure 6. Performance of the MARAI algorithm on Census
In the following experiment, we use a sparse database, namely Census, which has
more items but fewer records than Cosmetic. From Figure 6, we observe that a
similar amount of time is taken, implying that the execution time increases with the
numbers of both items and transactions.
CHAPTER 5
CONCLUSION
In this thesis, we have introduced the concept of adjustable interestingness, making
the mining of association rules interactive for the users. The users can set the
number of interesting itemsets they desire to mine instead of employing a minimum
support value. This is practical and useful, since the support value is much harder
to interpret when weights are applied. Our idea is based on the fact that the
minimum interestingness value may be adjusted during the mining process. Finally,
we proposed an algorithm to solve our problem.
Future work

We have studied the mining of association rules in binary data. However, a
transaction database can be of quantitative type. In the experiments mentioned
above, we transformed the quantitative database to binary type using discretization
[3], which may lose much information. In this case, we could utilize fuzzy sets [8]
to overcome the problem. A fuzzy set is represented by a membership function
which assigns to each value of the attribute a value between 0 and 1 to indicate how
much this value belongs to the fuzzy set [8].
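For instance, a triangular membership function for a hypothetical fuzzy set "middle-aged" might look as follows; the shape and breakpoints are illustrative assumptions, not from the thesis:

    def middle_aged(age):
        # triangular membership: 0 at age 30, rising to 1 at 45, back to 0 at 60
        if 30 <= age <= 45:
            return (age - 30) / 15
        if 45 < age <= 60:
            return (60 - age) / 15
        return 0.0

    print(middle_aged(40))  # ~0.67: age 40 partially belongs to the fuzzy set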
We applied the MARAI algorithm to real databases in the business and census
areas, and it appears to be feasible. Clinical databases have accumulated large
quantities of information about patients and their medical conditions, which could
provide new medical knowledge. We are currently in the process of applying our
algorithm to the real clinical databases of a hospital. We observe that
interestingness plays a significant role in mining useful rules there, in that a certain
disease usually has many symptoms, each of which has a different importance,
thus calling for adjustable interestingness.
REFERENCES
[1] R. Agrawal, T. Imielinski, and A. Swami, 'Mining association rules between
sets of items in large databases'. In Proc. of the ACM SIGMOD Conference on
Management of Data, Washington D.C., May 1993.
[2] P. Adriaans, D. Zantinge, 'Data Mining', Addison-Wesley, 1999.
[3] J. Han, M. Kamber, 'Data Mining: Concepts and Techniques', University of
Illinois, 2002.
[4] U. Fayyad, S. Chaudhuri, P. Bradley, 'Data mining and its role in database
systems', 1999.
[5] D. V. Thanh, P. T. Hoan, P. X. Hieu, N. T. Trung, 'Khai phá luật kết hợp với
độ hỗ trợ không giống nhau' [Mining association rules with different supports],
Conference of Junior Scientists of Vietnam National University, Hanoi, pages
475-483, 2002.
[6] C. H. Cai, 'Mining association rules with weighted items', Master's thesis,
Chinese University of Hong Kong, 1998.
[7] M. J. Zaki, C. J. Hsiao, 'CHARM: An efficient algorithm for closed itemset
mining', 2002.
[8] L. A. Zadeh, 'Fuzzy sets', Information and Control, 338-353, 1965.
APPENDIX
The following rules were mined from the experiments. We set the constraint value
to 100, so we mine only the hundred most interesting itemsets. From these itemsets,
we extract a number of interesting rules, sorted into descending order of the
interestingness of their itemsets. We assume that the confidence threshold for all
rules is 0.75 in Cosmetic, i.e., the confidences of all rules are greater than or equal
to 75%. A threshold of 0.7 is applied in Census.
Rules extracted from the database Cosmetic
(a) 98.6% of customers buying the Geo. Nature Anti Trouble Cream product are women.
(b) 98.6% of customers buying the Geo. Nature Anti Trouble Skin Emulsion product are women.
(c) 99.1% of customers buying the Geo. UV White Essence product are women.
(d) 98.7% of customers buying the Geo. Loose Finish Powder product are women.
(e) 97.8% of customers buying the Geo. Nature Fluid Serum product are women.
(f) 70.0% of customers buying the Geo. UV White Essence product are from 20 to 29 years of age.
(g) 97.9% of customers buying the Geo. Nature Firming Eye Cream product are women.
(h) 98.0% of customers buying the Geo. White Serum product are women.
(i) 100% of customers buying the Geo. White Serum product and spending more than 1,000,000
VND are women.
(j) 89% of customers in summer are women.
(k) 85.8% of customers buying the For Men category are men.
(l) 98.5% of customers buying the Geo UV White category buy the Geo. White Serum product.
(m) 86.6% of male customers spend fewer than 500,000 VND.
(n) 76.9% of customers buying the Geo. White Water Toner product spend more than 1,000,000 VND.
Rules extracted from the database Census
(a) 87.8% of districts: State = Texas ⇒ Population = High
(b) 89.5% of districts: White = Low ⇒ Population = High
(c) 73.2% of districts: State = Texas ⇒ White = Medium
(d) 74% of districts: State = Texas ⇒ Male = Medium
(e) 70.5% of districts: Hispanic = High ⇒ White = Low
(f) 97.7% of districts: Income = High ⇒ Population = High
(g) 70.4% of districts: Income = Medium ⇒ White = Low
(h) 98.8% of districts: College Graduate = High ⇒ Population = High
(i) 76.7% of districts: College Graduate = High ⇒ Population = High and Hispanic = High
(j) 72.1% of districts: White = Low ⇒ Density = Low
(k) 85.3% of districts: State = Illinois ⇒ White = Low
(l) 93.1% of districts: State = Georgia ⇒ Population = High
(m) 91.9% of districts: State = New York ⇒ Density = Medium
(n) 84.3% of districts: State = Illinois ⇒ High School Graduate = Medium