Journal of Computer Science and Cybernetics, V.32, N.1 (2016), 30–44
DOI: 10.15625/1813-9663/32/1/5984
SYMBOLIC COMPUTATIONAL MODELS FOR INTUITIONISTIC
LINGUISTIC INFORMATION
PHAM HONG PHONG1, BUI CONG CUONG2
1Faculty of Information Technology, National University of Civil Engineering;
phphong84@yahoo.com
2Institute of Mathematics, Vietnam Academy of Science and Technology; bccuong@gmail.com
Abstract. In 2014, the notion of intuitionistic linguistic labels was first introduced. In this paper,
we develop two symbolic computational models for intuitionistic linguistic labels. Various operators
are proposed, and their properties are also examined. Then, an application to group decision making
using intuitionistic linguistic preference relations is discussed.
Keywords. linguistic aggregation operator, linguistic symbolic computational model, intuitionistic
linguistic label, group decision making, linguistic preference relation
1. INTRODUCTION
1.1. Group decision making problem under linguistic information
Decision making is the process of choosing one or more alternatives from a set of candidate alternatives based on the
assessments given by decision makers (DMs). The uncertainty and fuzziness of human thought
lead to decision making with linguistic information in a wide variety of practical problems. In 2000,
Herrera and Herrera-Viedma [14] proposed a solution scheme for solving group decision making
(GDM) problems under linguistic information:
(1) Specification of the linguistic term set with its semantics. In this step, the
linguistic variable [30] or linguistic expression domain with a semantics is established to provide
the evaluations of the alternatives according to the different criteria.
(2) Choice of the appropriate aggregation operators of linguistic information.
This step depends on the characteristics of the problem and how we represent linguistic terms
of the linguistic variable.
(3) Choice of the best alternative(s). It consists of two phases:
(a) Aggregation phase: It consists of obtaining overall linguistic assessments on the
alternatives by aggregating the assessments provided according to all the criteria by
means of the chosen aggregation operator of linguistic information.
(b) Exploitation phase: It consists of establishing a rank ordering of the alternatives according
to the overall assessments and choosing the best alternative(s).
Many operators for aggregating linguistic information have been developed. They can be classified
into four groups [15, 16]:
(1) Linguistic computational model based on membership functions (type-1 fuzzy sets). Linguistic
terms are seen as fuzzy numbers. Using the Extension Principle, the result of the aggregation
is also a fuzzy number. An approximation function is then used to associate this result
with a particular label.
(2) Linguistic computational model based on type-2 fuzzy sets. This model improves on the
previous one by replacing the type-1 representation with a type-2 representation. In this
approach, the computational processes make use of interval type-2 fuzzy sets (a particular
kind of type-2 fuzzy set), which maintain the uncertainty modelling while keeping the computational
cost lower than that of general type-2 fuzzy sets.
(3) Linguistic symbolic computational models based on ordinal scales. This model has been
widely used in computing with words due to its simplicity and high interpretability. It has two
characteristics:
• A finite and totally ordered discrete label set is used to represent the information;
• The operators, according to this model, perform calculations on the indices of the lin-
guistic labels.
(4) Linguistic symbolic computational models based on 2-tuple representation. Linguistic information
is represented as a pair of values, called a linguistic 2-tuple, (s, α), where s is a linguistic
label and α is a numeric value representing a Symbolic Translation. This model makes calculations
with linguistic labels easy and avoids loss of information.
Tanino [19, 20] represented the information about the set of alternatives in three different ways:
as a preference ordering of the alternatives, as a fuzzy preference relation, and as a utility function.
Hence, obtaining a uniform representation of the evaluations must be the first step of the resolution
process of the GDM problem. Chiclana, Herrera and Herrera-Viedma [4] showed that a preference
ordering of the alternatives and a utility function can both be transformed into a preference relation.
Consequently, preference relations have been frequently considered in GDM problems: Delgado et al. [8]
studied situations in which DMs provide their preference information by using linguistic labels,
Xu [26] introduced the concept of intuitionistic preference relations and their application in GDM,
Zhang et al. [31] proposed GDM with 2-tuple intuitionistic fuzzy linguistic preference relations, and Xia and
Xu [21] developed an approach to the GDM problem based on hesitant fuzzy preference relations, among others.
1.2. Intuitionistic linguistic label set
Motivated by Atanassov’s intuitionistic fuzzy set theory [1,2], we defined the intuitionistic linguistic
label [5, 17]. The novel definition is useful in situations when experts’ opinions are given as pairs of
linguistic labels. For each pair of linguistic labels, the first label expresses the membership and the
second expresses the nonmembership (of an element in a set).
For example, the relation between a car x and the set of good cars A can be expressed by two
components: (linguistic) membership component µA (x) = very likely, and (linguistic) nonmem-
bership component νA (x) = impossibly. So the pair (very likely, impossibly) is an intuitionistic
linguistic label expressing this relation.
Some order relations for intuitionistic linguistic information were developed: membership-based
order relation and nonmembership-based order relation [17], score and confidence-based order relation
[5]. Using order relations, intuitionistic linguistic max and min [17], intuitionistic linguistic weighted
median [17], intuitionistic linguistic fuzzy relations and max-min composition of these relations [5]
were introduced.
1.3. The organization of the paper
The rest of the paper is organized as follows. Section 2 gives an overview of linguistic symbolic
computational models as well as the intuitionistic label set. Sections 3 and 4 are devoted to presenting the
contributions of the paper: two linguistic symbolic computational models for intuitionistic linguistic
information. Section 5 proposes a refined algorithm, which uses the new operators, for the GDM problem
under intuitionistic fuzzy linguistic preference relations. Section 6 draws a conclusion.
2. PRELIMINARIES
2.1. Linguistic symbolic computational models
2.1.1. Linguistic symbolic computational model for discrete label set
Yager [28] represented linguistic information using a finite and totally ordered discrete label set:
\[ S = \{s_0, s_1, \ldots, s_g\}, \quad (1) \]
such that $s_i \geq s_j$ iff $i \geq j$. To aggregate linguistic labels, the classical operators (maximum,
minimum and negation) are used:
\[ \max(s_i, s_j) = s_i \text{ if } i \geq j; \quad \min(s_i, s_j) = s_i \text{ if } i \leq j; \quad \mathrm{neg}(s_i) = s_{g-i}. \]
Example 2.1. A set of seven linguistic labels could be [3]:
\[ S = \{s_0 = \text{none},\ s_1 = \text{very low},\ s_2 = \text{low},\ s_3 = \text{medium},\ s_4 = \text{high},\ s_5 = \text{very high},\ s_6 = \text{perfect}\}. \]
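As a quick illustration (this snippet is ours, not part of [28] or [3]), the classical operators of this model act purely on label indices; in Python:

```python
# The ordinal symbolic model: a label s_i is identified with its index i,
# and the classical operators act on those indices.
S = ["none", "very low", "low", "medium", "high", "very high", "perfect"]
g = len(S) - 1                      # here g = 6

def lmax(i, j):                     # max(s_i, s_j) = s_i iff i >= j
    return max(i, j)

def lmin(i, j):                     # min(s_i, s_j) = s_i iff i <= j
    return min(i, j)

def lneg(i):                        # neg(s_i) = s_{g-i}
    return g - i

print(S[lmax(2, 5)], S[lmin(2, 5)], S[lneg(2)])   # very high, low, high
```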
Using these classical operators, many further operators were developed: the ordinal ordered weighted
averaging (OOWA) operator [29], the linguistic weighted disjunction (LWD) and conjunction (LWC)
operators [10], the ordinal hybrid aggregation (OHA) operator [22], and others.
The convex combination of linguistic labels [7] was also employed in this computational model.
It provides a wider range of aggregation operators. An aggregation operator which uses the convex
combination of linguistic labels can be interpreted as [11, 12]:
\[ S^n \xrightarrow{\;C\;} [0, g] \xrightarrow{\;\mathrm{app}(\cdot)\;} \{0, \ldots, g\} \longrightarrow S, \]
where $C$ is an operator which acts directly on the label indices, and $\mathrm{app}(\cdot)$ is an approximation
function used to obtain the index associated with a linguistic label. Some well-known operators of
this type are the linguistic ordered weighted averaging (LOWA1) [9], induced linguistic ordered weighted
averaging (ILOWA1) [10], linguistic weighted averaging (LWA1) [13], and linguistic weighted OWA
(LWOWA) [18] operators, among others.
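The scheme can be pictured as a numeric operator on indices followed by an approximation step. The toy Python sketch below uses a weighted average for $C$ and rounding for $\mathrm{app}(\cdot)$; both choices are only illustrative, and the cited operators instantiate the scheme differently:

```python
# Toy instance of the scheme S^n --C--> [0, g] --app--> {0, ..., g} --> S:
# C is a weighted average of label indices, app(.) is rounding to the nearest index.
def aggregate(indices, weights, g):
    c = sum(w * i for w, i in zip(weights, indices))     # C: value in [0, g]
    return min(g, max(0, round(c)))                      # app: back to an index

# aggregating s_2, s_5, s_3 with weights 0.5, 0.3, 0.2 on a seven-label scale (g = 6)
print(aggregate([2, 5, 3], [0.5, 0.3, 0.2], g=6))        # -> 3, i.e. the label s_3
```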
The finite and totally ordered discrete label set, S, is also described as follows (g is an even
positive integer):
• Subscript-symmetric linguistic evaluation scale [6]:
\[ S = \left\{ s_\alpha \,\middle|\, \alpha = -\frac{g}{2}, \ldots, -1, 0, 1, \ldots, \frac{g}{2} \right\}. \quad (2) \]
• Multiplicative linguistic evaluation scale [23]:
\[ S = \left\{ s_\alpha \,\middle|\, \alpha = \frac{1}{\frac{g}{2}+1}, \ldots, \frac{1}{2}, 1, 2, \ldots, \frac{g}{2}+1 \right\}. \quad (3) \]
• etc.
Similarly to the linguistic evaluation scale in (1), aggregation operators were also developed for the
scales in (2) and (3).
2.1.2. Linguistic symbolic computational model for continuous label set
Xu [27] introduced a computational model which improves the accuracy of linguistic aggregation
by extending the subscript-symmetric linguistic evaluation scale (2) to a continuous one,
$\bar{S} = \{s_\alpha \mid \alpha \in [-t, t]\}$, where $t$ ($t > g$) is a sufficiently large positive integer. If $s_\alpha \in S$, then $s_\alpha$
is called an original linguistic label; otherwise, it is called an extended (or virtual) linguistic label. The classical
operators maximum, minimum and negation are defined similarly to those of $S$.
Example 2.2. [24] The discrete label set $S = \{s_{-3}, s_{-2}, s_{-1}, s_0, s_1, s_2, s_3\}$ (the set of original
linguistic terms) is extended to the continuous set $\bar{S} = \{s_\alpha \mid \alpha \in [-3, 3]\}$. The label $s_{-0.3} \in \bar{S}$, for
example, is a virtual linguistic label.
Operators which aggregate continuous labels were also developed [25]: the linguistic averaging (LA),
linguistic weighted averaging (LWA2), linguistic ordered weighted averaging (LOWA2), and induced
linguistic OWA (ILOWA2) operators, among others.
2.2. Intuitionistic label set
Definition 2.1. [5, 17] An intuitionistic linguistic label is defined as a pair of linguistic labels
$(s_i, s_j) \in S^2$ such that $i + j \leq g$, where $S = \{s_0, s_1, \ldots, s_g\}$ is the linguistic label set; $s_i$ and
$s_j$ respectively define the degree of membership and the degree of nonmembership of an object
in a set. The set of all intuitionistic linguistic labels is denoted by $\tilde{S}$. Two intuitionistic linguistic
labels $\tilde{a} = (s_i, s_j)$ and $\tilde{b} = (s_p, s_q)$ are said to be equal, denoted by $\tilde{a} = \tilde{b}$, if $s_i = s_p$ and $s_j = s_q$.
Definition 2.2. [5] For each $\tilde{a} = (s_i, s_j) \in \tilde{S}$, the score $\mathrm{SC}(\tilde{a})$ and the confidence $\mathrm{CF}(\tilde{a})$ of $\tilde{a}$
are respectively defined as
\[ \mathrm{SC}(\tilde{a}) = i - j; \qquad \mathrm{CF}(\tilde{a}) = i + j. \]
Definition 2.3. [5] For all $\tilde{a}, \tilde{b} \in \tilde{S}$, we define
\[ \tilde{a} \geq \tilde{b} \;\Leftrightarrow\; \mathrm{SC}(\tilde{a}) > \mathrm{SC}(\tilde{b}) \;\text{ or }\; \bigl(\mathrm{SC}(\tilde{a}) = \mathrm{SC}(\tilde{b}) \text{ and } \mathrm{CF}(\tilde{a}) \geq \mathrm{CF}(\tilde{b})\bigr); \qquad \tilde{a} > \tilde{b} \;\Leftrightarrow\; \tilde{a} \geq \tilde{b} \text{ and } \tilde{a} \neq \tilde{b}. \]
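As an illustration only (this sketch is ours, not part of [5]), the score, the confidence and the order relation of Definitions 2.2 and 2.3 translate directly into Python if an intuitionistic linguistic label $(s_i, s_j)$ is stored as the index pair $(i, j)$:

```python
# A label (s_i, s_j) is stored as the index pair (i, j).
def score(a):       # SC(a) = i - j
    return a[0] - a[1]

def confidence(a):  # CF(a) = i + j
    return a[0] + a[1]

def geq(a, b):
    """a >= b  iff  SC(a) > SC(b), or SC(a) = SC(b) and CF(a) >= CF(b)."""
    return (score(a), confidence(a)) >= (score(b), confidence(b))

print(geq((3, 1), (4, 2)))   # equal scores, CF 4 < 6  -> False
print(geq((4, 2), (3, 1)))   # True
```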
Table 1: The set $\tilde{S}$ reordered so that $\tilde{s}_0 \leq \tilde{s}_1 \leq \cdots \leq \tilde{s}_G$ ($G = 27$); within each row the labels are listed by increasing confidence CF

SC = -6: $\tilde{s}_0 = (s_0, s_6)$
SC = -5: $\tilde{s}_1 = (s_0, s_5)$
SC = -4: $\tilde{s}_2 = (s_0, s_4)$, $\tilde{s}_3 = (s_1, s_5)$
SC = -3: $\tilde{s}_4 = (s_0, s_3)$, $\tilde{s}_5 = (s_1, s_4)$
SC = -2: $\tilde{s}_6 = (s_0, s_2)$, $\tilde{s}_7 = (s_1, s_3)$, $\tilde{s}_8 = (s_2, s_4)$
SC = -1: $\tilde{s}_9 = (s_0, s_1)$, $\tilde{s}_{10} = (s_1, s_2)$, $\tilde{s}_{11} = (s_2, s_3)$
SC = 0: $\tilde{s}_{12} = (s_0, s_0)$, $\tilde{s}_{13} = (s_1, s_1)$, $\tilde{s}_{14} = (s_2, s_2)$, $\tilde{s}_{15} = (s_3, s_3)$
SC = 1: $\tilde{s}_{16} = (s_1, s_0)$, $\tilde{s}_{17} = (s_2, s_1)$, $\tilde{s}_{18} = (s_3, s_2)$
SC = 2: $\tilde{s}_{19} = (s_2, s_0)$, $\tilde{s}_{20} = (s_3, s_1)$, $\tilde{s}_{21} = (s_4, s_2)$
SC = 3: $\tilde{s}_{22} = (s_3, s_0)$, $\tilde{s}_{23} = (s_4, s_1)$
SC = 4: $\tilde{s}_{24} = (s_4, s_0)$, $\tilde{s}_{25} = (s_5, s_1)$
SC = 5: $\tilde{s}_{26} = (s_5, s_0)$
SC = 6: $\tilde{s}_{27} = (s_6, s_0)$
Example 2.3. Consider the linguistic label set of seven labels $S = \{s_0, s_1, s_2, s_3, s_4, s_5, s_6\}$. The
corresponding intuitionistic label set is $\tilde{S} = \{(s_i, s_j) \in S^2 \mid i + j \leq 6\}$. Using the relation "$\geq$"
(Definition 2.3), $\tilde{S}$ can be ordered as in Table 1.
Definition 2.4. [5] Let $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ be a collection of intuitionistic linguistic labels in $\tilde{S}$; the max and
min operators are defined as
\[ \max(\tilde{a}_1, \ldots, \tilde{a}_n) = \tilde{b}_1; \qquad \min(\tilde{a}_1, \ldots, \tilde{a}_n) = \tilde{b}_n, \]
where $\tilde{b}_j$ is the $j$-th largest of the $\tilde{a}_i$.
3. AGGREGATION OPERATORS FOR DISCRETE INTUITIONISTIC
LABEL SET
3.1. More on intuitionistic linguistic max-min operators
Let $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ be a collection of intuitionistic linguistic arguments in $\tilde{S}$. In order to find $\max(\tilde{a}_1, \ldots, \tilde{a}_n)$
and $\min(\tilde{a}_1, \ldots, \tilde{a}_n)$, some notations are used:
\[ \arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\{ \tilde{a} \in \{\tilde{a}_1, \ldots, \tilde{a}_n\} \,\middle|\, \mathrm{SC}(\tilde{a}) = \max_{k=1,\ldots,n} \mathrm{SC}(\tilde{a}_k) \right\}; \]
\[ \arg\min{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\{ \tilde{a} \in \{\tilde{a}_1, \ldots, \tilde{a}_n\} \,\middle|\, \mathrm{SC}(\tilde{a}) = \min_{k=1,\ldots,n} \mathrm{SC}(\tilde{a}_k) \right\}; \]
\[ \arg\max{}_{\mathrm{CF}}(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\{ \tilde{a} \in \{\tilde{a}_1, \ldots, \tilde{a}_n\} \,\middle|\, \mathrm{CF}(\tilde{a}) = \max_{k=1,\ldots,n} \mathrm{CF}(\tilde{a}_k) \right\}; \]
\[ \arg\min{}_{\mathrm{CF}}(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\{ \tilde{a} \in \{\tilde{a}_1, \ldots, \tilde{a}_n\} \,\middle|\, \mathrm{CF}(\tilde{a}) = \min_{k=1,\ldots,n} \mathrm{CF}(\tilde{a}_k) \right\}. \]
Theorem 3.1. For all collections of intuitionistic linguistic labels $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ in $\tilde{S}$, we have:
(A1) $\arg\max{}_{\mathrm{CF}}(\arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n))$ contains a unique element $\tilde{a}^*$, and $\max(\tilde{a}_1, \ldots, \tilde{a}_n) = \tilde{a}^*$;
(A2) $\arg\min{}_{\mathrm{CF}}(\arg\min{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n))$ contains a unique element $\tilde{a}_*$, and $\min(\tilde{a}_1, \ldots, \tilde{a}_n) = \tilde{a}_*$.
Proof. Let $\tilde{a}^* \in \arg\max{}_{\mathrm{CF}}(\arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n))$. Writing $\arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n) = \{\tilde{b}_1, \ldots, \tilde{b}_m\}$, we get $\tilde{a}^* \in \arg\max{}_{\mathrm{CF}}(\tilde{b}_1, \ldots, \tilde{b}_m)$. Thus,
$\mathrm{SC}(\tilde{a}^*) = \max\{\mathrm{SC}(\tilde{a}_1), \ldots, \mathrm{SC}(\tilde{a}_n)\}$ and $\mathrm{CF}(\tilde{a}^*) = \max\{\mathrm{CF}(\tilde{b}_1), \ldots, \mathrm{CF}(\tilde{b}_m)\}$. Because each pair
of score and confidence determines a unique intuitionistic linguistic label, $\tilde{a}^*$ is the unique element of
$\arg\max{}_{\mathrm{CF}}(\arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_n))$.
For each $\tilde{a} \in \{\tilde{a}_1, \ldots, \tilde{a}_n\}$, there are two cases:
• If $\tilde{a} \notin \{\tilde{b}_1, \ldots, \tilde{b}_m\}$, then $\mathrm{SC}(\tilde{a}) < \mathrm{SC}(\tilde{a}^*)$.
• If $\tilde{a} \in \{\tilde{b}_1, \ldots, \tilde{b}_m\}$, then $\mathrm{SC}(\tilde{a}) = \mathrm{SC}(\tilde{a}^*)$ and $\mathrm{CF}(\tilde{a}) \leq \mathrm{CF}(\tilde{a}^*)$.
By Definition 2.3, $\tilde{a} \leq \tilde{a}^*$ in both cases. Hence $\max(\tilde{a}_1, \ldots, \tilde{a}_n) = \tilde{a}^*$. The proof of (A2) is similar.
Example 3.1. Consider $\tilde{a}_1 = (s_0, s_4)$, $\tilde{a}_2 = (s_2, s_4)$, $\tilde{a}_3 = (s_3, s_1)$, $\tilde{a}_4 = (s_1, s_5)$, $\tilde{a}_5 = (s_4, s_2)$.
We have:
\[ \max(\tilde{a}_1, \ldots, \tilde{a}_5) = \arg\max{}_{\mathrm{CF}}(\arg\max{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_5)) = \arg\max{}_{\mathrm{CF}}(\tilde{a}_3, \tilde{a}_5) = \tilde{a}_5; \]
\[ \min(\tilde{a}_1, \ldots, \tilde{a}_5) = \arg\min{}_{\mathrm{CF}}(\arg\min{}_{\mathrm{SC}}(\tilde{a}_1, \ldots, \tilde{a}_5)) = \arg\min{}_{\mathrm{CF}}(\tilde{a}_1, \tilde{a}_4) = \tilde{a}_1, \]
since $\tilde{a}_1$ and $\tilde{a}_4$ have the same score while $\mathrm{CF}(\tilde{a}_1) = 4 < 6 = \mathrm{CF}(\tilde{a}_4)$.
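Theorem 3.1 reduces max and min to a lexicographic comparison on (SC, CF). A minimal Python sketch of this reading (our own illustrative code, with labels stored as index pairs) reproduces Example 3.1:

```python
# max / min of intuitionistic linguistic labels via Theorem 3.1:
# compare lexicographically by (score, confidence).
def key(a):
    i, j = a
    return (i - j, i + j)                       # (SC, CF)

def imax(labels):
    return max(labels, key=key)

def imin(labels):
    return min(labels, key=key)

labels = [(0, 4), (2, 4), (3, 1), (1, 5), (4, 2)]   # a1, ..., a5 of Example 3.1
print(imax(labels), imin(labels))                   # (4, 2) and (0, 4), i.e. a5 and a1
```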
3.2. Intuitionistic linguistic aggregation median operators
In this subsection, we define some operators on intuitionistic linguistic labels by using the relation
"$\geq$" (Definition 2.3). From now on, $W = (w_1, \ldots, w_n)$ denotes a weight vector such that $w_i \geq 0$
and $\sum_{i=1}^{n} w_i = 1$.
Definition 3.1. Let $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ be a collection of intuitionistic linguistic labels in $\tilde{S}$, and let
$\{\tilde{b}_1, \ldots, \tilde{b}_n\}$ be a permutation of $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ such that $\tilde{b}_1 \geq \tilde{b}_2 \geq \cdots \geq \tilde{b}_n$. We define:
(1) Intuitionistic linguistic median operator:
\[ \mathrm{iMed}(\tilde{a}_1, \ldots, \tilde{a}_n) = \begin{cases} \tilde{b}_{\frac{n+1}{2}} & \text{if } n \text{ is odd}, \\ \tilde{b}_{\frac{n}{2}} & \text{if } n \text{ is even}. \end{cases} \]
(2) Intuitionistic linguistic weighted median: consider the collection $\{(w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)\}$,
where $W = (w_1, \ldots, w_n)$ is a weight vector and $w_i$ is the weight associated with $\tilde{a}_i$. Let
$\{(u_1, \tilde{b}_1), \ldots, (u_n, \tilde{b}_n)\}$ be the ordered collection of $\{(w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)\}$, where $u_j$ is
the weight associated with the $\tilde{a}_i$ that becomes $\tilde{b}_j$. With $T_i = \sum_{j=1}^{i} u_j$, the intuitionistic linguistic
weighted median (iLWM) operator is defined as
\[ \mathrm{iLWM}((w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)) = \tilde{b}_k, \]
where $k$ is the smallest integer such that $T_k \geq 0.5$.
Example 3.2. Consider $\tilde{a}_1 = (s_0, s_4)$, $\tilde{a}_2 = (s_2, s_4)$, $\tilde{a}_3 = (s_3, s_1)$, $\tilde{a}_4 = (s_1, s_5)$, $\tilde{a}_5 = (s_4, s_2)$,
and $W = (0.2, 0.3, 0.15, 0.22, 0.13)$. Reordering the collection $\{\tilde{a}_1, \tilde{a}_2, \tilde{a}_3, \tilde{a}_4, \tilde{a}_5\}$ in descending
order, we get $\tilde{b}_1 = \tilde{a}_5 > \tilde{b}_2 = \tilde{a}_3 > \tilde{b}_3 = \tilde{a}_2 > \tilde{b}_4 = \tilde{a}_4 > \tilde{b}_5 = \tilde{a}_1$.
• $\mathrm{iMed}(\tilde{a}_1, \ldots, \tilde{a}_5) = \tilde{b}_{\frac{5+1}{2}} = \tilde{b}_3 = \tilde{a}_2 = (s_2, s_4)$.
• Since $u_1 = w_5 = 0.13$, $u_2 = w_3 = 0.15$, $u_3 = w_2 = 0.3$, $u_4 = w_4 = 0.22$, $u_5 = w_1 = 0.2$,
we have $T_1 = u_1 = 0.13$, $T_2 = u_1 + u_2 = 0.28$, $T_3 = u_1 + u_2 + u_3 = 0.58 > 0.5$. Consequently,
$\mathrm{iLWM}((w_1, \tilde{a}_1), \ldots, (w_5, \tilde{a}_5)) = \tilde{b}_3 = \tilde{a}_2$.
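The following Python sketch of iMed and iLWM (again our own illustrative code, with labels as index pairs and the threshold 0.5 of Definition 3.1) reproduces Example 3.2:

```python
# iMed and iLWM (Definition 3.1); labels are index pairs (i, j), compared by (SC, CF).
def key(a):
    return (a[0] - a[1], a[0] + a[1])

def iMed(labels):
    b = sorted(labels, key=key, reverse=True)            # b1 >= b2 >= ... >= bn
    n = len(b)
    return b[(n + 1) // 2 - 1] if n % 2 else b[n // 2 - 1]

def iLWM(weighted_labels):
    pairs = sorted(weighted_labels, key=lambda p: key(p[1]), reverse=True)
    total = 0.0
    for u, b in pairs:                                   # smallest k with T_k >= 0.5
        total += u
        if total >= 0.5:
            return b

a = [(0, 4), (2, 4), (3, 1), (1, 5), (4, 2)]
w = [0.2, 0.3, 0.15, 0.22, 0.13]
print(iMed(a))                     # (2, 4), i.e. a2
print(iLWM(list(zip(w, a))))       # (2, 4), i.e. a2
```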
Theorem 3.2. iLWM is idempotent, bounded, commutative and monotonic, i.e.:
(B1) For all $\tilde{a} \in \tilde{S}$ and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \mathrm{iLWM}((w_1, \tilde{a}), \ldots, (w_n, \tilde{a})) = \tilde{a}; \]
(B2) For all $\tilde{a}_i \in \tilde{S}$ and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \min(\tilde{a}_1, \ldots, \tilde{a}_n) \leq \mathrm{iLWM}((w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)) \leq \max(\tilde{a}_1, \ldots, \tilde{a}_n); \]
(B3) For all $\tilde{a}_i \in \tilde{S}$, every weight vector $W = (w_1, \ldots, w_n)$, and every permutation $\sigma$ of the set $\{1, \ldots, n\}$,
\[ \mathrm{iLWM}((w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)) = \mathrm{iLWM}((w_{\sigma(1)}, \tilde{a}_{\sigma(1)}), \ldots, (w_{\sigma(n)}, \tilde{a}_{\sigma(n)})); \]
(B4) For all $\tilde{a}_i, \tilde{c}_i \in \tilde{S}$ such that $\tilde{a}_i \leq \tilde{c}_i$ for all $i = 1, \ldots, n$, and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \mathrm{iLWM}((w_1, \tilde{a}_1), \ldots, (w_n, \tilde{a}_n)) \leq \mathrm{iLWM}((w_1, \tilde{c}_1), \ldots, (w_n, \tilde{c}_n)). \]
Proof. (B1) and (B2) are straightforward.
(B3) follows from the fact that the $j$-th largest of $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ is equal to the $j$-th largest of
$\{\tilde{a}_{\sigma(1)}, \ldots, \tilde{a}_{\sigma(n)}\}$.
(B4) It is easily shown that the $j$-th largest of $\{\tilde{a}_1, \ldots, \tilde{a}_n\}$ is smaller than or equal to the $j$-th largest of
$\{\tilde{c}_1, \ldots, \tilde{c}_n\}$; hence (B4) holds.
3.3. Convex combination of two intuitionistic linguistic labels
Definition 3.2. Let $\tilde{a}_1, \tilde{a}_2 \in \tilde{S}$. A convex combination of $\tilde{a}_1$ and $\tilde{a}_2$ with associated
weight vector $W = (w_1, w_2)$, denoted by $\mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\}$, is defined as
\[ \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} = \tilde{s}_k, \]
where $k = i + \mathrm{round}[w_1(j - i)]$, $\mathrm{round}(\cdot)$ is the usual round operator, $\tilde{s}_j = \max\{\tilde{a}_1, \tilde{a}_2\}$, and
$\tilde{s}_i = \min\{\tilde{a}_1, \tilde{a}_2\}$ (the subscripts refer to the ordering of $\tilde{S}$ given in Table 1).
For convenience, we also write $\mathrm{iC}\{w_1, w_2, \tilde{a}_1, \tilde{a}_2\}$ for $\mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\}$.
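Since Definition 3.2 works only with the positions of the two labels in the ordering of Table 1, it can be sketched in a few lines of Python (our own code); here a label $\tilde{s}_k$ is represented simply by the integer $k$, and round-half-up stands in for the "usual round operator":

```python
# Convex combination of two intuitionistic linguistic labels (Definition 3.2),
# each given by its position k in the ordered set of Table 1.
def iC(w1, k1, k2):
    j, i = max(k1, k2), min(k1, k2)              # s~_j = max, s~_i = min
    return i + int(w1 * (j - i) + 0.5)           # round half up ("usual" round)

print(iC(0.2, 21, 11))                           # 11 + round(0.2 * 10) = 13
```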
Theorem 3.3. Let $\mathrm{iC}$ be a convex combination with associated weight vector $W = (w_1, w_2)$. We have:
(C1) For all $\tilde{a}_1, \tilde{a}_2 \in \tilde{S}$, $\mathrm{iC}\{1, 0, \tilde{a}_1, \tilde{a}_2\} = \max(\tilde{a}_1, \tilde{a}_2)$ and $\mathrm{iC}\{0, 1, \tilde{a}_1, \tilde{a}_2\} = \min(\tilde{a}_1, \tilde{a}_2)$;
(C2) (Idempotency) For all $\tilde{s}_i \in \tilde{S}$, $\mathrm{iC}\{w_1, w_2, \tilde{s}_i, \tilde{s}_i\} = \tilde{s}_i$;
(C3) (Boundary) For all $\tilde{a}_1, \tilde{a}_2 \in \tilde{S}$, $\min(\tilde{a}_1, \tilde{a}_2) \leq \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} \leq \max(\tilde{a}_1, \tilde{a}_2)$;
(C4) (Monotonicity) For all $\tilde{a}_1, \tilde{a}_2, \tilde{c}_1, \tilde{c}_2 \in \tilde{S}$ such that $\tilde{a}_1 \leq \tilde{c}_1$ and $\tilde{a}_2 \leq \tilde{c}_2$,
\[ \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} \leq \mathrm{iC}\{w_i, \tilde{c}_i, i = 1, 2\}; \]
(C5) (Commutativity) For all $\tilde{a}_1, \tilde{a}_2 \in \tilde{S}$, $\mathrm{iC}\{w_1, w_2, \tilde{a}_1, \tilde{a}_2\} = \mathrm{iC}\{w_1, w_2, \tilde{a}_2, \tilde{a}_1\}$.
Proof. (C1), (C2) and (C5) are trivial.
(C3) Since $\mathrm{round}(\cdot)$ is non-decreasing, for $i \leq j$ we have
\[ i \leq i + \mathrm{round}[w_1(j - i)] \leq i + \mathrm{round}(j - i) = i + (j - i) = j. \]
Thus, $\min(\tilde{a}_1, \tilde{a}_2) \leq \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} \leq \max(\tilde{a}_1, \tilde{a}_2)$.
(C4) Assume that $\tilde{a}_1 = \tilde{s}_i$, $\tilde{a}_2 = \tilde{s}_j$, $\tilde{c}_1 = \tilde{s}_p$ and $\tilde{c}_2 = \tilde{s}_q$ with $i \leq j$. By hypothesis, $i \leq p$ and $j \leq q$.
Case 1: $p \leq q$. We have
\[ (p + w_1(q - p)) - (i + w_1(j - i)) = (1 - w_1)(p - i) + w_1(q - j) \geq 0, \]
that is, $i + w_1(j - i) \leq p + w_1(q - p)$. By the monotonicity of $\mathrm{round}(\cdot)$ (note that $i$ and $p$ are integers, so $i + \mathrm{round}[w_1(j - i)] = \mathrm{round}[i + w_1(j - i)]$ and similarly for $p$),
\[ \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} \leq \mathrm{iC}\{w_i, \tilde{c}_i, i = 1, 2\}. \]
Case 2: $p > q$. We have
\[ \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\} \leq \max(\tilde{a}_1, \tilde{a}_2) = \tilde{s}_j \leq \tilde{s}_q = \min(\tilde{c}_1, \tilde{c}_2) \leq \mathrm{iC}\{w_i, \tilde{c}_i, i = 1, 2\}. \]
Using the convex combination of two intuitionistic linguistic labels, the convex combination of $n$
intuitionistic linguistic labels is defined recursively as follows.
Definition 3.3. A convex combination of $n$ ($n \geq 2$) intuitionistic linguistic labels $\tilde{a}_1, \ldots, \tilde{a}_n$ in
$\tilde{S}$, with associated weight vector $W = (w_1, \ldots, w_n)$, denoted by $\mathrm{iC}^n\{w_k, \tilde{a}_k, k = 1, \ldots, n\}$, is
defined as follows:
(1) $n = 2$: $\mathrm{iC}^2\{w_i, \tilde{a}_i, i = 1, 2\} = \mathrm{iC}\{w_i, \tilde{a}_i, i = 1, 2\}$;
(2) $n > 2$:
\[ \mathrm{iC}^n\{w_k, \tilde{a}_k, k = 1, \ldots, n\} = \mathrm{iC}\left\{w_1, 1 - w_1, \tilde{b}_1, \mathrm{iC}^{n-1}\left\{\frac{w_h}{1 - w_1}, \tilde{b}_h, h = 2, \ldots, n\right\}\right\}, \]
where $\tilde{b}_j$ is the $j$-th largest of the $\tilde{a}_i$.
We also write $\mathrm{iC}^n\{w_1, \ldots, w_n, \tilde{a}_1, \ldots, \tilde{a}_n\}$ for $\mathrm{iC}^n\{w_i, \tilde{a}_i, i = 1, \ldots, n\}$.
3.4. Intuitionistic linguistic ordered weighted averaging operator
Definition 3.4. The intuitionistic linguistic ordered weighted averaging (iLOWA1) operator with
weight vector $W = (w_1, \ldots, w_n)$ is defined as
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_n) = \mathrm{iC}^n\{w_k, \tilde{a}_k, k = 1, \ldots, n\}. \]
Example 3.3. Let $\tilde{a}_1 = (s_0, s_4)$, $\tilde{a}_2 = (s_2, s_4)$, $\tilde{a}_3 = (s_3, s_1)$, $\tilde{a}_4 = (s_1, s_5)$, $\tilde{a}_5 = (s_4, s_2)$
be intuitionistic linguistic labels belonging to the intuitionistic label set given in Example 2.3, and let
$W = (0.2, 0.3, 0.15, 0.22, 0.13)$. From Table 1, $\tilde{a}_1 = \tilde{s}_2$, $\tilde{a}_2 = \tilde{s}_8$, $\tilde{a}_3 = \tilde{s}_{20}$,
$\tilde{a}_4 = \tilde{s}_3$, $\tilde{a}_5 = \tilde{s}_{21}$. So $\tilde{b}_1 = \tilde{s}_{21}$, $\tilde{b}_2 = \tilde{s}_{20}$, $\tilde{b}_3 = \tilde{s}_8$, $\tilde{b}_4 = \tilde{s}_3$, $\tilde{b}_5 = \tilde{s}_2$. We have:
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_5) = \mathrm{iC}\left\{\tfrac{20}{100}, \tfrac{80}{100}, \tilde{s}_{21}, \mathrm{iC}^4\left\{\tfrac{30}{80}, \tfrac{15}{80}, \tfrac{22}{80}, \tfrac{13}{80}, \tilde{s}_{20}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\}\right\}; \]
\[ \mathrm{iC}^4\left\{\tfrac{30}{80}, \tfrac{15}{80}, \tfrac{22}{80}, \tfrac{13}{80}, \tilde{s}_{20}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\} = \mathrm{iC}\left\{\tfrac{30}{80}, \tfrac{50}{80}, \tilde{s}_{20}, \mathrm{iC}^3\left\{\tfrac{15}{50}, \tfrac{22}{50}, \tfrac{13}{50}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\}\right\}; \]
\[ \mathrm{iC}^3\left\{\tfrac{15}{50}, \tfrac{22}{50}, \tfrac{13}{50}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\} = \mathrm{iC}\left\{\tfrac{15}{50}, \tfrac{35}{50}, \tilde{s}_8, \mathrm{iC}^2\left\{\tfrac{22}{35}, \tfrac{13}{35}, \tilde{s}_3, \tilde{s}_2\right\}\right\}. \quad (4) \]
Let us calculate $\mathrm{iC}^2\left\{\tfrac{22}{35}, \tfrac{13}{35}, \tilde{s}_3, \tilde{s}_2\right\}$ and substitute back into (4):
\[ \mathrm{iC}^2\left\{\tfrac{22}{35}, \tfrac{13}{35}, \tilde{s}_3, \tilde{s}_2\right\} = \tilde{s}_{2 + \mathrm{round}[\frac{22}{35}(3-2)]} = \tilde{s}_3; \]
\[ \mathrm{iC}^3\left\{\tfrac{15}{50}, \tfrac{22}{50}, \tfrac{13}{50}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\} = \mathrm{iC}\left\{\tfrac{15}{50}, \tfrac{35}{50}, \tilde{s}_8, \tilde{s}_3\right\} = \tilde{s}_{3 + \mathrm{round}[\frac{15}{50}(8-3)]} = \tilde{s}_5; \]
\[ \mathrm{iC}^4\left\{\tfrac{30}{80}, \tfrac{15}{80}, \tfrac{22}{80}, \tfrac{13}{80}, \tilde{s}_{20}, \tilde{s}_8, \tilde{s}_3, \tilde{s}_2\right\} = \mathrm{iC}\left\{\tfrac{30}{80}, \tfrac{50}{80}, \tilde{s}_{20}, \tilde{s}_5\right\} = \tilde{s}_{5 + \mathrm{round}[\frac{30}{80}(20-5)]} = \tilde{s}_{11}; \]
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_5) = \mathrm{iC}\left\{\tfrac{20}{100}, \tfrac{80}{100}, \tilde{s}_{21}, \tilde{s}_{11}\right\} = \tilde{s}_{11 + \mathrm{round}[\frac{20}{100}(21-11)]} = \tilde{s}_{13}. \]
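The recursion of Definition 3.3 can be checked mechanically. The sketch below (our own code; labels are positions in Table 1, and exact rational weights via the standard fractions module avoid floating-point noise in the rounding steps) reproduces the value $\tilde{s}_{13}$ obtained above:

```python
from fractions import Fraction as F

# iLOWA1 via the recursive convex combination (Definitions 3.3-3.4).  A label
# s~_k is represented by its position k in Table 1.
def iC2(w1, k1, k2):
    j, i = max(k1, k2), min(k1, k2)
    return i + int(w1 * (j - i) + F(1, 2))       # "usual" round, i.e. half up

def iCn(weights, labels):
    b = sorted(labels, reverse=True)             # b1 >= b2 >= ... >= bn
    if len(b) == 2:
        return iC2(weights[0], b[0], b[1])
    rest = [wk / (1 - weights[0]) for wk in weights[1:]]
    return iC2(weights[0], b[0], iCn(rest, b[1:]))

W = [F(20, 100), F(30, 100), F(15, 100), F(22, 100), F(13, 100)]
a = [2, 8, 20, 3, 21]                            # positions of a1, ..., a5 in Table 1
print(iCn(W, a))                                 # 13, i.e. iLOWA1_W(a1, ..., a5) = s~_13
```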
Theorem 3.4. The iLOWA1 operator satisfies the following properties:
(D1) (Idempotency) For all $\tilde{a} \in \tilde{S}$ and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \mathrm{iLOWA1}_W(\tilde{a}, \ldots, \tilde{a}) = \tilde{a}; \]
(D2) (Boundary) For all $\tilde{a}_i \in \tilde{S}$ and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \min_{i=1,\ldots,n}\{\tilde{a}_i\} \leq \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_n) \leq \max_{i=1,\ldots,n}\{\tilde{a}_i\}; \]
(D3) (Monotonicity) For all $\tilde{a}_i, \tilde{c}_i \in \tilde{S}$ such that $\tilde{a}_i \leq \tilde{c}_i$ for all $i = 1, \ldots, n$, and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_n) \leq \mathrm{iLOWA1}_W(\tilde{c}_1, \ldots, \tilde{c}_n); \]
(D4) (Commutativity) For all $\tilde{a}_i \in \tilde{S}$, every permutation $\pi$ over the set of arguments, and every weight vector $W = (w_1, \ldots, w_n)$,
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_n) = \mathrm{iLOWA1}_W(\pi(\tilde{a}_1), \ldots, \pi(\tilde{a}_n)). \]
Proof. Properties (D1), (D2) and (D4) follow from the corresponding properties of the convex
combination of two intuitionistic linguistic labels. We prove (D3) by induction on $n$.
• The claim holds for $n = 2$, because in this case iLOWA1 reduces to iC.
• For $t \geq 2$, suppose that the claim holds for $n = t$, i.e.,
\[ \mathrm{iLOWA1}_{W'}(\tilde{a}'_1, \ldots, \tilde{a}'_t) \leq \mathrm{iLOWA1}_{W'}(\tilde{c}'_1, \ldots, \tilde{c}'_t) \quad (5) \]
for all $\tilde{a}'_i, \tilde{c}'_i \in \tilde{S}$ such that $\tilde{a}'_i \leq \tilde{c}'_i$ for all $i = 1, \ldots, t$, and every weight vector $W' = (w'_1, \ldots, w'_t)$.
Consider $\tilde{a}_i, \tilde{c}_i \in \tilde{S}$ such that $\tilde{a}_i \leq \tilde{c}_i$ for all $i = 1, \ldots, t+1$, and a weight vector $W = (w_1, \ldots, w_{t+1})$.
Without loss of generality, we may assume that $\tilde{a}_1 \geq \cdots \geq \tilde{a}_{t+1}$ and $\tilde{c}_1 \geq \cdots \geq \tilde{c}_{t+1}$. Then,
\[ \mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_{t+1}) = \mathrm{iC}\left\{w_1, 1 - w_1, \tilde{a}_1, \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{a}_h, h = 2, \ldots, t+1\right\}\right\}, \]
and
\[ \mathrm{iLOWA1}_W(\tilde{c}_1, \ldots, \tilde{c}_{t+1}) = \mathrm{iC}\left\{w_1, 1 - w_1, \tilde{c}_1, \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{c}_h, h = 2, \ldots, t+1\right\}\right\}. \]
By (5),
\[ \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{a}_h, h = 2, \ldots, t+1\right\} \leq \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{c}_h, h = 2, \ldots, t+1\right\}. \quad (6) \]
Using the monotonicity of the iC operator, inequality (6) and $\tilde{a}_1 \leq \tilde{c}_1$, we obtain
\[ \mathrm{iC}\left\{w_1, 1 - w_1, \tilde{a}_1, \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{a}_h, h = 2, \ldots, t+1\right\}\right\} \leq \mathrm{iC}\left\{w_1, 1 - w_1, \tilde{c}_1, \mathrm{iC}^t\left\{\frac{w_h}{1 - w_1}, \tilde{c}_h, h = 2, \ldots, t+1\right\}\right\}. \]
Thus, $\mathrm{iLOWA1}_W(\tilde{a}_1, \ldots, \tilde{a}_{t+1}) \leq \mathrm{iLOWA1}_W(\tilde{c}_1, \ldots, \tilde{c}_{t+1})$, and (D3) is proved.
4. AGGREGATION OPERATORS FOR CONTINUOUS INTUITIONISTIC
LABEL SET
In this section, we extend the discrete intuitionistic label set $\tilde{S} = \{(s_i, s_j) \in S^2 \mid i + j \leq g\}$ to
a continuous one, in which the indices of the membership and nonmembership components of each
intuitionistic linguistic label take continuous values.
Definition 4.1. The continuous intuitionistic linguistic label set $\hat{S}$ is defined as the set of all pairs
$\hat{a} = (s_\alpha, s_\beta)$ such that $\alpha, \beta \geq 0$ and $\alpha + \beta \leq g$, i.e.,
\[ \hat{S} = \{(s_\alpha, s_\beta) \mid \alpha, \beta \geq 0,\ \alpha + \beta \leq g\}. \]
Each $\hat{a} = (s_\alpha, s_\beta)$ is termed a continuous intuitionistic linguistic label. Two continuous
intuitionistic linguistic labels $(s_\alpha, s_\beta)$ and $(s_\delta, s_\sigma)$ are equal, denoted by $(s_\alpha, s_\beta) = (s_\delta, s_\sigma)$, if $\alpha = \delta$ and $\beta = \sigma$.
In Definition 4.1, it is not required that $(s_\alpha, s_\beta) \in S^2$, i.e., $\alpha$ and $\beta$ may take non-integer values.
For each $\hat{a} \in \hat{S}$, if $\hat{a} \in \tilde{S}$, then $\hat{a}$ is called an original intuitionistic label; otherwise, it is called a virtual
intuitionistic label. The score, the confidence, and the relations "$\geq$" and "$>$" of continuous intuitionistic labels are
defined similarly to those of (discrete) intuitionistic labels (Definitions 2.2 and 2.3).
Definition 4.2. Let $\{\hat{a}_1, \ldots, \hat{a}_n\}$ be a collection of continuous intuitionistic labels, $\hat{a}_j = (s_{\alpha_j}, s_{\beta_j})$
for all $j = 1, \ldots, n$. The intuitionistic linguistic averaging (iLA2) operator is defined as
\[ \mathrm{iLA2}(\hat{a}_1, \ldots, \hat{a}_n) = (s_{\bar{\alpha}}, s_{\bar{\beta}}), \]
where $\bar{\alpha} = \frac{1}{n}\sum_{j=1}^{n} \alpha_j$ and $\bar{\beta} = \frac{1}{n}\sum_{j=1}^{n} \beta_j$.
Theorem 4.1. For every collection $\{\hat{a}_1, \ldots, \hat{a}_n\}$ of continuous intuitionistic labels in $\hat{S}$,
$\mathrm{iLA2}(\hat{a}_1, \ldots, \hat{a}_n)$ is a continuous intuitionistic linguistic label.
Proof. Since each $\hat{a}_j = (s_{\alpha_j}, s_{\beta_j})$ is a continuous intuitionistic linguistic label, we have $\alpha_j \geq 0$, $\beta_j \geq 0$, and
$\alpha_j + \beta_j \leq g$ for all $j = 1, \ldots, n$. This implies $\bar{\alpha} \geq 0$, $\bar{\beta} \geq 0$, and
\[ \bar{\alpha} + \bar{\beta} = \frac{\sum_{j=1}^{n}(\alpha_j + \beta_j)}{n} \leq \frac{ng}{n} = g, \]
which proves the theorem.
Definition 4.3. Let $\{\hat{a}_1, \ldots, \hat{a}_n\}$ be a collection of continuous intuitionistic labels in $\hat{S}$, $\hat{a}_j = (s_{\alpha_j}, s_{\beta_j})$
for all $j = 1, \ldots, n$, and let $W = (w_1, \ldots, w_n)$ be a weight vector of $(\hat{a}_1, \ldots, \hat{a}_n)$. The intuitionistic
linguistic weighted averaging (iLWA2) operator is defined as
\[ \mathrm{iLWA2}(\langle w_1, \hat{a}_1\rangle, \ldots, \langle w_n, \hat{a}_n\rangle) = (s_{\bar{\alpha}}, s_{\bar{\beta}}), \]
where $\bar{\alpha} = \sum_{j=1}^{n} w_j \alpha_j$ and $\bar{\beta} = \sum_{j=1}^{n} w_j \beta_j$.
Theorem 4.2. For every collection $\{\hat{a}_1, \ldots, \hat{a}_n\}$ of continuous intuitionistic labels in $\hat{S}$
and every weight vector $W = (w_1, \ldots, w_n)$ of $(\hat{a}_1, \ldots, \hat{a}_n)$, $\mathrm{iLWA2}(\langle w_1, \hat{a}_1\rangle, \ldots, \langle w_n, \hat{a}_n\rangle)$ is a
continuous intuitionistic linguistic label.
Proof. Similar to Theorem 4.1.
Definition 4.4. Let $\{\hat{a}_1, \ldots, \hat{a}_n\}$ be a collection of continuous intuitionistic labels in $\hat{S}$, and let $\hat{b}_j = (s_{\delta_j}, s_{\sigma_j})$
be the $j$-th largest of the $\hat{a}_i$. The intuitionistic linguistic ordered weighted averaging (iLOWA2) operator
with associated weight vector $W = (w_1, \ldots, w_n)$ is defined as
\[ \mathrm{iLOWA2}_W(\hat{a}_1, \ldots, \hat{a}_n) = (s_{\bar{\delta}}, s_{\bar{\sigma}}), \]
where $\bar{\delta} = \sum_{j=1}^{n} w_j \delta_j$ and $\bar{\sigma} = \sum_{j=1}^{n} w_j \sigma_j$.
Theorem 4.3. For every collection $\{\hat{a}_1, \ldots, \hat{a}_n\}$ of continuous intuitionistic labels in $\hat{S}$ and
every weight vector $W = (w_1, \ldots, w_n)$, $\mathrm{iLOWA2}_W(\hat{a}_1, \ldots, \hat{a}_n)$ is a continuous intuitionistic
linguistic label.
Proof. Similar to Theorem 4.1.
Example 4.1. Consider the continuous intuitionistic linguistic labels $\hat{a}_1 = (s_{1.2}, s_{4.6})$, $\hat{a}_2 = (s_{2.3}, s_{3.1})$,
$\hat{a}_3 = (s_{2.4}, s_{1.9})$, $\hat{a}_4 = (s_{1.3}, s_{4.1})$, $\hat{a}_5 = (s_{2.2}, s_{0.7})$ in $\hat{S} = \{(s_\alpha, s_\beta) \mid \alpha, \beta \geq 0,\ \alpha + \beta \leq 6\}$, and a
weight vector $W = (0.19, 0.15, 0.23, 0.31, 0.12)$. We have $\hat{b}_1 = \hat{a}_5 > \hat{b}_2 = \hat{a}_3 > \hat{b}_3 = \hat{a}_2 > \hat{b}_4 = \hat{a}_4 > \hat{b}_5 = \hat{a}_1$. Then,
\[ \mathrm{iLA2}(\hat{a}_1, \ldots, \hat{a}_5) = \left(s_{\frac{1.2+2.3+2.4+1.3+2.2}{5}},\ s_{\frac{4.6+3.1+1.9+4.1+0.7}{5}}\right) = (s_{1.88}, s_{2.88}), \]
\[ \mathrm{iLWA2}(\langle w_1, \hat{a}_1\rangle, \ldots, \langle w_5, \hat{a}_5\rangle) = \left(s_{0.19 \times 1.2 + 0.15 \times 2.3 + 0.23 \times 2.4 + 0.31 \times 1.3 + 0.12 \times 2.2},\ s_{0.19 \times 4.6 + 0.15 \times 3.1 + 0.23 \times 1.9 + 0.31 \times 4.1 + 0.12 \times 0.7}\right) = (s_{1.792}, s_{3.131}), \]
\[ \mathrm{iLOWA2}_W(\hat{a}_1, \ldots, \hat{a}_5) = \left(s_{0.19 \times 2.2 + 0.15 \times 2.4 + 0.23 \times 2.3 + 0.31 \times 1.3 + 0.12 \times 1.2},\ s_{0.19 \times 0.7 + 0.15 \times 1.9 + 0.23 \times 3.1 + 0.31 \times 4.1 + 0.12 \times 4.6}\right) = (s_{1.854}, s_{2.954}). \]
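A small Python sketch of iLA2, iLWA2 and iLOWA2 (our own illustrative code; labels stored as pairs of real indices) reproduces the values of Example 4.1 up to floating-point rounding:

```python
# Sketch of iLA2, iLWA2 and iLOWA2 (Definitions 4.2-4.4); a continuous label
# (s_alpha, s_beta) is stored as the pair (alpha, beta).
def iLA2(labels):
    n = len(labels)
    return (sum(a for a, _ in labels) / n, sum(b for _, b in labels) / n)

def iLWA2(weights, labels):
    return (sum(w * a for w, (a, _) in zip(weights, labels)),
            sum(w * b for w, (_, b) in zip(weights, labels)))

def iLOWA2(weights, labels):
    # order the arguments decreasingly by (score, confidence), then apply iLWA2
    ordered = sorted(labels, key=lambda p: (p[0] - p[1], p[0] + p[1]), reverse=True)
    return iLWA2(weights, ordered)

a = [(1.2, 4.6), (2.3, 3.1), (2.4, 1.9), (1.3, 4.1), (2.2, 0.7)]
W = [0.19, 0.15, 0.23, 0.31, 0.12]
print(iLA2(a))          # approximately (1.88, 2.88)
print(iLWA2(W, a))      # approximately (1.792, 3.131)
print(iLOWA2(W, a))     # approximately (1.854, 2.954)
```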
Remark 4.1. It is easily seen that iLA2, iLWA2, and iLOWA2 are idempotent, bounded, monotonically
increasing and commutative.
5. AN APPLICATION
Zhang et al. [31] (Algorithm (I)) proposed the intuitionistic fuzzy linguistic preference relation
(IFLPR) as a new type of preference relation and considered a GDM problem under intuitionistic linguistic
information, in which the 2-tuple representation was used to represent intuitionistic linguistic
information. In this paper, a refined approach to this problem is introduced which produces the same
result as [31]. This approach avoids the use of the 2-tuple representation and is therefore simpler and
easier to follow.
Using IFLPRs and the intuitionistic linguistic averaging (iLA2) and intuitionistic linguistic weighted
averaging (iLWA2) operators, the method is described as follows:
Step 1. Consider a GDM problem, where $X = \{x_1, \ldots, x_n\}$ is a set of alternatives and
$D = \{d_1, \ldots, d_m\}$ is a set of DMs. Assume that $W = (w_1, \ldots, w_m)$ is the weight vector of the DMs,
$w_h$ being the weight of $d_h$ ($h = 1, \ldots, m$). Denote by $P^h = [p^h_{ij}]_{n \times n}$ the IFLPR of $d_h$, where
$p^h_{ij}$ is the intuitionistic linguistic preference of alternative $x_i$ over alternative $x_j$ given by the DM $d_h$
($i, j = 1, \ldots, n$; $h = 1, \ldots, m$).
Step 2. Utilize the iLWA2 operator to aggregate the $P^h = [p^h_{ij}]_{n \times n}$ ($h = 1, \ldots, m$) and derive
a collective IFLPR $P = [p_{ij}]_{n \times n}$, where
\[ p_{ij} = \mathrm{iLWA2}(\langle w_1, p^1_{ij}\rangle, \ldots, \langle w_m, p^m_{ij}\rangle), \quad i, j = 1, \ldots, n. \quad (7) \]
Step 3. For each alternative $x_i$, compute an overall collective intuitionistic linguistic label:
\[ p_i = \mathrm{iLA2}(p_{i1}, \ldots, p_{in}), \quad i = 1, \ldots, n. \quad (8) \]
Step 4. Rank all $p_i$ by means of the order relation of Definition 2.3, then rank the
alternatives $x_i$ and select the best one(s) in accordance with the values of $p_i$ ($i = 1, \ldots, n$).
Example 5.1. Consider the linguistic label set $S$ consisting of seven linguistic labels: $s_0 = \text{certain}$,
$s_1 = \text{extremely likely}$, $s_2 = \text{meaningful chance}$, $s_3 = \text{it may}$, $s_4 = \text{small chance}$,
$s_5 = \text{extremely unlikely}$, $s_6 = \text{impossible}$, and its corresponding intuitionistic label set $\tilde{S}$. Two
DMs use intuitionistic labels to assess the preferences over the set of alternatives $X = \{x_1, x_2, x_3, x_4\}$.
• DM $d_1$ has weight $w_1 = 0.6$ [31]:
\[ P^1 = \begin{pmatrix} (s_3, s_3) & (s_1, s_2) & (s_2, s_2) & (s_1, s_3) \\ (s_2, s_1) & (s_3, s_3) & (s_2, s_3) & (s_1, s_4) \\ (s_2, s_2) & (s_3, s_2) & (s_3, s_3) & (s_3, s_3) \\ (s_3, s_1) & (s_4, s_1) & (s_3, s_3) & (s_3, s_3) \end{pmatrix}. \]
• DM $d_2$ has weight $w_2 = 0.4$ [31]:
\[ P^2 = \begin{pmatrix} (s_3, s_3) & (s_2, s_4) & (s_4, s_1) & (s_5, s_0) \\ (s_4, s_2) & (s_3, s_3) & (s_4, s_2) & (s_6, s_0) \\ (s_1, s_4) & (s_2, s_4) & (s_3, s_3) & (s_3, s_3) \\ (s_0, s_5) & (s_0, s_6) & (s_3, s_3) & (s_3, s_3) \end{pmatrix}. \]
Using (7) to aggregate $P^1$ and $P^2$, we obtain the collective IFLPR:
\[ P = \begin{pmatrix} (s_3, s_3) & (s_{1.4}, s_{2.8}) & (s_{2.8}, s_{1.6}) & (s_{2.6}, s_{1.8}) \\ (s_{2.8}, s_{1.4}) & (s_3, s_3) & (s_{2.8}, s_{2.6}) & (s_3, s_{2.4}) \\ (s_{1.6}, s_{2.8}) & (s_{2.6}, s_{2.8}) & (s_3, s_3) & (s_3, s_3) \\ (s_{1.8}, s_{2.6}) & (s_{2.4}, s_3) & (s_3, s_3) & (s_3, s_3) \end{pmatrix}. \]
Using (8), we get $p_1 = (s_{2.45}, s_{2.3})$, $p_2 = (s_{2.9}, s_{2.35})$, $p_3 = (s_{2.55}, s_{2.9})$, $p_4 = (s_{2.55}, s_{2.9})$. Then
$x_2 > x_1 > x_3 = x_4$.
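For completeness, the following Python sketch (our own code, with preferences stored as index pairs) runs Steps 2-4 of the method on the data of Example 5.1 and recovers the ranking $x_2 > x_1 > x_3 = x_4$:

```python
# Sketch of Steps 2-4 of the method in Section 5, run on the data of Example 5.1.
# A preference (s_i, s_j) is stored as the index pair (i, j); DM weights are 0.6 and 0.4.
P1 = [[(3, 3), (1, 2), (2, 2), (1, 3)],
      [(2, 1), (3, 3), (2, 3), (1, 4)],
      [(2, 2), (3, 2), (3, 3), (3, 3)],
      [(3, 1), (4, 1), (3, 3), (3, 3)]]
P2 = [[(3, 3), (2, 4), (4, 1), (5, 0)],
      [(4, 2), (3, 3), (4, 2), (6, 0)],
      [(1, 4), (2, 4), (3, 3), (3, 3)],
      [(0, 5), (0, 6), (3, 3), (3, 3)]]
w = [0.6, 0.4]
n = 4

def iLWA2(weights, labels):                      # Definition 4.3
    return (sum(wk * a for wk, (a, _) in zip(weights, labels)),
            sum(wk * b for wk, (_, b) in zip(weights, labels)))

def iLA2(labels):                                # Definition 4.2
    return (sum(a for a, _ in labels) / len(labels),
            sum(b for _, b in labels) / len(labels))

# Step 2: collective IFLPR P, entry by entry (equation (7))
P = [[iLWA2(w, [P1[i][j], P2[i][j]]) for j in range(n)] for i in range(n)]

# Step 3: overall collective label of each alternative (equation (8))
p = [iLA2(P[i]) for i in range(n)]

# Step 4: rank the alternatives by (score, confidence) of Definition 2.3
key = lambda i: (round(p[i][0] - p[i][1], 6), round(p[i][0] + p[i][1], 6))
ranking = sorted(range(n), key=key, reverse=True)
print([f"x{i + 1}" for i in ranking])            # ['x2', 'x1', 'x3', 'x4'] (x3 and x4 tie)
```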
6. CONCLUSIONS
In this paper, several operators which can be used to aggregate intuitionistic linguistic information
are defined. A modified algorithm based on the linguistic-label representation is given to solve the GDM
problem with IFLPRs. The modified algorithm yields the same results as the previous method based on
the 2-tuple intuitionistic linguistic representation [31], which confirms the correctness of the refinement.
ACKNOWLEDGMENT
The authors are greatly indebted to anonymous reviewers for their comments and their valuable
suggestions that improved the quality and clarity of the paper. This work is sponsored by the
NAFOSTED under contract No. 102.01-2015.01.
REFERENCES
[1] K. T. Atanassov, “Intuitionistic fuzzy sets”, Fuzzy Sets and Systems, vol. 20, pp. 87–96, 1986.
[2] K. T. Atanassov and S. Stoeva, “Intuitionistic L-fuzzy sets”, in Cybernetics and Systems Re-
search (Eds. R. Trappl), Elsevier Science Pub., Amsterdam, vol. 2, pp. 539–540, 1986.
[3] G. Bordogna, M. Fedrizzi, and G. Pasi, “A linguistic modeling of consensus in group decision
making based on OWA operator”, IEEE Transactions on Systems, Man, and Cybernetics,
vol. 27, pp. 126–132, 1997.
[4] F. Chiclana, F. Herrera, and E. Herrera-Viedma, “Integrating multiplicative preference relations
in a multipurpose decision-making model based on fuzzy preference relations”, Fuzzy Sets and
Systems, vol. 122, pp. 277–291, 2001.
[5] B. C. Cuong and P. H. Phong, “Max-Min Composition of Linguistic Intuitionistic Fuzzy
Relations and Application in Medical Diagnosis”, VNU Journal of Science: Comp. Science &
Com. Eng., vol. 30, no. 4, pp. 601–968, 2014.
[6] Y. Q. Dai, Z. S. Xu, and Q. L. Da, “New evaluation scale of linguistic information and its
application”, Chinese Journal of Management Science, vol. 16, no. 2, pp. 145–349, 2008.
[7] M. Delgado, J. L. Verdegay, and M. A. Vila, “On aggregation operations of linguistic labels”,
International Journal of Intelligent Systems, vol. 8, pp. 351–370, 1993.
[8] M. Delgado, F. Herrera, and E. Herrera-Viedma, “Combining numerical and linguistic information
in group decision making”, Information Sciences, vol. 107, pp. 177–194, 1998.
[9] F. Herrera and J. L. Verdegay, “Linguistic assessments in group decision”, in Proceedings of the
First European Congress on Fuzzy and Intelligent Technologies, Aachen, pp. 941–948, 1993.
[10] F. Herrera and E. Herrera-Viedma, “Aggregation operators for linguistic weighted information”,
IEEE Transactions on Systems, Man, and Cybernetics-Part A, vol. 27, no. 5, pp. 646–656,
1997.
[11] F. Herrera and L. Martínez, “A 2-tuple fuzzy linguistic representation model for computing with
words”, IEEE Transactions on Fuzzy Systems, vol. 8, pp. 746–752, 2000.
[12] F. Herrera and L. Martínez, “An approach for combining linguistic and numerical information
based on the 2-tuple fuzzy linguistic representation model in decision-making”, International
Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 8, pp. 539–562, 2000.
[13] F. Herrera and E. Herrera-Viedma, “Choice functions and mechanisms for linguistic preference
relations”, European Journal of Operational Research, vol. 120, pp. 144–161, 2000.
[14] F. Herrera and E. Herrera-Viedma, “Linguistic decision analysis: steps for solving decision
problems under linguistic information”, Fuzzy Sets and Systems, vol. 115, pp. 67-82, 2000.
[15] F. Herrera, S. Alonso, F. Chiclana, and E. Herrera-Viedma, “Computing with words in decision
making: foundations, trends and prospects”, Fuzzy Optimization and Decision Making, vol. 8,
no. 4, pp. 337–364, 2009.
[16] L. Martinez, D. Ruan, and F. Herrera, “Computing with Words in Decision support Systems: An
overview on Models and Applications”, International Journal of Computational Intelligence
Systems, vol. 3, no. 4, pp. 382–395, 2010.
[17] P. H. Phong and B. C. Cuong, “Some intuitionistic linguistic aggregation operators”, Journal of
Computer Science and Cybernetics, vol. 30, no. 3, pp. 216–226, 2014.
[18] V. Torra, “The weighted OWA operator”, International Journal of Intelligent Systems,
vol. 12, pp. 153–166, 1997.
[19] T. Tanino, “Fuzzy Preference Relations in Group Decision Making”, in Non-conventional Pref-
erence Relations in Decision Making, J. Kacprzyk and M. Roubens (Eds.), Springer-Verlag,
Berlin, pp. 54–71, 1988.
[20] T. Tanino, “On Group Decision Making Under Fuzzy Preferences”. in Multiperson Decision
Making Using Fuzzy Sets and Possibility Theory, J. Kacprzyk and M. Fedrizzi (Eds.), Kluwer
Academic Publishers, Dordrecht, pp. 172–185, 1990.
[21] M. M. Xia and Z. S. Xu, “Managing hesitant information in GDM problems under fuzzy
and multiplicative preference relations”, International Journal of Uncertainty, Fuzziness and
Knowledge-based Systems, vol. 21, pp. 865–897, 2013.
[22] Z. S. Xu, “Uncertain Multiple Attribute Decision Making: Methods and Applications”, Ts-
inghua University Press, Beijing, 2004.
[23] Z. S. Xu, “EOWA and EOWG operators for aggregating linguistic labels based on linguistic
preference relations”, International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, vol. 12, pp. 791–810, 2004.
[24] Z. S. Xu, “A method based on linguistic aggregation operators for group decision making with
linguistic preference relations”, Information Sciences, vol. 166, pp. 19–30, 2004.
[25] Z. S. Xu, “On generalized induced linguistic aggregation operators”, International Journal of
General Systems, vol. 35, pp. 17–28, 2006.
[26] Z. S. Xu, “Intuitionistic preference relations and their application in group decision making”,
Information Sciences, vol. 177, pp. 2363–2379, 2007.
[27] Z. S. Xu, “Group decision making based on multiple types of linguistic preference relations”,
Information Sciences, vol. 178, pp. 452–467, 2008.
[28] R. R. Yager, “A new methodology for ordinal multiobjective decisions based on fuzzy sets”,
Decision Sciences, vol. 12, pp. 589–600, 1981.
[29] R. R. Yager, “Applications and extensions of OWA aggregations”, International Journal of
Man-Machine Studies, vol. 37, pp. 103–132, 1992.
[30] L. A. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning
– I”, Information Sciences, vol. 8, pp. 199–249, 1975.
[31] Y. Zhang, H. X. Ma, B. H. Liu, and J. Liu, “Group decision making with 2-tuple intuitionistic
fuzzy linguistic preference relations”, Soft Computing, vol. 16, pp. 1439–1446, 2012.
Received on March 23 - 2015
Revised on June 09 - 2016