CS166

Chris Pollett

Oct 24, 2012

# Outline

• Multilevel Security
• HW Problem
• Compartments
• Covert Channels
• Inference Control

# Multilevel Security (MLS)

• Last day, we concluded with clearances and classification; today, we look at multi-level security in more detail.
• MLS is needed when subjects and objects at different security levels share the same system
• MLS is a form of Access Control
• Military and government have been interested in MLS for many decades
• Lots of research into MLS
• Strengths and weaknesses of MLS well understood (almost entirely theoretical)
• Many possible uses of MLS outside military

# MLS Applications

• Classified government/military systems
• Business example: Might want to restrict info to: Senior management only, all management, everyone in company, or general public
• Network firewall
• Confidential medical info, databases, etc.
• Usually, MLS is not viable as a purely technical system -- it is more of a legal device than a technical one

# MLS Security Models

• MLS models explain what needs to be done
• Models do not tell you how to implement
• Models are descriptive, not prescriptive -- That is, high level description, not an algorithm
• There are many MLS models
• We'll discuss the simplest MLS models
• Other models are more realistic
• Other models also more complex, more difficult to enforce, harder to verify, etc.

# HW Problem

Problem 7.37 Using the info provided with this problem...

(a). Use equation 7.1 to compute the distances: d(Alice, Bob), d(Alice, Charlie), d(Bob, Charlie).

(b). Assuming the same statistics apply to the shortened iris codes as to normal-length ones, which of the users U, V, W, X, Y is most likely Alice? Bob? Charlie? None of the above?

Answer. To solve this problem, I wrote the following PHP script to compute all the distances:

```php
<?php
// PHP in XAMPP is 32-bit, so each 64-bit iris code is stored as a
// high-order and a low-order 32-bit half.
$c['alice'][0]   = 0xbe439ad5; // high-order half
$c['alice'][1]   = 0x98ef5147; // low-order half
$c['bob'][0]     = 0x9c8b7a14;
$c['bob'][1]     = 0x25369584;
$c['charlie'][0] = 0x88552233;
$c['charlie'][1] = 0x6699ccbb;

$c['u'][0] = 0xc975a213;
$c['u'][1] = 0x2e89ceaf;
$c['v'][0] = 0xdb9a8675;
$c['v'][1] = 0x342fec15;
$c['w'][0] = 0xa6039ad5;
$c['w'][1] = 0xf8cfd965;
$c['x'][0] = 0x1dca7a54;
$c['x'][1] = 0x273497cc;
$c['y'][0] = 0xaf8b6c7d;
$c['y'][1] = 0x5e3f0f9a;

foreach ($c as $first_pair => $first_iris_code) {
    foreach ($c as $last_pair => $last_iris_code) {
        echo "d($first_pair, $last_pair) = " .
            iris_distance($first_iris_code, $last_iris_code) . "\n";
    }
}

// Fraction of the 64 bit positions in which the two codes differ
// (the distance of equation 7.1).
function iris_distance($code1, $code2)
{
    $score = 0;
    for ($k = 0; $k < 2; $k++) {
        for ($i = 0; $i < 32; $i++) {
            if (($code1[$k] & 1) != ($code2[$k] & 1)) {
                $score++;
            }
            // floor(.../2) rather than a right shift, since the halves
            // may be stored as floats on 32-bit PHP
            $code1[$k] = floor($code1[$k] / 2);
            $code2[$k] = floor($code2[$k] / 2);
        }
    }
    return $score / 64;
}
?>
```

Using this script: d(Alice, Bob) = 0.453125, d(Alice, Charlie) = 0.609375, and d(Bob, Charlie) = 0.53125, which answers (a). For (b): d(U, Charlie) = 0.171875, so U is Charlie; V scored above 0.32 against each of Alice, Bob, and Charlie, so V is none of them; d(W, Alice) = 0.15625, so W is Alice; d(X, Bob) = 0.15625, so X is Bob; and Y did not correspond to anyone.

# Bell-LaPadula

• The BLP security model is designed to express the essential requirements for MLS
• BLP deals with confidentiality -- to prevent unauthorized reading
• Recall that O is an object, S a subject
• Object O has a classification
• Subject S has a clearance
• Security level denoted L(O) and L(S)

# Bell-LaPadula Idea

• BLP consists of:
• Simple Security Condition: S can read O if and only if L(O) ≤ L(S)
• *-Property (Star Property): S can write O if and only if L(S) ≤ L(O)
• No read up, no write down

# McLean's Criticisms of BLP

• McLean: BLP is "so trivial that it is hard to imagine a realistic security model for which it does not hold"
• McLean's "System Z" allowed the administrator to reclassify an object, then "write down"
• Is this fair?
• Violates the spirit of BLP, but not expressly forbidden in the statement of BLP
• Raises fundamental questions about the nature of (and limits of) modeling

# Bell and LaPadula's Response

• BLP enhanced with a tranquility property
• Strong tranquility: security labels never change
• Weak tranquility: a security label can only change if it does not violate "established security policy"
• Strong tranquility impractical in the real world
• Often want to enforce "least privilege"
• Give users the lowest privilege for current work
• Then upgrade as needed (and allowed by policy)
• This is known as the high water mark principle
• Weak tranquility allows for least privilege (high water mark), but the property is vague

# BLP: The Bottom Line

• BLP is simple, probably too simple
• BLP is one of the few security models that can be used to prove things about systems
• BLP has inspired other security models
• Most other models try to be more realistic
• Other security models are more complex
• Models difficult to analyze, apply in practice

# Biba's Model

• BLP is for confidentiality, Biba is for integrity -- Biba is to prevent unauthorized writing
• Biba is (in a sense) the dual of BLP
• Integrity model
• Suppose you trust the integrity of O1 but not O2
• If object O3 includes O1 and O2, then you cannot trust the integrity of O3
• Integrity level of O is the minimum of the integrity of any object in O
• Low water mark principle for integrity

# Biba

• Biba can be stated as:
• Write Access Rule: S can write O if and only if I(O) ≤ I(S) (if S writes O, the integrity of O is at most that of S)
• Biba's Model: S can read O if and only if I(S) ≤ I(O) (if S reads O, the integrity of S is at most that of O)
• Often, Biba's Model is replaced with the Low Water Mark Policy: if S reads O, then I(S) = min(I(S), I(O))
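The BLP and Biba rules above can be sketched as a few access-check functions. This is a minimal sketch, assuming integer security and integrity levels (higher number = higher level); the level names and function names are illustrative, not part of any standard API:

```python
# Illustrative numeric security levels: higher number = higher level.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def blp_can_read(subject_level, object_level):
    # Simple Security Condition: no read up -- allowed iff L(O) <= L(S)
    return object_level <= subject_level

def blp_can_write(subject_level, object_level):
    # *-Property: no write down -- allowed iff L(S) <= L(O)
    return subject_level <= object_level

def biba_can_write(subject_integrity, object_integrity):
    # Biba Write Access Rule: allowed iff I(O) <= I(S)
    return object_integrity <= subject_integrity

def biba_low_water_mark_read(subject_integrity, object_integrity):
    # Low Water Mark Policy: reading drags I(S) down to min(I(S), I(O))
    return min(subject_integrity, object_integrity)

# A SECRET subject may read CONFIDENTIAL but not TOP SECRET...
assert blp_can_read(LEVELS["SECRET"], LEVELS["CONFIDENTIAL"])
assert not blp_can_read(LEVELS["SECRET"], LEVELS["TOP SECRET"])
# ...and may write up but not down.
assert blp_can_write(LEVELS["SECRET"], LEVELS["TOP SECRET"])
assert not blp_can_write(LEVELS["SECRET"], LEVELS["CONFIDENTIAL"])
```

Note how the Biba write rule is literally the BLP read rule with integrity in place of confidentiality, which is the sense in which Biba is the dual of BLP.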
# BLP versus Biba

# Compartments

• Multilevel Security (MLS) enforces access control up and down
• A simple hierarchy of security labels is generally not flexible enough
• Compartments enforce restrictions across
• Suppose TOP SECRET is divided into TOP SECRET {CAT} and TOP SECRET {DOG}
• Both are TOP SECRET, but information flow is restricted across the TOP SECRET level

# More Compartments

• Why compartments? Why not create a new classification level?
• May not want either of TOP SECRET {CAT} ≤ TOP SECRET {DOG} or TOP SECRET {DOG} ≤ TOP SECRET {CAT}
• Compartments are designed to enforce the need-to-know principle -- regardless of clearance, you only have access to the info that you need to know to do your job

# Example Compartments

• Arrows indicate the "≥" relationship
• Not all classifications are comparable, e.g., TOP SECRET {CAT} vs SECRET {CAT, DOG}

# MLS vs Compartments

• MLS can be used without compartments, and vice versa
• But MLS almost always uses compartments
• Example:
• MLS mandated for protecting medical records of the British Medical Association (BMA)
• AIDS was TOP SECRET, prescriptions SECRET
• What is the classification of an AIDS drug?
• Everything tends toward TOP SECRET
• Defeats the purpose of the system!
• A compartments-only approach was used instead

# Covert Channel

• MLS designed to restrict legitimate channels of communication
• May be other ways for information to flow
• For example, resources shared at different levels could be used to "signal" information
• Covert channel: a communication path not intended as such by the system's designers

# Covert Channel Example

• Alice has TOP SECRET clearance, Bob has CONFIDENTIAL clearance
• Suppose the file space is shared by all users
• Alice creates the file FileXYzW to signal "1" to Bob, and removes the file to signal "0"
• Once per minute, Bob lists the files
• If the file FileXYzW does not exist, Alice sent 0
• If the file FileXYzW exists, Alice sent 1
• Alice can leak TOP SECRET info to Bob!
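The file-existence channel above can be simulated in a few lines. This is a sketch only: an in-memory set stands in for the shared file space, and the once-per-minute timing is elided.

```python
# Simulation of the FileXYzW covert channel: an in-memory set stands in
# for the file space shared by all users (no real filesystem or clock).

shared_files = set()          # file space visible to both Alice and Bob

def alice_send_bit(bit):
    # Alice signals 1 by creating FileXYzW and 0 by removing it
    if bit:
        shared_files.add("FileXYzW")
    else:
        shared_files.discard("FileXYzW")

def bob_read_bit():
    # Bob "lists the files" and checks whether FileXYzW exists
    return 1 if "FileXYzW" in shared_files else 0

message = [1, 0, 1, 1, 0, 0, 1, 0]   # TOP SECRET bits Alice wants to leak
received = []
for b in message:
    alice_send_bit(b)        # in the real attack, one bit per minute
    received.append(bob_read_bit())

assert received == message   # Bob recovers the TOP SECRET bits exactly
```

Since Bob lists the files once per minute, this particular channel runs at one bit per minute: slow, but enough to leak a 256-bit AES key in about four and a half hours.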
# More Covert Channels

• Other possible covert channels?
• Print queue
• ACK messages
• Network traffic, etc.
• When does a covert channel exist?
• Sender and receiver have a shared resource
• Sender able to vary some property of the resource that the receiver can observe
• "Communication" between sender and receiver can be synchronized

# Covert Channel Pervasiveness

• So, covert channels are everywhere
• "Easy" to eliminate covert channels:
• Eliminate all shared resources...
• ...and all communication
• Virtually impossible to eliminate covert channels in any useful system
• DoD guidelines: reduce covert channel capacity to no more than 1 bit/second
• Implication? DoD has given up on eliminating covert channels!

# Covert Channel Data Rate

• Consider a 100MB TOP SECRET file
• Plaintext stored in a TOP SECRET location
• Ciphertext (encrypted with AES using a 256-bit key) stored in an UNCLASSIFIED location
• Suppose we reduce the covert channel capacity to 1 bit per second
• It would take more than 25 years to leak the entire document through the covert channel
• But it would take less than 5 minutes to leak the 256-bit AES key through the covert channel!

# Real-World Covert Channel

• Hide data in the TCP header reserved field
• Or use covert_TCP, a tool to hide data in
• Sequence number
• ACK number

# Real-World Covert Channel

• Hide data in TCP sequence numbers
• Tool: covert_TCP
• Sequence number X contains covert info

# Inference Control Example

• Suppose we query a database
• Question: What is the average salary of female CS professors at SJSU?
• Answer: $95,000
• Question: How many female CS professors at SJSU?
• Answer: 1
• Specific information has leaked from responses to general questions!
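The leak is pure arithmetic: if the count query reveals that the group has exactly one member, then the "average" is that member's exact salary. A minimal sketch, with hypothetical database contents (the single $95,000 record is assumed for illustration):

```python
# Inference from two innocuous statistical queries: an average over a
# group of size 1 is that one individual's exact salary.

def average_salary(salaries):
    return sum(salaries) / len(salaries)

female_cs_salaries = [95_000]             # hypothetical database contents

avg = average_salary(female_cs_salaries)  # answer to the "average" query
count = len(female_cs_salaries)           # answer to the "how many" query

if count == 1:
    exact_salary = avg                    # the individual's salary leaks

assert exact_salary == 95_000
```

Neither query names an individual, yet together they pin down one person's salary, which is exactly the problem inference control tries to address.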

# Inference Control and Research

• For example, medical records are private but valuable for research
• How to make info available for research and protect privacy?
• How to allow access to such data without leaking specific information?

# Naive Inference Control

• Remove names from medical records?
• Still may be easy to get specific info from such anonymous data
• Removing names is not enough -- as seen in the previous example
• What more can be done?

# Less-naive Inference Control

• Query set size control -- Don't return an answer if set size is too small
• N-respondent, k% dominance rule
• Do not release a statistic if N or fewer respondents contribute k% or more of its value
• Example: Avg salary in Bill Gates' neighborhood
• This approach used by US Census Bureau
• Randomization -- Add small amount of random noise to data
• Many other methods -- none satisfactory
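The first three methods above can be sketched as simple guards on a released statistic. The threshold, the N and k values, and the noise scale below are illustrative choices, not standard values:

```python
# Sketches of query set size control, the N-respondent k% dominance
# rule, and randomization. All parameters are illustrative.
import random

MIN_SET_SIZE = 5   # query set size control threshold (illustrative)

def guarded_average(values):
    # Query set size control: refuse to answer if the set is too small
    if len(values) < MIN_SET_SIZE:
        return None
    return sum(values) / len(values)

def violates_dominance(values, n=2, k=0.9):
    # N-respondent, k% dominance: suppress the statistic if the top n
    # respondents contribute a fraction k or more of the total
    top = sum(sorted(values, reverse=True)[:n])
    return top >= k * sum(values)

def randomized_average(values, noise=1000):
    # Randomization: add a small amount of random noise to the statistic
    return sum(values) / len(values) + random.uniform(-noise, noise)

assert guarded_average([95_000]) is None          # set of 1: refused
assert guarded_average([60_000] * 10) == 60_000   # large set: answered
# One dominant respondent (the Bill Gates' neighborhood effect):
assert violates_dominance([10_000_000, 40_000, 50_000], n=1)
assert not violates_dominance([50_000] * 10)
```

Each guard blocks one attack but none is a complete defense, which is why, as noted above, no known method is fully satisfactory.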