WhiteWinterWolf.com - hardeninghttps://www.whitewinterwolf.com/2018-02-20T00:00:00+01:00RSA key lengths, elliptic curve cryptography and quantum computing2017-12-14T00:00:00+01:002017-12-14T00:00:00+01:00WhiteWinterWolftag:www.whitewinterwolf.com,2017-12-14:/posts/2017/12/14/rsa-key-lengths-elliptic-curve-cryptography-and-quantum-computing/<p>Some tools, like <a href="/about/pgp" title="PGP keys"><span class="caps">PGP</span></a>, are still stuck<sup id="fnref-stuck"><a class="footnote-ref" href="#fn-stuck">1</a></sup> to legacy cryptography,
mainly the <span class="caps">RSA</span> algorithm.
For such tools, <span class="caps">RSA</span>-2048 is often described as strong enough for any
foreseeable future, anything above being overkill.
The <a href="https://gnupg.org/faq/gnupg-faq.html#please_use_ecc" rel="external" title="GnuPG FAQ: Why do people advise against using RSA-4096?">GnuPG official documentation</a> in particular even goes
as far as to consider that using <span class="caps">RSA</span>-3072 or <span class="caps">RSA</span>-4096 constitutes
<em>“an improvement so marginal that it’s really not worth mentioning”</em>, adding
that <em>“the way to go would be to switch to elliptical curve cryptography”</em>.</p>
<p>The assertion that this improvement is <em>“marginal”</em> is <a href="https://security.stackexchange.com/q/171308/32746" rel="external" title="How to interpret this statement against 4096-bit RSA (StackExchange)">debatable</a>,
as is the trust in elliptic curves to protect us in the future.</p>
<h3 id="longer-rsa-keys"><a class="toclink" href="#longer-rsa-keys">Longer <span class="caps">RSA</span> keys</a></h3>
<p>While the <abbr title="National Institute of Standards and Technology"><span class="caps">NIST</span></abbr> considers <span class="caps">RSA</span>-2048 to be safe for commercial use up to 2030,
it still advises the use of at least an <span class="caps">RSA</span>-3072 key beyond that date
(see BlueKrypt’s <a href="https://www.keylength.com/en/4/" rel="external" title="NIST Recommendations (Keylength)">Keylength</a> website to get an overview of various recommendations).</p>
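<p>These equivalences can be captured in a short lookup, using the figures published in <span class="caps">NIST</span> <span class="caps">SP</span> 800-57 (the helper function below is only an illustration, not an official tool):</p>

```python
# Security-strength equivalences from NIST SP 800-57 Part 1:
# symmetric strength (bits) -> (RSA modulus bits, ECC key bits).
NIST_EQUIVALENCES = {
    80:  (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

def rsa_strength(modulus_bits):
    """Symmetric strength of the largest table entry the RSA modulus meets."""
    for strength, (rsa_bits, _ecc_bits) in sorted(NIST_EQUIVALENCES.items(),
                                                  reverse=True):
        if modulus_bits >= rsa_bits:
            return strength
    return 0  # below any rated strength

print(rsa_strength(2048))  # 112: the level rated safe only up to 2030
print(rsa_strength(3072))  # 128: NIST's advised minimum beyond 2030
```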
<p>Read quickly, such a recommendation sounds like <span class="caps">RSA</span>-2048 should indeed be safe
for today’s world.
In fact this depends on the use you intend for your keys: “safe up to
2030” doesn’t mean that you are safe as long as you migrate to
something else before 2030.
This is not some kind of end-of-support date.
It means that you must <em>assume</em> that whatever you encrypt now <em>will</em> be
decrypted within a dozen years (and a dozen years goes by pretty fast).</p>
<p>For short-term secrets or, to some extent, signatures, this is usually less of a problem.
The fact, for instance, that an attacker may be able to fake a signature from
a dozen years ago shouldn’t cause an issue: by that time such a signature should
have been revoked, and good software should refuse to trust key sizes or
algorithms widely known to be weak.</p>
<p>But for systems which may imply long-term storage or the exchange of valuable
information, the fact that the data may be decrypted in a dozen years
can be devastating.
Concretely, if you store today an encrypted archive protected using <span class="caps">RSA</span>-2048
on a cloud service, you must assume that the content of this archive will be
known to authorities and intelligence services in a dozen years (and, again,
time goes by very fast).</p>
<p>Even if the archive file is quickly deleted, some intelligence agencies
attempt to process digital data exchanges as a whole (the whole of the Internet,
satellites, phone communications, etc.) and massively intercept and copy even
remotely interesting data (an encrypted archive, for instance, is a perfect
candidate) to be able to analyze or decrypt it a few years down the road.</p>
<p>Data acquisition and long-term storage is a major investment for some
intelligence agencies, the most widely known example being of course the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr>.
A year before the Snowden revelations, Laura Poitras published a
<a href="https://www.youtube.com/watch?v=r9-3K3rkPRE" rel="external" title="NSA Whistle-Blower Tells All: The Program | Op-Docs | The New York Times (YouTube)">short documentary</a> on William Binney,
another former <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> employee.
This documentary focused on <abbr title="National Security Agency"><span class="caps">NSA</span></abbr>‘s <em>“Stellar Wind”</em> program and their
<a href="https://nsa.gov1.info/utah-data-center/" rel="external" title="Utah Data Center (Domestic Surveillance Directorate)">Utah data center</a>:</p>
<blockquote>
<p>Binney calculates the facility has the capacity to store 100 years’ worth of
the world’s electronic communications.<sup id="fnref-capacity"><a class="footnote-ref" href="#fn-capacity">2</a></sup></p>
</blockquote>
<h3 id="quantum-computing"><a class="toclink" href="#quantum-computing">Quantum computing</a></h3>
<p>The <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> is a dual-headed organization, with both a national intelligence role and an
advisory role protecting against foreign intelligence<sup id="fnref-conflict"><a class="footnote-ref" href="#fn-conflict">3</a></sup>.</p>
<p><span class="lb-small floatright"><a href="#nsa-faq.jpg" id="nsa-faq.jpg-thumb" title="Click to enlarge"><img alt="Cover of 'Commercial National Security Algorithm Suite and Quantum Computing FAQ'" src="https://www.whitewinterwolf.com/posts/2017/12/14/rsa-key-lengths-elliptic-curve-cryptography-and-quantum-computing/nsa-faq.jpg"/></a></span>
As part of its advisory role, in January 2016 the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> published a very interesting <span class="caps">FAQ</span> titled
<em><a href="https://cryptome.org/2016/01/CNSA-Suite-and-Quantum-Computing-FAQ.pdf" rel="external" title="Commercial National Security Algorithm Suite and Quantum Computing FAQ (Cryptome)">Commercial National Security Algorithm Suite and Quantum Computing <span class="caps">FAQ</span></a></em>
(I highly encourage you to read it).
Where National Security Systems (<span class="caps">NSS</span>) are concerned, <span class="caps">RSA</span>-2048 should simply
<em>not be used anymore</em>.
It is as simple as that.
If you want to protect your data, use <span class="caps">RSA</span>-3072 at minimum, a minimum
kept relatively low for compatibility purposes, knowing that higher is better.</p>
<p>The paper then focuses on the next real threat against modern
cryptography.
According to the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr>, this threat is no longer the natural evolution of
processing power, but the progress toward effective quantum computing.</p>
<p>Professor <a href="https://en.wikipedia.org/wiki/Key_size#Effect_of_quantum_computing_attacks_on_key_strength" rel="external" title="Key size: Effect of quantum computing attacks on key strength (Wikipedia)">Gilles Brassard</a> explains the threat as follows:</p>
<blockquote>
<p>It takes no more time to break <span class="caps">RSA</span> on a quantum computer (up to a
multiplicative constant) than to use it legitimately on a classical computer.</p>
</blockquote>
<p>Leading the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> to conclude, in the above mentioned paper:</p>
<blockquote>
<p>A sufficiently large quantum computer, if built, would be capable of
undermining all widely-deployed public key algorithms used for key
establishment and digital signatures.</p>
</blockquote>
<p>Quantum computing would affect <span class="caps">RSA</span> and <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> algorithms alike, so <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr>
is not a solution here.
However quantum computing is not some kind of magical threat affecting every kind
of encryption.
Symmetric algorithms, for instance, are said to be more resistant to
quantum computing, and new quantum-resistant asymmetric algorithms have
already been proposed.</p>
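<p>A rough way to picture this (an illustrative simplification, not figures from the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> paper): Shor’s algorithm breaks <span class="caps">RSA</span> and <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> outright, while Grover’s algorithm “only” halves the effective bit strength of a symmetric cipher:</p>

```python
# Rough pre-/post-quantum security model (illustrative simplification):
# Shor's algorithm factors and computes discrete logarithms in polynomial
# time, so RSA/ECC effective strength drops to ~0; Grover's algorithm
# quadratically speeds up brute force, halving symmetric strength in bits.

def post_quantum_strength(algorithm, classical_strength_bits):
    if algorithm in ("RSA", "ECC"):   # broken by Shor's algorithm
        return 0
    if algorithm == "symmetric":      # weakened by Grover's algorithm
        return classical_strength_bits // 2
    raise ValueError("unknown algorithm family: " + algorithm)

print(post_quantum_strength("RSA", 112))        # 0: RSA-2048 falls entirely
print(post_quantum_strength("ECC", 128))        # 0: ECC gains nothing here
print(post_quantum_strength("symmetric", 256))  # 128: AES-256 stays strong
```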
<p>According to the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr>, the future of asymmetric encryption lies in these
quantum-resistant algorithms, and not in <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr>,
despite the claim in the GnuPG documentation quoted at the beginning of this article.</p>
<p>This does not mean that <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> is not an improvement over older algorithms such
as <span class="caps">RSA</span>: it certainly is.
This is a matter of cost: if a company or a project cannot afford to implement both <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> and
then quantum-resistant algorithms in a row, they should save their time and money
and invest them in the upcoming quantum-resistant algorithms once standardization
has been achieved (a process which should take a few years).
If a project can afford both, then it’s obviously better.
But one should not rush to <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> now just to find themselves unable to proceed
with quantum-resistant algorithms down the road.</p>
<p><abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> algorithms were an answer to the increase in computational power, but as
the threat shifts the answer has to shift too.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>One of the advantages of <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> algorithms is a return to relatively small key
sizes (a 256-bit <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> key providing the same strength as a 3072-bit <span class="caps">RSA</span> key).</p>
<p>According to this <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> paper, this won’t be the case anymore with
quantum-resistant algorithms as:</p>
<blockquote>
<p>The key sizes for these algorithms will be much larger than those used
in current algorithms.</p>
</blockquote>
<p>Because of this, the <abbr title="National Security Agency"><span class="caps">NSA</span></abbr> also calls on interested parties to measure potential
side effects beforehand:</p>
<blockquote>
<p>Work will be required to gauge the effects of these larger key sizes on
standard protocols as well.
<abbr title="National Security Agency"><span class="caps">NSA</span></abbr> encourages those interested to engage with standards organizations
working in this area and to analyze the effects of adopting quantum
resistant algorithms in standard protocols.</p>
</blockquote>
</div>
<h3 id="non-standard-key-lengths-and-algorithms"><a class="toclink" href="#non-standard-key-lengths-and-algorithms">Non-standard key lengths and algorithms</a></h3>
<p>From time to time I encounter people advocating the use of
non-standard algorithms or of standard algorithms used in non-standard or
unusual ways:</p>
<ul>
<li>New algorithms which didn’t go through the same amount of scrutiny as the
standardized ones.</li>
<li>Non-standard algorithm combinations or usage.</li>
<li>Uncommon key sizes.</li>
</ul>
<p>Cryptography is a very complex and sensitive matter; it should go without
saying that none of these practices should be considered in the realm of any real
security scheme.</p>
<ul>
<li>
<p><a href="https://www.schneier.com/blog/archives/2011/04/schneiers_law.html" rel="external" title="Schneier's Law (Schneier)">Schneier’s law</a> says that you should not roll your own crypto
(this is discussed more in depth <a href="https://security.stackexchange.com/q/18197/32746" rel="external" title="Why shouldn't we roll our own? (StackExchange)">here</a>); this also applies to
choosing obscure algorithms which weren’t vetted by the cryptographers’
community just because you somehow made a wrong association between
<em>less known</em> and <em>more secure</em>.</p>
</li>
<li>
<p>Cryptographic algorithms are designed to be used in a certain way, and they
will deliver the highest security when they are used exactly that way.
As soon as you start to deviate from it, even “just a little”, you
must assume that you <em>reduce</em> the resulting security.</p>
<p>A perfect example is hashing algorithms: I regularly see misinformed
people re-hashing something several times “to increase security”, while
for several reasons hashing a hash will in fact decrease the resulting security.</p>
</li>
<li>
<p>Even if an algorithm may theoretically be designed to work with an arbitrary
key size, once the community has agreed on a common set of sizes it is usually
unwise to depart from them.</p>
<p>Since software and devices are tested with those sizes, using uncommon key
sizes puts you outside the usual test cases and may trigger unexpected behaviors.
In the best case, this will be an error message.
In the worst case, this will be a weakness affecting the resulting security.</p>
<p>As with the first bullet, such practice comes from the frequent misconception
that what is uncommon is more secure.
Advocates of such measures usually explain that, assuming a state actor
is able to break <span class="caps">RSA</span>-4096, it would require specific optimizations
which won’t work against, say, a 3456-bit key, which would require specific
development to be broken.</p>
<p>To leave the realm of assumptions and come back to Earth: I’ve never
encountered any report stating that a 120-bit key is harder to break than
a 128-bit one.
So if an attacker is able to break <span class="caps">RSA</span> keys up to 4096 bits, then a 3456-bit
key will be broken too.</p>
<p>Uncommon key sizes expose you to software bugs and interoperability issues
without any real security gain.</p>
</li>
</ul>
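<p>The hashing point above can be observed directly on a toy-sized hash: <span class="caps">SHA</span>-256 truncated to 16 bits, a deliberately tiny domain chosen so the whole input space can be enumerated (real hashes behave the same way, just on an intractably large scale):</p>

```python
import hashlib

def tiny_hash(n):
    """SHA-256 truncated to 16 bits: a toy hash over a 65536-value domain."""
    digest = hashlib.sha256(n.to_bytes(2, "big")).digest()
    return int.from_bytes(digest[:2], "big")

# Start from every possible 16-bit value, then hash repeatedly: on each
# round colliding inputs merge, so the set of reachable values shrinks,
# and with it the effective entropy of the final digest.
values = set(range(2 ** 16))
for round_number in range(1, 4):
    values = {tiny_hash(v) for v in values}
    print(f"after {round_number} hash round(s): {len(values)} distinct values")
```

<p>Each extra round strictly reduces (and never increases) the number of reachable digests, which is one concrete reason why naive re-hashing loses security rather than gaining it.</p>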
<div class="footnote">
<hr/>
<ol>
<li id="fn-stuck">
<p>The use of <abbr title="Elliptic Curve Cryptography"><span class="caps">ECC</span></abbr> in <span class="caps">PGP</span> tools has been standardized by the <a href="https://tools.ietf.org/html/rfc6637" rel="external" title="RFC 6637: Elliptic Curve Cryptography (ECC) in OpenPGP (IETF)"><span class="caps">RFC</span> 6637</a>
in 2012.
GnuPG added it in <a href="https://wiki.gnupg.org/ECC" rel="external" title="GnuPG ECC support (GnuPG wiki)"><span class="caps">GPG</span> 2.1</a>, released in 2014, and it became stable in
<a href="https://lists.gnupg.org/pipermail/gnupg-announce/2017q3/000413.html" rel="external" title="Announce: GnuPG 2.2.0 released (GnuPG mailing list)"><span class="caps">GPG</span> 2.2</a> in August 2017. <a class="footnote-backref" href="#fnref-stuck" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id="fn-capacity">
<p>I saw several websites trying to estimate the storage space
required or available in such a facility with regard to its cost, often ending up
with astronomical numbers.
The fact is that you don’t need this facility to be able to store 100 years’
worth of communications right from the beginning, that would be plain dumb.
You need to be able to store one or just a few years’ worth, and simply
ensure that the storage capacity grows at a sufficient pace compared to the
quantity of incoming intercepted data, either by adding new storage units
over the years or replacing existing ones to take advantage of the constant
technological evolution.
You don’t need storage <em>capacity</em>, you need storage <em>scalability</em>, and the
<abbr title="National Security Agency"><span class="caps">NSA</span></abbr> itself <a href="https://nsa.gov1.info/utah-data-center/" rel="external" title="Utah data center (Domestic Surveillance Directorate)">doesn’t say anything different</a>:</p>
<blockquote>
<p><span class="dquo">“</span>The Utah Data Center was built with future expansion in mind and the ultimate capacity will definitely be “alottabytes”!</p>
</blockquote>
<p><a class="footnote-backref" href="#fnref-capacity" title="Jump back to footnote 2 in the text">↩</a></p>
</li>
<li id="fn-conflict">
<p>Of course these two roles don’t go without a certain amount of
conflict of interest, as shown by the <a href="https://en.wikipedia.org/wiki/Dual_EC_DRBG#Security" rel="external" title="Dual_EC_DRBG: Security"><abbr title="Dual Elliptic Curve Deterministic Random Bit Generator">Dual_EC_DRBG</abbr> case</a>. <a class="footnote-backref" href="#fnref-conflict" title="Jump back to footnote 3 in the text">↩</a></p>
</li>
</ol>
</div>How to (more) safely use the Firefox password manager2017-11-03T00:00:00+01:002018-02-20T00:00:00+01:00WhiteWinterWolftag:www.whitewinterwolf.com,2017-11-03:/posts/2017/11/03/how-to-more-safely-use-the-firefox-password-manager/<p>Security professionals often recommend to use a dedicated password
manager software, such as <a href="https://keepass.info/" rel="external" title="KeePass project homepage">KeePass</a><sup id="fnref-KeePass"><a class="footnote-ref" href="#fn-KeePass">1</a></sup>, which makes it easy to
prevent password reuse while ensuring safe storage of the passwords.</p>
<p>Did I just say… <em>“easily”</em>?
For the general public, this “easiness” may not be so obvious.
The mere fact of having to install, learn and use a new piece of software just to store
the password which gives access to the website which, in turn, lets you
do your things: end-users often consider this overkill…</p>
<p>And they may be right.</p>
<p>Their usual reaction is therefore either to rely on a single
<em>“well thought-out and complex password”</em> to secure their whole digital life, or to
build an over-engineered mental algorithm to create unique (but easily
guessable, even when they don’t think so) passwords, losing data because of a
forgotten password or being stuck because they are currently at the office
while their password is written on a paper stored in a drawer at home.</p>
<p>There is however a good alternative which, after a few easy steps,
can provide a well-balanced solution between security and usability for casual users.</p>
<h3 id="password-managers-limitations"><a class="toclink" href="#password-managers-limitations">Password managers limitations</a></h3>
<p>Power users and technical people use their computers very differently from
what I may call the “common folk”.
They use various software: <span class="caps">SSH</span> here, remote desktop there, file transfer
software and command lines, thick clients, administrative interfaces,
encrypted files and data: all this composes their daily bread.</p>
<p>In this case, adding one more piece of software centralizing credential
management in one safe place has indeed only advantages, both in terms of
security and usability.
But this is not the way the wide public uses their computers, nor the way they
use the interweb.</p>
<p>Moreover, the practical security gain added by using a standalone password
manager may not be the one expected.</p>
<p>In fact, if the password manager is used as is, using an encrypted database
stored in the user’s files and relying on a password to decrypt it, the
practical security gain compared to using the browser’s built-in password
manager will be very marginal:</p>
<ul>
<li>
<p>To have access to your browser’s password manager, an attacker needs to
have access to the user’s files.
He will therefore have access to the standalone password manager database
as well.</p>
</li>
<li>
<p>Having access to the user’s files, the attacker will most likely also have
the possibility to install a key logger or any other malicious software.
This will work mostly the same way no matter where and how you
store your passwords.</p>
</li>
</ul>
<p>Note that I’m not implying that standalone password managers are useless, far
be it from me!</p>
<p>Standalone password managers allow:</p>
<ul>
<li>
<p>To centralize the credentials required by various software, when your
browser is not the only program asking you for a password on your machine.
In this case it is more a convenience tool than a real security tool.</p>
</li>
<li>
<p>To use other, more advanced authentication forms than a password to
unlock the passwords database, potentially combining several
complementary authentication systems together.</p>
</li>
<li>
<p>To isolate the software storing the password from the software using
it; we will come back to this when dealing with
<a href="#applications-sandboxing">sandboxing</a> at the end of this article.</p>
</li>
</ul>
<p>I may forget some other specific use-cases, but the idea remains that using
a password manager just for the sake of using one won’t make you
more secure.
You might as well use your browser’s native password manager and gain in usability.</p>
<p>While security might sometimes impact usability, impacting usability doesn’t
necessarily improve security.
The easiest solutions may also be the safest ones.</p>
<h3 id="what-about-all-in-one-password-manager-add-ons"><a class="toclink" href="#what-about-all-in-one-password-manager-add-ons">What about all-in-one password manager add-ons?</a></h3>
<p>Several add-ons propose all-in-one solutions for password storage in Firefox.</p>
<p>Often, these add-ons either reinvent the wheel by storing the passwords in a local
file, or, more often, send your credentials to machines controlled by the
plugin author (so-called “cloud-based” password managers).</p>
<p>Developing and maintaining an add-on costs money and effort, developing and
maintaining a cloud infrastructure costs even more.</p>
<p>In these cases, you have to ask yourself:</p>
<ul>
<li>
<p>Who is/are the author(s) of the add-on?</p>
</li>
<li>
<p>What is their benefit?
Especially if a company is involved, there <em>must</em> be a benefit somewhere.
In this case remember that if you did not pay for something, it means that
<em>you</em> are the product, and that the company somehow manages to make
money from the data you provide it.</p>
</li>
<li>
<p>What happens if the project is deemed not profitable and abandoned: will
you lose all your passwords?
Such situations also happen when, on the contrary, the project becomes too
profitable and the company gets bought by another one which decides to close
the current service.</p>
</li>
<li>
<p>What happens if the author unilaterally decides from one day to the next to
dramatically change the plugin’s behavior, adding useless and complex
features, making it invasive, in other words turning it into what you may
perceive as crap?</p>
</li>
<li>
<p>Can you trust the author and his add-on?
Is it widely used?
Has it been checked for vulnerabilities?
Did it encounter security issues in the past, and if so how were they handled?</p>
</li>
</ul>
<p>In this article we will rely on vanilla Firefox for the main tasks, with a
few small add-ons adding simple features, so it shouldn’t be hard to find a
replacement if needed.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Some add-ons offer a storage-less password solution.</p>
<p>They rely on the website <span class="caps">URL</span> and a master key provided by the user to
generate a hash acting as a unique password.
The security of this system is comparable to sharing your whole password
database with each website.</p>
<p>Indeed, if an attacker manages to get his hands on one of your hashes, he
can try to brute-force it to obtain your master password (given that
most of these add-ons never went through the hands of experienced
cryptographers, chances are that such a brute-force attack will be very
quick).
Once the attacker has the master password, the game is over, as he has virtually
obtained all your passwords.</p>
<p>As a safety measure, passwords for different websites must therefore be
not only different, but also unrelated, to avoid such a weakness.
There is no other way to achieve this than generating a random password for
each website and storing the list of generated passwords somewhere.</p>
</div>
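<p>A minimal sketch makes the weakness concrete (the derivation function below is hypothetical, modeled on what such add-ons typically do; with a memorable master password, a dictionary attack on a single leaked hash is nearly instantaneous):</p>

```python
import hashlib

def derive_site_password(master, site_url):
    """Hypothetical storage-less scheme: site password = H(master || URL)."""
    return hashlib.sha256((master + site_url).encode()).hexdigest()[:12]

# An attacker who obtains one derived password and knows the scheme can
# try candidate master passwords until one reproduces the leaked value.
leaked = derive_site_password("dragon", "https://example.com/")
wordlist = ["123456", "password", "qwerty", "dragon", "letmein"]
recovered = next(w for w in wordlist
                 if derive_site_password(w, "https://example.com/") == leaked)
print(recovered)  # "dragon": one leak now exposes every other site's password
```

<p>A real attack would simply use a larger wordlist and rule-based mutations, which modern hardware runs through at billions of hashes per second.</p>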
<h3 id="choosing-a-new-password"><a class="toclink" href="#choosing-a-new-password">Choosing a new password</a></h3>
<p>While Firefox provides a password storage functionality allowing a user to
store a unique password for each account and service, it doesn’t help the
user to choose a good password in any way, resulting in users often choosing
weaker passwords due to various biases.</p>
<p>Moreover, from a functional point-of-view, there is just no point in manually
choosing a password that you won’t need to remember at all.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Humans are bad at creating random strings.</p>
<p>Factors such as key location on the keyboard or character frequency
in the user’s language often influence users toward the same subset
of characters.
Good password cracking software relies on such biases when brute-forcing a password.</p>
<p>It is expected that letting a piece of software choose a password for you may
produce an odd feeling.
It is nevertheless the most secure way to operate.</p>
<p>Encryption keys used to secure higher-security environments are generated
by software.
There is no reason why software should not also be used to generate more
basic credentials such as passwords.</p>
<p>The only requirement is to rely on widely used software, to limit the
chances of a bug in the software resulting in another bias.
For this reason you want to avoid entrusting your security to software found
on random forums and used by only two or three people.</p>
</div>
<p>The Firefox add-on page offers several password generation tools such as
<a href="https://addons.mozilla.org/en-US/firefox/addon/secure-password-generator/" rel="external" title="Secure Password Generator (Firefox Add-ons)">Secure Password Generator</a>.
Such a tool allows you to easily generate new and safe passwords in just a click,
without having to think about it.</p>
<p><span class="lb-small"><a href="#generator.png" id="generator.png-thumb" title="Click to enlarge"><img alt="'Generate password' option" src="https://www.whitewinterwolf.com/posts/2017/11/03/how-to-more-safely-use-the-firefox-password-manager/generator.png"/></a></span></p>
<p>If you need to access previously generated passwords, you can find them in
the <em>Security</em> section of Firefox’s <em>Options</em> screen.</p>
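<p>For the technically inclined, the same service can be rendered by a few lines of Python using the standard <code>secrets</code> module, which was designed for exactly this kind of task (the 20-character length is an arbitrary example):</p>

```python
import secrets
import string

# Alphabet: letters, digits and punctuation, drawn with the OS's
# cryptographically secure random number generator.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Generate a random password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different 20-character password on each call
```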
<h3 id="filling-login-forms"><a class="toclink" href="#filling-login-forms">Filling login forms</a></h3>
<p><em>This</em> is a thing I just don’t understand in Firefox’s password manager
implementation: why in the hell does it want to automatically pre-fill
authentication forms?</p>
<p>The so-called “sweep attack” takes advantage of this behavior to steal users’
passwords by automatically, successively and quickly simulating the login pages
of various websites, letting the browser automatically fill the fields and then
retrieving the values filled in by the benevolent browser without the user even noticing.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This attack usually relies on malicious WiFi access points (sometimes
automatically selected as “known access points” without requiring user
consent), but is not limited to them, as <span class="caps">ISP</span>-provided Internet routers are
now more and more targeted by malicious software and can be used to achieve
similar attacks.</p>
<p>For more information, see <a href="https://blog.acolyer.org/2017/02/06/password-managers-attacks-and-defenses/" rel="external" title="Password managers: attacks and defenses (The Morning Paper blog)">here</a> and <a href="https://crypto.stanford.edu/~dabo/papers/pwdmgrBrowser.pdf" rel="external" title="Password Managers: Attacks and Defenses (Stanford University)">there</a>.</p>
</div>
<p>The proprietary browser Opera, which often comes with innovative ideas for web
browsing, requires a manual intervention to fill authentication forms.
This is the only sane thing to do, and it effectively prevents such attacks.</p>
<p>There are some modules which attempt to port this functionality to Firefox,
but as long as you just need to protect yourself against this attack without
any additional bells and whistles, Firefox already natively proposes a good
solution.
This solution, however, is not enabled by default so as not to
<em>
“annoy and alienate all the users who expect autofill to work as it has since
Firefox 1.0”
</em>
(<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=360493#c236" rel="external" title="Brendan Eich comment on Bug 360493 / CVE-2006-6077 (Mozilla bug tracker)">source</a>), and Firefox went as far as to hide this
functionality from the standard options screen (don’t ask me why, such decisions
dumbing users down just completely boggle my mind), even though websites such
as <a href="http://kb.mozillazine.org/Signon.autofillForms" rel="external" title="Signon.autofillForms setting (MozillaZine)">MozillaZine</a> recommend taking advantage of it.</p>
<p>To disable the automatic filling of authentication forms:</p>
<ol>
<li>In Firefox <span class="caps">URL</span> bar, type <code>about:config</code>.</li>
<li>You should get a warning message; click to continue.</li>
<li>In the search bar, type <code>signon.autofillForms</code>.</li>
<li>The default value for this parameter is <code>true</code>, double-click on it to
change its value to <code>false</code>.</li>
</ol>
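<p>Alternatively, the same preference can be set from a <em>user.js</em> file placed in your Firefox profile directory (a standard Firefox mechanism, convenient to reapply your hardening settings to a fresh profile):</p>
<div class="codehilite"><pre>// user.js: preferences applied at each Firefox startup.
// Require a manual click before any login form gets filled:
user_pref("signon.autofillForms", false);
</pre></div>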
<p>From now on, Firefox won’t automatically fill any login form.
Simply click on the login field, and you will get a dropdown menu allowing you
to select the account to use:</p>
<p><span class="lb-small"><a href="#signon.png" id="signon.png-thumb" title="Click to enlarge"><img alt="Signon dropdown menu" src="https://www.whitewinterwolf.com/posts/2017/11/03/how-to-more-safely-use-the-firefox-password-manager/signon.png"/></a></span></p>
<h4 id="keeping-the-password-database-available"><a class="toclink" href="#keeping-the-password-database-available">Keeping the password database available</a></h4>
<p>When using a database to store all your passwords, you must take specific
measures to ensure its availability:</p>
<ul>
<li>
<p>Of course when you’re on the go, so you can access your accounts from home,
work, mobile devices, etc.</p>
</li>
<li>
<p>But also in case of a disaster: you don’t want to lose your passwords the
day your hard disk <em>does</em> break.</p>
</li>
</ul>
<p>Firefox natively provides <a href="https://www.mozilla.org/en-US/firefox/features/sync/" rel="external" title="Browse uninterrupted with Firefox Sync (Mozilla)">Firefox Sync</a>, a remote repository used to
synchronize the Firefox data, including its password database.
This service has been built with privacy in mind:
the data being encrypted client-side, Mozilla’s teams cannot access its content
(Mozilla’s interest here being to get more Firefox users).</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>As with any cloud storage service, Firefox Sync requires some degree of
trust between the service provider and you.</p>
<p>As such, Firefox Sync requires you to trust Firefox developers.
This, however, should already be the case if you are using
their browser.
Moreover, Mozilla, the foundation behind the Firefox browser, is usually
<a href="https://www.eff.org/deeplinks/2014/04/eff-statement-mozilla-and-importance-open-internet" rel="external" title="EFF Statement on Mozilla and the Importance of the Open Internet">well regarded</a> by freedom advocates such as the <abbr title="Electronic Frontier Foundation"><span class="caps">EFF</span></abbr> and
worked with them on several freedom and privacy-related projects.</p>
<p>Additionally, for advanced use-cases, Firefox Sync is an open-source technology
and it is possible for you to set up your <a href="https://github.com/mozilla-services/syncserver" rel="external" title="Mozilla Sync Server (GitHub)">own Firefox Sync</a>
server, thus preventing even your encrypted data from leaving your
control at any point.</p>
</div>
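<p>For reference, according to the Mozilla Sync Server documentation, pointing Firefox at a self-hosted server boils down to a single preference (the <span class="caps">URL</span> below is purely illustrative and must match your own deployment):</p>
<div class="codehilite"><pre>// about:config / user.js: use a self-hosted Sync server.
user_pref("identity.sync.tokenserver.uri",
          "https://sync.example.org/token/1.0/sync/1.5");
</pre></div>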
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>If you plan to sync personal and professional devices, pay attention to
what you are syncing.</p>
<p>In particular, if <abbr title="Not Safe For Work"><span class="caps">NSFW</span></abbr> websites appear in your personal browsing history
and you are syncing the browsing history with professional devices, these
entries will also appear in the browsing history of the professional
devices and your employer may hold you liable for that.</p>
<p>Firefox Sync allows you to select what data you want to sync.
You can perfectly well use it to sync only the passwords and nothing else.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Using Firefox Sync does not exempt you from doing proper backups.</p>
<p>Better be safe than sorry!</p>
</div>
<h3 id="local-storage"><a class="toclink" href="#local-storage">Local storage</a></h3>
<p>By default local storage is not protected: passwords are stored locally as if
they were written in a plain text file.</p>
<p>This, however, does not necessarily mean that you need more, nor that this makes
you insecure.
A security system is only valid as long as it provides an effective protection
against a given threat.
There is no point in encrypting a file if an attacker having access to the file
would also have access to your password.</p>
<p>Encryption can be enabled at several locations, and we will see that encryption
is neither the only nor necessarily the best way to add an
additional layer of protection around your passwords.</p>
<h4 id="full-disk-encryption"><a class="toclink" href="#full-disk-encryption">Full disk encryption</a></h4>
<p>I’m a proponent of full disk encryption and prefer to enable it whenever possible.</p>
<p>Full disk encryption encrypts most of the hard disk content (encryption at the
partition level).
Usually only the bare minimum remains in clear form, enough to prompt the user
for the boot password and initiate the booting sequence.</p>
<p>This protects not only your password database, but all your data from various threats:</p>
<ul>
<li>
<p>When a device is stolen you can be confident that its content will not be readable.</p>
</li>
<li>
<p>When disposing of an old or broken hard disk you can be confident that no
one will ever be able to recover any of its content.</p>
</li>
<li>
<p>When handing your device to a stranger (at a repair-shop for instance) you
can be confident that your data will remain protected from prying eyes.</p>
</li>
</ul>
<p>Full disk encryption however only protects the data of turned off devices.
It does nothing to protect your data while the device is running.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>For mobile devices, some applications exist to turn off or reboot the
device after a certain number of unsuccessful attempts to unlock it.</p>
<p>Such applications make it possible to automatically retreat behind the stronger full
disk encryption protection when there is a suspicion that the running
device has been stolen.</p>
</div>
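<p>On Linux, full disk encryption is commonly implemented with <span class="caps">LUKS</span> through <code>cryptsetup</code>. As a minimal sketch (the device name is illustrative, and the first command irrevocably erases the partition’s current content):</p>
<div class="codehilite"><pre># Encrypt the partition (DESTROYS its current content!):
cryptsetup luksFormat /dev/sdb2
# Unlock it, creating the /dev/mapper/cryptdata device:
cryptsetup open /dev/sdb2 cryptdata
# Create a file system and mount it as usual:
mkfs.ext4 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt
</pre></div>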
<h4 id="file-level-encryption"><a class="toclink" href="#file-level-encryption">File level encryption</a></h4>
<p>As an alternative to full disk encryption, some operating systems offer file
level encryption.</p>
<p>In this case, the encryption protects only the content of the files below the
user’s home directory.
File decryption occurs transparently, using the password the user enters to open
a session.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Depending on the system used, only file content may be protected.
File names, sizes and directory structure may remain accessible in clear form.</p>
</div>
<p>This protection is weaker than full disk encryption, and is mainly used to
protect files from prying eyes in multi-user environments.</p>
<p>This feature protects the files as long as the user has no open session on
the machine.
As soon as the user opens a session, any process can potentially access any of
the user’s files.</p>
<h4 id="firefox-password-database-encryption"><a class="toclink" href="#firefox-password-database-encryption">Firefox password database encryption</a></h4>
<p>Firefox allows you to use a master password to add an additional protection layer
to your web credentials.</p>
<p>The protection added by this feature, even when using reasonable passwords, is
comparable to a small padlock: enough to keep your passwords out of prying eyes,
but one that an attacker can usually defeat in a very short time.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Firefox master password acts as a local storage password.</p>
<p>If combined with Sync, it will be used to wrap both the synchronized
website credentials and the Sync credentials themselves in a container encrypted
using the master password<sup id="fnref-sync_mp"><a class="footnote-ref" href="#fn-sync_mp">2</a></sup>.
The master password itself will not be synced among devices but remains local.</p>
<p>Different devices can therefore use Sync using different master passwords
or master passwords can be used only on certain devices and not others.</p>
<p>For instance, you can use Sync without a master password on your home
computer, and Sync with a master password on your company computer,
accessible by network administrators, to somewhat “mark” your password
database as private data.</p>
</div>
<p>The encryption implemented by Firefox is far from being the worst among common
browsers<sup id="fnref-ff_encryption"><a class="footnote-ref" href="#fn-ff_encryption">3</a></sup> and still remains hard to brute-force when long
passphrases are being used.
There is an <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=973759" rel="external" title="Bug 973759: Master password should be protected with stronger cryptography (Mozilla bug tracker)">open ticket</a> on the Mozilla bug tracking system asking
for an improvement of the encryption used, which would provide good resistance
against brute-forcing even for common-size passwords.</p>
<p>While not totally inactive, this ticket doesn’t show much progress, most probably
because, should an attacker gain access to your Firefox files, there are dozens of
ways he could use to get access to your passwords (this is the reason why
Google always <a href="https://news.ycombinator.com/item?id=6166731" rel="external" title="Comment on Chrome's insane password security strategy (Hackers News)">refused</a> to implement any equivalent functionality in
Chrome).
So, even if the strongest algorithms in the world were used, from a practical
perspective the overall security of the Firefox password manager would still
be equivalent to a small padlock.</p>
<p>Nevertheless, small padlocks still have their uses, as does the Firefox password manager.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Firefox password manager does not encrypt the cookies.</p>
<p>If you’re planning to use the Firefox master password feature, you will most
likely want to set Firefox to delete all cookies upon exit, to ensure that no
authenticated session outlives the protection (in <em>Options</em> > <em>Privacy</em>, under the <em>History</em> section select
<em>Use custom settings for history</em>, then set <em>Keep until</em> to
<em>I close Firefox</em>).</p>
</div>
<h4 id="applications-sandboxing"><a class="toclink" href="#applications-sandboxing">Applications sandboxing</a></h4>
<p>If you really want to protect Firefox passwords from being accessed by other
processes, instead of encrypting some Firefox files with a password that an
attacker could capture anyway, you must simply prevent these other processes
from accessing any Firefox files.</p>
<p>This is achieved through software sandboxing, which allows isolating
potentially untrusted applications from the rest of your system.</p>
<p>Common sandboxing software include:</p>
<ul>
<li>For Windows environments: <a href="https://www.sandboxie.com/" rel="external" title="Sandboxie product home page">Sandboxie</a>.</li>
<li>For Linux environments: <a href="https://firejail.wordpress.com/" rel="external" title="Firejail project home page">Firejail</a><sup id="fnref-Firejail"><a class="footnote-ref" href="#fn-Firejail">4</a></sup>.</li>
<li>Specific Linux distributions also exist such as <a href="https://www.qubes-os.org/" rel="external" title="Qubes OS project home page">Qubes <span class="caps">OS</span></a> and
<a href="https://subgraph.com/sgos/index.en.html" rel="external" title="Subgraph OS project home page">Subgraph <span class="caps">OS</span></a> which provide software isolation services at their core.</li>
</ul>
<p>Simply running Firefox in a sandbox won’t help to protect your Firefox
password manager database!</p>
<p>The idea here is instead to run the other applications
(in particular every other application communicating over the network) in
sandboxes to prevent <em>them</em> from accessing Firefox files (Firefox itself
can be sandboxed to prevent any malicious code running on a web page from
accessing your private documents or other applications’ files).</p>
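<p>With Firejail for instance, the isolation can be applied in both directions (the application names below are mere examples):</p>
<div class="codehilite"><pre># Deny another network-facing application access to the Firefox profile:
firejail --blacklist=~/.mozilla thunderbird
# Or sandbox Firefox itself away from your private documents:
firejail --blacklist=~/Documents firefox
</pre></div>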
<p>Moreover, if you find yourself needing such a product, chances are that
you will be better off using dedicated password manager software.
This will allow you to implement a stronger isolation between the password
management and the browsing functionality.</p>
<h3 id="summary"><a class="toclink" href="#summary">Summary</a></h3>
<p>Here is my recommended way to get a reasonably secure and easy to use password
storage using Firefox built-in password manager, suitable for casual users:</p>
<ul>
<li>
<p>Use <a href="https://addons.mozilla.org/en-US/firefox/addon/secure-password-generator/" rel="external" title="Secure Password Generator (Firefox Add-ons)">Secure Password Generator</a> to generate new passwords (when creating
an account or replacing an old password).</p>
</li>
<li>
<p>From the <code>about:config</code> page, set <code>signon.autofillForms</code> to <code>false</code> to
require human intervention to fill a login form.</p>
</li>
<li>
<p>Use <a href="https://www.mozilla.org/en-US/firefox/features/sync/" rel="external" title="Browse uninterrupted with Firefox Sync (Mozilla)">Firefox Sync</a> to share your passwords database between several devices.</p>
</li>
</ul>
<p>And from a more general perspective:</p>
<ul>
<li>
<p>Use full disk encryption whenever possible to prevent the content of your
computer and devices from falling into the wrong hands.</p>
</li>
<li>
<p>Back up your data so that the loss of your computer or devices does not also mean the loss of their content.</p>
</li>
</ul>
<div class="footnote">
<hr/>
<ol>
<li id="fn-KeePass">
<p>Don’t confuse the free, open-source and well known <em>KeePass</em> with
the unrelated, closed-source and paid <em>KeyPass</em> software. <a class="footnote-backref" href="#fnref-KeePass" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id="fn-sync_mp">
<p>With the advent of Sync 1.5, Mozilla developers seemingly initiated
a movement to push the master password feature away, putting the Sync
credentials out of its scope and making Sync and master password
incompatible.
This created a lot of reactions in the Firefox user community, mostly
tracked in <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268" rel="external" title="Bug 995268: Firefox Sync and Master Passwords are now mutually exclusive (Mozilla bug tracker)">bug #995268</a>.
This thread being quite long, the most relevant comments I noted were:
<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c37" rel="external" title="Richard Newman: If I had to guess, with no studies backing this up, I'd guess: (Firefox bug #995268)">#37</a>, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c38" rel="external" title="Michail Pappas: Combining Sync/MP and deletion of cookies at exit has been an extremely effective means to lock out unauthorized accesss to sensitive data. No biggies in configuring these for other non-tech folks as well. (Firefox bug #995268)">#38</a>, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c48" rel="external" title="Richard Newman: That decision was made because your FxA credentials are written in cleartext to disk, and thus an app that could have read your passwords (but for MP) can now read your FxA credentials and circumvent a MP by just fetching your passwords from the cloud. (Firefox bug #995268)">#48</a>, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c49" rel="external" title="Robert Kaiser: Those two facts together with the blunt help text telling people to just turn off MP will probably easily earn you a Big Brother Award, from what the organizers of that anti-privacy award were telling me on the recent conference (Firefox bug #995268)">#49</a>,
<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c67" rel="external" title="saintrory: As my work computer is a domain-based computer, my entire hard drive is open for anyone on the network with administrative privileges to browse, modify, copy from, etc. (Firefox bug #995268)">#67</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=995268#c58" rel="external" title="Mark Hammond: the fix will almost certainly involve storing the FxA credentials in the login manager, so would be as protected by the master-password as any other passwords are. (Firefox bug #995268)">#58</a>. <a class="footnote-backref" href="#fnref-sync_mp" title="Jump back to footnote 2 in the text">↩</a></p>
</li>
<li id="fn-ff_encryption">
<p><a href="http://raidersec.blogspot.fr/2013/06/how-browsers-store-your-passwords-and.html" rel="external" title="How Browsers Store Your Passwords (and Why You Shouldn't Let Them) (RaiderSec blog)">RaiderSec</a> published a comparative study on password
protection among common browsers, and even in its current shape Firefox
manages to keep its head up.
For the French readers, the <a href="https://boutique.ed-diamond.com/anciens-numeros/495-misc69.html" rel="external" title="Cryptanalyse du gestionnaire de mots de passe de Firefox (MISC 69)"><span class="caps">MISC</span> magazine</a> also published a very
insightful analysis on Firefox password manager encryption.
<br/>
Moreover there is an urban legend stating that enabling Firefox <span class="caps">FIPS</span>
mode allows benefiting from better encryption.
This is bullshit: at best this enforces a <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/FIPS_Mode_-_an_explanation" rel="external" title="FIPS Mode - an explanation (Mozilla web docs)">minimum strength</a>
for the user’s master password, but the encryption used remains the same.
In fact it seems that the <span class="caps">FIPS</span> mode has no concrete use-case in Firefox and
is on its way to being <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1337950" rel="external" title="Bug 1337950: Cannot enable FIPS in Firefox 53.0a2 (Mozilla bug tracker)">pushed away</a>. <a class="footnote-backref" href="#fnref-ff_encryption" title="Jump back to footnote 3 in the text">↩</a>
</li>
<li id="fn-Firejail">
<p>Firejail is named after network firewalls; it is neither a Firefox
plugin nor limited to Firefox! <a class="footnote-backref" href="#fnref-Firejail" title="Jump back to footnote 4 in the text">↩</a></p>
</li>
</ol>
</div>SELinux cheatsheet2017-09-08T00:00:00+02:002017-09-08T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2017-09-08:/posts/2017/09/08/selinux-cheatsheet/<p>This page is only designed as a memory-refresher.
SELinux may be a complex thing to get right; if you are not familiar with it yet,
I highly encourage you to read <a href="/posts/2017/09/06/selinux-system-administration-selinux-cookbook-sven-vermeulen/" title="SELinux System Administration & SELinux Cookbook (Sven Vermeulen)">Sven Vermeulen</a>’s books.</p>
<h3>SELinux state</h3>
<p>To detect whether SELinux is enabled or not:</p>
<ul>
<li>From a script, <code>selinuxenabled</code> doesn’t produce any output and its exit
code gives SELinux status.</li>
<li>From an interactive prompt, <code>sestatus</code> provides more information.</li>
</ul>
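<p>From a shell script this gives, as a sketch (a missing <code>selinuxenabled</code> binary is treated like a disabled SELinux):</p>
<div class="codehilite"><pre># selinuxenabled prints nothing: its exit code alone gives the status.
if selinuxenabled 2>/dev/null; then
    selinux_state=enabled
else
    selinux_state=disabled
fi
echo "SELinux: $selinux_state"
</pre></div>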
<p>SELinux main configuration file is <em>/etc/selinux/config</em>, it defines:</p>
<ul>
<li>
<p><code>SELINUX=</code>: SELinux state:</p>
<ul>
<li>
<p><code>enforcing</code>: Enabled; blocks unauthorized actions (policy violations).</p>
</li>
<li>
<p><code>permissive</code>: Enabled, but only logs unauthorized actions and does not
block them (useful for development and <span class="caps">HIDS</span> purposes).</p>
</li>
<li>
<p><code>disabled</code>: SELinux is completely disabled.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>If SELinux has been temporarily disabled (which is <em>not</em> recommended,
there are usually cleaner ways to proceed), a global relabel will be
required before re-enabling SELinux.</p>
<p><a href="#automatic-context-modification">More information</a>.</p>
</div>
</li>
</ul>
</li>
<li>
<p><code>SELINUXTYPE=</code>: The policy currently in use,
available policies depend …</p></li></ul><p>This page is only designed as a memory-refresher.
SELinux may be a complex thing to get right; if you are not familiar with it yet,
I highly encourage you to read <a href="/posts/2017/09/06/selinux-system-administration-selinux-cookbook-sven-vermeulen/" title="SELinux System Administration & SELinux Cookbook (Sven Vermeulen)">Sven Vermeulen</a>’s books.</p>
<h3 id="selinux-state"><a class="toclink" href="#selinux-state">SELinux state</a></h3>
<p>To detect whether SELinux is enabled or not:</p>
<ul>
<li>From a script, <code>selinuxenabled</code> doesn’t produce any output and its exit
code gives SELinux status.</li>
<li>From an interactive prompt, <code>sestatus</code> provides more information.</li>
</ul>
<p>SELinux main configuration file is <em>/etc/selinux/config</em>, it defines:</p>
<ul>
<li>
<p><code>SELINUX=</code>: SELinux state:</p>
<ul>
<li>
<p><code>enforcing</code>: Enabled; blocks unauthorized actions (policy violations).</p>
</li>
<li>
<p><code>permissive</code>: Enabled, but only logs unauthorized actions and does not
block them (useful for development and <span class="caps">HIDS</span> purposes).</p>
</li>
<li>
<p><code>disabled</code>: SELinux is completely disabled.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>If SELinux has been temporarily disabled (which is <em>not</em> recommended,
there are usually cleaner ways to proceed), a global relabel will be
required before re-enabling SELinux.</p>
<p><a href="#automatic-context-modification">More information</a>.</p>
</div>
</li>
</ul>
</li>
<li>
<p><code>SELINUXTYPE=</code>: The policy currently in use,
available policies depend on the distribution:</p>
<ul>
<li>
<p><code>targeted</code>: SELinux protection targets some system and daemon processes,
the end-user is usually not confined (<em>unconfined_t</em> domain).</p>
</li>
<li>
<p><code>strict</code> (Gentoo): Complete protection including end-user’s
processes (on Red Hat, <code>minimum</code> instead confines only a minimal,
selected set of processes).</p>
</li>
<li>
<p><code>mls</code>: <abbr title="Multi Category Security"><span class="caps">MCS</span></abbr>/<abbr title="Multi Level Security"><span class="caps">MLS</span></abbr> implementation (Gentoo also provides a <code>mcs</code> policy
offering a single level, as opposed to <code>mls</code> which is still
experimental on this distribution).</p>
</li>
</ul>
</li>
</ul>
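<p>A typical <em>/etc/selinux/config</em> therefore looks as follows (Red Hat-style defaults shown here):</p>
<div class="codehilite"><pre>SELINUX=enforcing
SELINUXTYPE=targeted
</pre></div>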
<h3 id="current-configuration-overview"><a class="toclink" href="#current-configuration-overview">Current configuration overview</a></h3>
<p>Several traditional commands have been modified to include the <code>-Z</code> option,
such as:</p>
<ul>
<li><code>id -Z</code>: Current user SELinux context.</li>
<li><code>ps -eZ</code>: SELinux context of currently running processes.</li>
<li><code>ls -lZ</code>: SELinux context of the files in the current directory.</li>
</ul>
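<p>All these commands display contexts in the <em>user:role:type:level</em> form, for instance (illustrative output from a Red Hat-style system):</p>
<div class="codehilite"><pre>$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
</pre></div>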
<p>Users list:</p>
<ul>
<li>Unix users: <code>getent passwd</code></li>
<li>SELinux users: <code>semanage user -l</code></li>
</ul>
<p>Relation between Unix and SELinux users: <code>semanage login -l</code>.
This list is also available in the file
<em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/seusers</em> (do not modify this file directly!).</p>
<p>Roles list: <code>seinfo -r</code></p>
<p>Category and security level list: <code>chcat -L</code> or in the file
<em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/setrans.conf</em>.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>In fact the list is hardcoded in SELinux: <em>s0-s15:c0.c1023</em>, but
<em>setrans.conf</em> lists aliases which are translated in the background by
the <code>mcstrans</code> daemon.</p>
</div>
<p>Relation between SELinux users and roles on one side and security levels and
categories on the other side: <code>semanage user -l</code></p>
<h3 id="modify-a-user-account-context"><a class="toclink" href="#modify-a-user-account-context">Modify a user account context</a></h3>
<ul>
<li>
<p>Relation between a Unix user and a SELinux user:</p>
<ul>
<li>Add: <code>semanage login -a -s staff_u larry</code></li>
<li>Modify: <code>semanage login -m -s staff_u larry</code></li>
<li>Delete: <code>semanage login -d larry</code></li>
</ul>
</li>
<li>
<p>Relation between a SELinux user, a role and a <abbr title="Multi Category Security"><span class="caps">MCS</span></abbr>/<abbr title="Multi Level Security"><span class="caps">MLS</span></abbr> level:</p>
<ul>
<li>Add: <code>semanage user -a -R 'staff_r sysadm_r' larry_u</code></li>
<li>Modify: <code>semanage user -m -R 'staff_r sysadm_r' larry_u</code></li>
<li>Delete: <code>semanage user -d larry_u</code></li>
</ul>
</li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When modifying user’s context, changes are taken into account only on the
next session opening.
It is therefore mandatory to ensure that the user has been fully
logged off:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4
5
6</pre></div></td><td class="code"><div class="codehilite"><pre><span class="c"># Lock the account:</span>
passwd -l larry
<span class="c"># Stop all user's processes:</span>
killall -KILL -u larry
<span class="c"># Unlock the account:</span>
passwd -u larry
</pre></div>
</td></tr></table></div>
</div>
<h3 id="execute-a-command-under-a-different-context"><a class="toclink" href="#execute-a-command-under-a-different-context">Execute a command under a different context</a></h3>
<ul>
<li><code>newrole [-r ROLE] [-t TYPE] [-l LEVEL]</code>:
Open a shell under a different role (the re-authentication process can
be avoided using <span class="caps">PAM</span> settings).</li>
<li><code>runcon [-u USER] [-r ROLE] [-t TYPE] [-l LEVEL] COMMAND</code>:
Launch a command under a different role or as a different user.</li>
<li><code>sudo [-r ROLE] [-t TYPE] COMMAND</code>:
Launch a command under a different role, the role can be automatically
selected from the <em>sudoers</em> file:<div class="codehilite"><pre>larry ALL=(ALL) TYPE=dbadm_t ROLE=dbadm_r ALL
</pre></div>
</li>
</ul>
<p>All three commands allow using a different role or type; only <code>runcon</code>
allows using a different SELinux user, and <code>sudo</code> does not allow using a
different <abbr title="Multi Category Security"><span class="caps">MCS</span></abbr>/<abbr title="Multi Level Security"><span class="caps">MLS</span></abbr> level.</p>
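<p>For instance (the role and type names depend on the policy in use, <em>dbadm</em> matching the <em>sudoers</em> example above):</p>
<div class="codehilite"><pre># Open a new shell under the sysadm_r role:
newrole -r sysadm_r
# Run a single command under the database administrator context:
runcon -r dbadm_r -t dbadm_t psql
</pre></div>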
<p>Other commands natively implement SELinux contexts handling and allow more
specific operations, such as <code>run_init</code> to start daemons on non-systemd systems.</p>
<p>Authentication services (such as <code>getty</code>, <code>sshd</code>, and <code>xdm</code>) can rely on <span class="caps">PAM</span>
to handle SELinux context switching (<code>pam_selinux</code>(8) module).
They therefore do not need to be modified.</p>
<p>Application-based context transition can be configured in these files:</p>
<ul>
<li><em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/contexts/default_contexts</em>:
Default transition rules.</li>
<li><em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/contexts/users/{target-selinux-user}</em>:
Contains rules overwriting the default ones.</li>
</ul>
<p>The first column matches the source context, then the first of the following
contexts allowed for the current user is selected.</p>
<h3 id="file-context"><a class="toclink" href="#file-context">File context</a></h3>
<h4 id="current-context"><a class="toclink" href="#current-context">Current context</a></h4>
<p>The context is usually checked using <code>ls -lZ</code> or <code>stat</code>.</p>
<p>It is stored in the extended attribute <em>security.selinux</em>, as can be checked
using the command <code>getfattr -m . -d &lt;path&gt;</code>.
The attributes belonging to the <em>security</em> namespace are not directly editable
by the file owner: the <em>CAP_SYS_ADMIN</em> capability is required to change them.</p>
<p>Two hardlinks cannot use two different contexts.
If they are used for a similar purpose, a way must be found to make them share the
same context.
Otherwise, a copy (<code>cp</code>) must be used instead of a link.</p>
<p>If the file system does not support extended attributes, or if no extended
attribute or an invalid one is defined:</p>
<ul>
<li>The context to use can be passed through a <code>mount</code>(8) option
(for instance <code>context="system_u:object_r:removable_t"</code>).</li>
<li>Otherwise SELinux falls back on a default context (<code>file_t</code> or
<code>unlabeled_t</code>) which is inaccessible from most domains.</li>
</ul>
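<p>For example, to mount a removable drive with an explicit context (the device and mount point are illustrative):</p>
<div class="codehilite"><pre>mount -o context="system_u:object_r:removable_t" /dev/sdb1 /mnt/usb
</pre></div>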
<h4 id="modify-files-context"><a class="toclink" href="#modify-files-context">Modify file’s context</a></h4>
<h5 id="automatic-context-modification"><a class="toclink" href="#automatic-context-modification">Automatic context modification</a></h5>
<ul>
<li>
<p><code>restorecon</code> is used to:</p>
<ul>
<li>Restore the context of a file or a directory tree.</li>
<li>Find files with an unexpected context.</li>
</ul>
</li>
<li>
<p><code>fixfiles</code> (Red Hat) and <code>rlpkg</code> (Gentoo) are used to:</p>
<ul>
<li>Restore the context of all the files provided by a given package.</li>
<li>Restore the context of all mounted file systems.</li>
<li>Check files context.</li>
<li>Schedule a global relabel for the next system restart.</li>
</ul>
</li>
<li>
<p>Creating the <em>/.autorelabel</em> file (<code>touch /.autorelabel</code>) then restarting the
system is the recommended way (see <code>selinux</code>(8)) to relabel the whole
file system.
This step is necessary when re-enabling a previously disabled SELinux.</p>
</li>
</ul>
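<p>Typical invocations look as follows (the paths are illustrative):</p>
<div class="codehilite"><pre># Restore the expected context of a directory tree, verbosely:
restorecon -R -v /srv/www
# Only report files whose context differs, without changing anything:
restorecon -R -n -v /srv/www
# Red Hat: schedule a full relabel for the next system restart:
fixfiles onboot
</pre></div>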
<p>These tools rely on regular expressions stored in the files
<em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/contexts/file_contexts[.*]</em>.</p>
<p>Most of these rules are also viewable using <code>semanage fcontext -l</code>, except:</p>
<ul>
<li>Rules affecting home directories files.</li>
<li>Paths explicitly set to not be automatically relabeled.</li>
</ul>
<p>These rules can match:</p>
<ul>
<li>Any file type (default behavior).</li>
<li>Directories (<code>-d</code>).</li>
<li>Socket files (<code>-s</code>).</li>
<li>Named pipes (<code>-p</code>).</li>
<li>Block devices (<code>-b</code>).</li>
<li>Character devices (<code>-c</code>).</li>
<li>Symbolic links (<code>-l</code>).</li>
</ul>
<p>When several rules match the same file:</p>
<ul>
<li>If a rule defined by the user (using <code>semanage</code> and stored in the file
<em>file_contexts.local</em>) matches, the last defined one will be used.</li>
<li>Otherwise, the <em>“most specific”</em> system rule will be used.
The <em>“specificity”</em> of a rule is mainly measured by the absence of a regular
expression pattern, or by the largest number of fixed characters before the pattern.</li>
</ul>
<h5 id="manual-context-modification"><a class="toclink" href="#manual-context-modification">Manual context modification</a></h5>
<p>The ability to manually relabel a file is restricted by the <em>relabelfrom</em> and
<em>relabelto</em> privileges.</p>
<p>Use <code>chcon</code> to test the impact of a context change:</p>
<ul>
<li>
<p>Use the <code>-t</code> option to pass a new context explicitly:</p>
<div class="codehilite"><pre>chcon -R -t httpd_sys_content_t /srv/www
</pre></div>
</li>
<li>
<p>Use the <code>--reference</code> option to apply the same context as a reference file:</p>
<div class="codehilite"><pre>chcon --reference /var/www/index.html /srv/www/index.html
</pre></div>
</li>
</ul>
<p><code>chcat</code> is a wrapper around <code>chcon</code> which allows adding and removing
categories without having to provide the resulting category list explicitly:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4</pre></div></td><td class="code"><div class="codehilite"><pre><span class="c"># Add a category:</span>
chcat -- +Customer2 index.html
<span class="c"># Remove a category:</span>
chcat -- -Customer1 index.html
</pre></div>
</td></tr></table></div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Don’t forget to use <code>--</code> to mark the end of the options, especially when
removing a category, otherwise your command-line parameters may not be
parsed correctly.</p>
</div>
<p><code>setfiles</code> offers the same kind of features, but is older and mainly used
in the background by other, higher-level commands.</p>
<p>Manual modifications of a file’s context will be lost on global relabel
operations, unless the destination context is a <em>“customizable type”</em> listed in the
file <em>/etc/selinux/{<span class="caps">SELINUXTYPE</span>}/context/customizables_types</em> (this file can
be edited manually but it may be overwritten when updating the system…).
The <code>restorecon</code> and <code>fixfiles</code> commands will relabel such files only when
the <code>-F</code> flag is used.</p>
<p>The recommended way to make such a change permanent is to proceed in two
steps, defining the modification as a user rule and then relabeling:</p>
<ul>
<li>
<p>The new context can be passed explicitly:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>semanage fcontext -a -t httpd_sys_content_t <span class="s2">"/srv/www(/.*)?"</span>
restorecon -R /srv/www
</pre></div>
</td></tr></table></div>
</li>
<li>
<p>Or a substitution can be declared:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>semanage fcontext -e /var/www /srv/www
restorecon -R /srv/www
</pre></div>
</td></tr></table></div>
</li>
</ul>
<p><code>semanage</code> substitutions do not work the same way as <code>chcon</code> references:</p>
<ul>
<li><code>chcon</code> applies the same context to all files given as parameter.</li>
<li><code>semanage</code> saves the substitution declaration in the <em>file_contexts.subs</em>
file; for instance, <em>/srv/www/icons</em> will receive the same context as
<em>/var/www/icons</em>, which may differ from the context of other directories.</li>
</ul>
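<p>The difference can be sketched as follows (a toy model of the <em>file_contexts.subs</em> lookup, using a hypothetical <code>apply_subs()</code> helper):</p>

```python
def apply_subs(path, subs):
    """Rewrite a path according to file_contexts.subs-style entries
    before the context lookup: each entry maps an observed path
    prefix to the equivalent prefix used for the lookup."""
    for observed, equivalent in subs:
        if path == observed or path.startswith(observed + "/"):
            return equivalent + path[len(observed):]
    return path

# "/srv/www is labeled like /var/www", as declared by:
#   semanage fcontext -a -e /var/www /srv/www
subs = [("/srv/www", "/var/www")]

print(apply_subs("/srv/www/icons/logo.png", subs))  # /var/www/icons/logo.png
print(apply_subs("/etc/passwd", subs))              # /etc/passwd
```

<p>Each file below <em>/srv/www</em> is thus looked up individually against the rules for the equivalent <em>/var/www</em> path, instead of receiving one uniform context as <code>chcon</code> would apply.</p>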
<p>Usually, only the <em>type</em> field of the context is modified.
The <em>user</em> part is only used when <abbr title="User-Based Access Control"><span class="caps">UBAC</span></abbr> is enabled in the SELinux policy,
and the <em>role</em> field remains <code>object_r</code> for all files, roles being currently
only applied to running processes (users and daemons).</p>
<h4 id="new-files-context"><a class="toclink" href="#new-files-context">New files context</a></h4>
<p>SELinux does <em>not</em> rely on the <em>file_contexts.</em>* files for newly created files.
Instead, several situations are possible:</p>
<ul>
<li>
<p>By default, a new file’s context is inherited from its parent directory.</p>
</li>
<li>
<p>The SELinux policy may define transition rules like “if a
process running with context A creates a file in a directory with context B,
then the file will be labeled with context C”.
For instance, if a web server with context <em>httpd_t</em> creates a temporary
file in a directory bearing the context <em>tmp_t</em>, the created file will be
assigned the context <em>httpd_tmp_t</em>.
These transition rules can also take the file name into account.</p>
<p><code>sesearch</code> is used to search a SELinux policy; its <code>-T</code> option allows
searching for type transitions:</p>
<div class="codehilite"><pre>sesearch -T -s httpd_t -t tmp_t
</pre></div>
<p>Of course, the process’s context must also have enough rights on the
resulting file context (<em>create</em>, <em>read</em>, <em>write</em>, <em>append</em>, etc.):</p>
<div class="codehilite"><pre>sesearch -A -s httpd_t -t httpd_tmp_t
</pre></div>
</li>
<li>
<p>An application interacting directly with the SELinux <span class="caps">API</span> can, given the right
privileges, set the context of created files the same way a user would
use <code>chcon</code> or <code>chcat</code>.</p>
</li>
<li>
<p>Finally, the <code>restorecond</code> daemon can monitor a set of paths defined in the
file <em>/etc/selinux/restorecond.conf</em> and apply the context rules defined in
<em>file_context.</em>*.
This daemon predates the type transition rules and is less used nowadays.</p>
</li>
</ul>
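<p>The first two situations above can be summed up in a short sketch (a simplified model: real policies match on the full source context and object class, not just the two types):</p>

```python
def new_file_type(proc_type, dir_type, transitions):
    """Type of a newly created file: a matching type_transition
    rule wins, otherwise the parent directory type is inherited."""
    return transitions.get((proc_type, dir_type), dir_type)

# Hypothetical excerpt of a policy's type_transition rules:
transitions = {("httpd_t", "tmp_t"): "httpd_tmp_t"}

print(new_file_type("httpd_t", "tmp_t", transitions))      # httpd_tmp_t
print(new_file_type("httpd_t", "var_log_t", transitions))  # var_log_t
```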
<h3 id="processes-and-threads-context"><a class="toclink" href="#processes-and-threads-context">Processes and threads context</a></h3>
<p>Process and thread contexts are determined in a way similar to new file
contexts.
A child process’s context is set while processing the call to an <code>exec*</code>
family function (<code>execve()</code>, <code>execl()</code>, etc.), usually following a call to
<code>fork()</code>:</p>
<ul>
<li>
<p>By default, the child process inherits the parent process context.</p>
</li>
<li>
<p>A transition rule can force the transition to a different domain.
For instance, a process with the domain <em>system_u:system_r:initrc_t</em>
executing a file with the type <em>httpd_exec_t</em> results in a child process
with its domain set to <em>system_u:system_r:httpd_t</em>.</p>
<p><code>sesearch</code> can be used to investigate such rules:</p>
<div class="codehilite"><pre>sesearch -T -s initrc_t -t httpd_exec_t
</pre></div>
<p>Several prerequisites are mandatory for such transition to work correctly:</p>
<ul>
<li>
<p>The parent process must have the right to execute the target file:</p>
<div class="codehilite"><pre>sesearch -A -s initrc_t -t httpd_exec_t -c file -p execute
</pre></div>
</li>
<li>
<p>The transition from the type <em>initrc_t</em> to the type <em>httpd_t</em> must
be authorized:</p>
<div class="codehilite"><pre>sesearch -A -s initrc_t -t httpd_t -c process -p transition
</pre></div>
</li>
<li>
<p>The target type <em>httpd_t</em> must be authorized for the process role
<em>system_r</em>:</p>
<div class="codehilite"><pre>seinfo -rsystem_r -x <span class="p">|</span> grep httpd_t
</pre></div>
</li>
<li>
<p>The type <em>httpd_exec_t</em> must be identified as an entry point for the
type <em>httpd_t</em>:</p>
<div class="codehilite"><pre>sesearch -A -s httpd_t -t httpd_exec_t -c file -p entrypoint
</pre></div>
</li>
</ul>
</li>
<li>
<p>Finally, an application with the <em>setexec</em> privilege can use the
<code>setexeccon()</code> function to set the context to use upon the next call to
an <code>exec*</code> family function.</p>
</li>
</ul>
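<p>The four prerequisite checks listed above can be modeled in a few lines (an illustrative sketch only; the <code>policy</code> structure here is hypothetical and much simpler than a real binary policy):</p>

```python
def can_transition(parent, target, exec_type, role, policy):
    """Check the four prerequisites for a domain transition."""
    allows = policy["allow"]  # set of (source, target, class, permission)
    checks = [
        (parent, exec_type, "file", "execute") in allows,     # may execute file
        (parent, target, "process", "transition") in allows,  # transition allowed
        target in policy["role_types"].get(role, set()),      # role allows type
        (target, exec_type, "file", "entrypoint") in allows,  # valid entry point
    ]
    return all(checks)

policy = {
    "allow": {
        ("initrc_t", "httpd_exec_t", "file", "execute"),
        ("initrc_t", "httpd_t", "process", "transition"),
        ("httpd_t", "httpd_exec_t", "file", "entrypoint"),
    },
    "role_types": {"system_r": {"initrc_t", "httpd_t"}},
}

print(can_transition("initrc_t", "httpd_t", "httpd_exec_t", "system_r", policy))  # True
```

<p>Should any single check fail, the transition is denied as a whole.</p>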
<p>A thread can also have its own context set the same way, but its privileges
must be a subset of the parent thread’s privileges: the child cannot hold any
privilege that the parent thread doesn’t already have.
The Apache module <em>mod_selinux</em>, for instance, is implemented this way.</p>
<p>If a label is missing or invalid, the default label <em>unlabeled_t</em> is used.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>A common cause for invalid domains is the update of the SELinux policy to
a new version which doesn’t know the label of some currently running processes.</p>
</div>
<h3 id="network-communication-context"><a class="toclink" href="#network-communication-context">Network communication context</a></h3>
<h4 id="network-ports-labels"><a class="toclink" href="#network-ports-labels">Network ports labels</a></h4>
<p>Similarly to files, network ports (sockets) are associated with a label used
to determine the authorized actions depending on the process label.</p>
<p>To view current ports labels: <code>semanage port -l</code></p>
<p>To modify ports labels:</p>
<ul>
<li>Add a label: <code>semanage port -a -t http_port_t -p tcp 81</code></li>
<li>Modify a label: <code>semanage port -m -t http_port_t -p tcp 81</code></li>
<li>Delete a label: <code>semanage port -d -t http_port_t -p tcp 81</code></li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>It is possible to provide a port range by separating the first and last
port using a dash (for instance <code>81-89</code>).</p>
<p>The range must be expressed the same way in both the creation and deletion
commands, otherwise the deletion command <a href="https://serverfault.com/q/448859/228297" rel="external" title="CentOS - semanage - Delete range of ports (StackExchange)">will not work</a>.</p>
</div>
<h4 id="network-packets-labels-secmark"><a class="toclink" href="#network-packets-labels-secmark">Network packets labels (<span class="caps">SECMARK</span>)</a></h4>
<p>Network packets labeling is handled by Netfilter.
In addition to the classical actions <em><span class="caps">ACCEPT</span></em>, <em><span class="caps">DROP</span></em>, <em><span class="caps">RETURN</span></em> and <em><span class="caps">QUEUE</span></em>,
<code>iptables</code> or <code>ip6tables</code> may use:</p>
<ul>
<li><em><span class="caps">SECMARK</span></em>: to label an individual packet.</li>
<li><em><span class="caps">CONNSECMARK</span></em>: to label a whole connection.</li>
</ul>
<p>Two operational modes are available:</p>
<ul>
<li>
<p>Labeling of incoming/outgoing traffic: SELinux extends Netfilter’s filtering
features, but the label remains local to the host.</p>
</li>
<li>
<p>End-to-end traffic labeling: provided both the client and the server
hosts share the same naming convention and an appropriate protocol
is used, it becomes possible to apply labels to the network communication
from end to end.</p>
<p>Protocols used for end-to-end network labeling are:</p>
<ul>
<li>
<p>IPsec: The most modern and common solution; it carries the whole
context information along with the network data.</p>
</li>
<li>
<p>Netlabel/<abbr title="Commercial Internet Protocol Security Option"><span class="caps">CIPSO</span></abbr>: Solution providing backward compatibility with legacy
hardened operating systems.
It carries only the sensitivity information.</p>
</li>
</ul>
</li>
</ul>
<h3 id="querying-the-current-policy"><a class="toclink" href="#querying-the-current-policy">Querying the current policy</a></h3>
<h4 id="rules-syntax"><a class="toclink" href="#rules-syntax">Rules syntax</a></h4>
<p>SELinux rules are composed of <abbr title="Type Enforcement"><span class="caps">TE</span></abbr> rules and <abbr title="Access Vector"><span class="caps">AV</span></abbr> rules.</p>
<p>The syntax is as follows:</p>
<div class="codehilite"><pre><av_kind> <source_type(s)> <target_type(s)> : <class(es)> <permission(s)>
</pre></div>
<p>Where:</p>
<ul>
<li>
<p><em>av_kind</em>: Rule type, for instance:</p>
<ul>
<li>
<p><code>allow</code>: Defines an authorized operation, matches <code>sesearch</code> option
<code>-A</code>.</p>
</li>
<li>
<p><code>type_transition</code>: Defines the transition between domains, matches
<code>sesearch</code> option <code>-T</code>.</p>
</li>
</ul>
</li>
<li>
<p><em>source_type(s)</em>: One of the following:</p>
<ul>
<li>
<p>A <em>domain</em>, also called a <em>“type”</em>: this is why SELinux is called a
<em>“Type Enforcement based <abbr title="Mandatory Access Control"><span class="caps">MAC</span></abbr>”</em>, since the rules rely on type information
to control access.</p>
<p>To list available types: <code>seinfo -t</code></p>
</li>
<li>
<p>An <em>attribute</em>: a group name allowing a single rule to target a
potentially large number of domains.</p>
<p>To list available attributes: <code>seinfo -a</code>; to list the domains
contained in a given attribute: <code>seinfo -a<attribute_name> -x</code>.</p>
</li>
<li>
<p>At a lower level, it can also be an <em>alias</em>, which is just a synonym for an
existing type.</p>
</li>
</ul>
</li>
<li>
<p><em>target_type</em>: Any valid type name.</p>
</li>
<li>
<p><em>class(es)</em>: Defines the type of resource (file, socket, etc.) covered by
the rule.</p>
<p>To list available classes: <code>seinfo -c</code></p>
</li>
<li>
<p><em>permission(s)</em>: The permissions granted; the object class dictates the list
of possible permissions.</p>
<p>To list available permissions: <code>seinfo -c<class_name> -x</code></p>
</li>
</ul>
<p><code>sesearch</code> provides a variable degree of granularity in the search results:
providing more arguments yields narrower results:</p>
<div class="codehilite"><pre>sesearch (-A|-T|...) [-s SOURCE [-t TARGET [-c CLASS [-p PERMISSION]]]]
</pre></div>
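<p>As an illustration, here is a toy parser for the rule syntax shown above (it only handles a single source, target and class with an explicit permission list, unlike real policy statements):</p>

```python
import re

def parse_av_rule(rule):
    """Parse a simple AV rule such as:
    allow httpd_t tmp_t : file { read write };"""
    m = re.fullmatch(
        r"(\w+)\s+(\w+)\s+(\w+)\s*:\s*(\w+)\s*\{\s*([\w\s]+?)\s*\}\s*;?",
        rule.strip())
    if m is None:
        raise ValueError("unsupported rule syntax")
    kind, source, target, klass, perms = m.groups()
    return {"kind": kind, "source": source, "target": target,
            "class": klass, "permissions": perms.split()}

rule = parse_av_rule("allow httpd_t tmp_t : file { read write };")
print(rule["permissions"])  # ['read', 'write']
```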
<h4 id="constraints"><a class="toclink" href="#constraints">Constraints</a></h4>
<p>In addition to the Type Enforcement rules, which rely on object types, SELinux
also provides constraints.
These are conditional expressions allowing the context’s user and role to be
taken into account.</p>
<p>For instance, <abbr title="User-Based Access Control"><span class="caps">UBAC</span></abbr> is implemented this way:</p>
<div class="codehilite"><pre>u1 == u2
or u1 == system_u
or u2 == system_u
or t1 != ubac_constrained_type
or t2 != ubac_constrained_type
</pre></div>
<p>Technically, the command <code>seinfo --constrain</code> should list the enabled
constraints, but its output is generated in postfix notation with attributes
expanded, and is therefore hardly usable as-is.</p>
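<p>The UBAC expression above reads naturally as a boolean function. In this sketch, attribute membership is reduced to a plain string comparison for illustration (in a real policy, <em>ubac_constrained_type</em> is an attribute gathering many types):</p>

```python
def ubac_allows(u1, t1, u2, t2):
    """Evaluate the UBAC constraint expression shown above:
    access is allowed when any of its clauses holds."""
    return (u1 == u2
            or u1 == "system_u"
            or u2 == "system_u"
            or t1 != "ubac_constrained_type"
            or t2 != "ubac_constrained_type")

# Two different unprivileged users on UBAC-constrained types: denied.
print(ubac_allows("alice_u", "ubac_constrained_type",
                  "bob_u", "ubac_constrained_type"))    # False
# Same user: allowed.
print(ubac_allows("alice_u", "ubac_constrained_type",
                  "alice_u", "ubac_constrained_type"))  # True
```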
<h3 id="modify-the-current-policy-behavior"><a class="toclink" href="#modify-the-current-policy-behavior">Modify the current policy behavior</a></h3>
<h4 id="boolean-settings"><a class="toclink" href="#boolean-settings">Boolean settings</a></h4>
<p>To list available boolean settings with their current value and a description:
<code>semanage boolean -l</code></p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>As the list of all available boolean settings may be long, use <code>grep</code> to
limit to the relevant entries.</p>
</div>
<p>To quickly know the value of a boolean given its name:
<code>getsebool <boolean_name></code></p>
<p>To find a boolean name starting from an <em>allow</em> rule:
<code>sesearch -AC -s httpd_t -t user_home_t -p read</code></p>
<p>To find the rules affected by a given boolean:
<code>sesearch -ATC -b <boolean_name> | grep -e ^E -e ^D</code>.
This command output follows the convention below:</p>
<ul>
<li>First letter:<ul>
<li><em>E</em>: The rule is currently enabled.</li>
<li><em>D</em>: The rule is currently disabled.</li>
</ul>
</li>
<li>Second letter:<ul>
<li><em>T</em>: The rule is enabled when the boolean setting is enabled (<em>true</em>).</li>
<li><em>F</em>: The rule is enabled when the boolean setting is disabled (<em>false</em>).</li>
</ul>
</li>
</ul>
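<p>Decoding the two-letter prefix of such conditional rules can be sketched as follows (<code>decode_flags()</code> is a hypothetical helper built on the convention described above):</p>

```python
def decode_flags(line):
    """Decode the two-letter prefix of a conditional rule as printed
    by `sesearch -C`, e.g. 'ET allow httpd_t user_home_t ...'."""
    enabled = {"E": True, "D": False}[line[0]]
    on_when_true = {"T": True, "F": False}[line[1]]
    return {"enabled": enabled, "enabled_when_boolean_true": on_when_true}

print(decode_flags("ET allow httpd_t user_home_t : file { read };"))
# {'enabled': True, 'enabled_when_boolean_true': True}
```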
<h5 id="temporary-modification"><a class="toclink" href="#temporary-modification">Temporary modification</a></h5>
<p>To temporarily modify a boolean value (the change will be reverted on next
system restart):</p>
<ul>
<li><code>setsebool <bool_name>=(0|false|off|1|true|on) ...</code>: Assign an explicit
value to the boolean setting (<code>0</code>, <code>false</code> and <code>off</code> are equivalent,
so are <code>1</code>, <code>true</code> and <code>on</code>).</li>
<li><code>togglesebool <bool_name> ...</code>: Reverse the value of the boolean settings
passed as parameter.
This is an older command dating back to the time when <code>setsebool</code> was
unable to set multiple settings in an atomic way.</li>
</ul>
<p>Unlike persistent modifications, temporary modification of a boolean value is a
nearly instantaneous operation.</p>
<h5 id="persistent-modification"><a class="toclink" href="#persistent-modification">Persistent modification</a></h5>
<p>To modify a boolean setting so it remains persistent across system reboots:</p>
<ul>
<li><code>setsebool -P <boolean_name>=(on|off) ...</code></li>
<li><code>semanage boolean -m (-0|--off|-1|--on) <bool_name></code>; <code>semanage</code> can also take
its input from a file listing the settings to modify, using the <code>-F</code> option.</li>
</ul>
<p>Persistent changes imply rebuilding the whole SELinux policy; they may
therefore take some time depending on the size of the current policy.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>On Gentoo systems, the SELinux policies only contain rules covering
installed packages, while on Red Hat systems the SELinux policies are
monolithic and always contain all created rules.</p>
<p>As a result, permanently modifying a setting takes a noticeably longer
time on Red Hat systems.
Be patient…</p>
</div>
<h4 id="rules-troubleshooting-and-modification"><a class="toclink" href="#rules-troubleshooting-and-modification">Rules troubleshooting and modification</a></h4>
<p>Use <code>audit2allow</code> to troubleshoot and generate rules and modules:</p>
<ul>
<li>
<p>To analyze the cause of a denial (equivalent to <code>audit2why</code> on
some distributions):</p>
<div class="codehilite"><pre>grep avc /var/log/messages <span class="p">|</span> audit2allow -w
</pre></div>
</li>
<li>
<p>To generate the <em>allow</em> rules matching the denials.
<code>audit2allow</code> indicates when a boolean setting would reach the same goal, or
when the denial has been raised by a constraint instead of a <abbr title="Type Enforcement"><span class="caps">TE</span></abbr> rule.
The <code>-R</code> option allows the use of macros, which makes the result more
readable but also more prone to unwanted side effects.</p>
<div class="codehilite"><pre>grep avc /var/log/messages <span class="p">|</span> audit2allow <span class="o">[</span>-R<span class="o">]</span>
</pre></div>
</li>
<li>
<p>To generate a directly loadable module, the command below would generate
a policy package file named <em>example.pp</em>:</p>
<div class="codehilite"><pre>grep avc /var/log/messages | audit2allow -M example
</pre></div>
</li>
</ul>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>While <code>audit2allow</code> remains a good troubleshooting utility, its automatic
rule generation feature must be used with great care so as not to weaken the
system’s security.</p>
</div>
<p>Use <code>semodule</code> to manage SELinux modules:</p>
<ul>
<li>
<p>To list loaded modules: <code>semodule -l</code></p>
</li>
<li>
<p>To load an additional module (this is a persistent change: the module will
remain loaded upon system restarts): <code>semodule -i /path/to/example.pp</code></p>
</li>
<li>
<p>To enable/disable a module: <code>semodule -(e/d) <module_name></code></p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Disabling a module while keeping it loaded is useful to keep its types
definition without enabling any of its rules.</p>
</div>
</li>
</ul>
<p><code>selocal</code>, available only on Gentoo, helps manage locally defined
SELinux rules.</p>
<p>To manually build a SELinux policy package:</p>
<ol>
<li>
<p>Start from a plain text source code (<em>*.te</em> file).</p>
</li>
<li>
<p>Build an intermediate module (<em>*.mod</em> file):</p>
<div class="codehilite"><pre>checkmodule -M -m -o mymodule.mod mymodule.te
</pre></div>
</li>
<li>
<p>Generate the policy package suitable to be loaded by <code>semodule</code> (<em>*.pp</em> file):</p>
<div class="codehilite"><pre>semodule_package -o mymodule.pp -m mymodule.mod
</pre></div>
</li>
</ol>SELinux System Administration & SELinux Cookbook (Sven Vermeulen)2017-09-06T00:00:00+02:002017-09-06T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2017-09-06:/posts/2017/09/06/selinux-system-administration-selinux-cookbook-sven-vermeulen/<p>Sven Vermeulen, the author of these two books, is deeply involved in the Gentoo community.</p>
<p>Quoting his biography from the book introduction:</p>
<blockquote>
<p>In 2003, he joined the ranks of the Gentoo Linux project as a documentation
developer and has since worked in several roles, including Gentoo Foundation
trustee, council member, project lead for various documentation initiatives,
and (his current role) project lead for Gentoo Hardened SELinux integration
and the system integrity project.</p>
</blockquote>
<p>He is both technically and pedagogically skilled, with deep SELinux expertise.
In these books, he uses his talent to shed light on a domain often
perceived as obscure and daunting, explaining in a clear and effective
way how and why things are the way they are, so that everything finally falls
into place.</p>
<p>Don’t let the affiliation with the Gentoo project make you think that these books
are only about Gentoo.
These books …</p><p>Sven Vermeulen, the author of these two books, is deeply involved in the Gentoo community.</p>
<p>Quoting his biography from the book introduction:</p>
<blockquote>
<p>In 2003, he joined the ranks of the Gentoo Linux project as a documentation
developer and has since worked in several roles, including Gentoo Foundation
trustee, council member, project lead for various documentation initiatives,
and (his current role) project lead for Gentoo Hardened SELinux integration
and the system integrity project.</p>
</blockquote>
<p>He is both technically and pedagogically skilled, with deep SELinux expertise.
In these books, he uses his talent to shed light on a domain often
perceived as obscure and daunting, explaining in a clear and effective
way how and why things are the way they are, so that everything finally falls
into place.</p>
<p>Don’t let the affiliation with the Gentoo project make you think that these books
are only about Gentoo.
These books take into account the various implementations of SELinux;
in particular the first volume, <em>SELinux System Administration</em>, takes time to
compare the notable differences between the Red Hat and
Gentoo implementations and the reasons behind them.</p>
<h3 id="selinux-system-administration"><a class="toclink" href="#selinux-system-administration">SELinux System Administration</a></h3>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Having bought this book some time ago, I have only read the first edition
which counted 120 pages.
While writing this article I see that Sven published a second edition which
is now 285 pages (!).</p>
<p>Checking the new table of contents, this massive update adds whole new
chapters on Docker and virtualization, on D-Bus and systemd, and also
seems to borrow some content from <em>SELinux Cookbook</em>.</p>
</div>
<p>This is the first book of the set; it goes from introducing SELinux to the
reader to enabling him to administer SELinux features.</p>
<p>From the start of the book, the reader is taken very gently into this domain
which, due to the lack of proper documentation, is often reputed to be highly
complex and daunting.</p>
<p>Sven however manages to explain things clearly, progressing step by step.
At the beginning, the reader is not expected to know anything about SELinux and
is given a general overview of how and why SELinux works.
At the end of the journey, the reader is capable of using SELinux in a sensible
manner to improve day-to-day security and troubleshoot potential issues.</p>
<p>While the 120 pages of the edition I read may seem short, Sven’s writing is
really to the point and dense while still remaining clear and easy to follow,
with countless practical examples keeping the link with real-world situations.</p>
<p>I’m always impressed by people managing to keep
things short and concise while still remaining clear and complete.
Sven manages to do it, thanks to his experience in documentation writing and
probably thanks to the help of the numerous other people who participated in
this project: this book is indeed not the work of a single person, as a
dozen other names are mentioned in the introduction.
There is no secret to achieving high-quality books such as this one.</p>
<p>So, as a conclusion, if you would like to get started with SELinux or complete your
knowledge of this technology, you can blindly go for this book.
There are older books on the subject, but SELinux is a fast-moving
target and most if not all of their content is most likely outdated by now.
I don’t know of any other recent book on this topic and don’t see the need for one
yet (especially since the latest update keeps its content fresh), since
this one has all you would need to start with and administer a SELinux system.</p>
<p class="buy button"><a href="https://www.amazon.com/SELinux-System-Administration-Sven-Vermeulen/dp/1787126951?tag=electronicfro-20" rel="external" title="Buy 'SELinux System Administration' (Amazon)">Buy on Amazon</a></p>
<h3 id="selinux-cookbook"><a class="toclink" href="#selinux-cookbook">SELinux Cookbook</a></h3>
<p><span class="lb-small floatright"><a href="#cover_cookbook.jpg" id="cover_cookbook.jpg-thumb" title="Click to enlarge"><img alt="SELinux Cookbok cover" src="https://www.whitewinterwolf.com/posts/2017/09/06/selinux-system-administration-selinux-cookbook-sven-vermeulen/cover_cookbook.jpg"/></a></span>
The goal of the first book was to take our hand from the discovery of SELinux
to its administration and troubleshooting.</p>
<p>SELinux has always been known to rely on complex sets of rules.
One of the things that the first book explains is that writing and maintaining
those core rules is the duty of the upstream distribution and SELinux project teams.</p>
<p>Expecting a lone administrator to write a complete set of SELinux rules from
scratch is not only a complex and daunting task, it is insane.
Under normal circumstances, SELinux administration goes through two main
tasks, neither of which requires touching the rules at all:</p>
<ul>
<li>Ensuring that all objects (such as files, network ports, etc.) are
correctly labeled.</li>
<li>Setting a set of boolean values where SELinux behavior must be changed from
the default.</li>
</ul>
<p>However, there may come a time when those basic tasks are not enough to properly
match the specific needs of a complex or unusual environment.
In such cases you need to go a step beyond SELinux administration and
enter the SELinux development realm.</p>
<p>This is what this book is about.
It starts by detailing how to build a proper SELinux development environment,
including links to some useful scripts developed by the author himself to
help with common tasks.</p>
<p>Then, for a wide range of domains, the author analyzes several cases, describing
a concrete initial situation, the exact steps to solve it, and
the background explanation of the solving process, allowing you to better adapt
it to your own needs and requirements.</p>
<p>This is a cookbook and it addresses advanced SELinux topics, so
reading it from cover to cover may be less of the “comfortable trip” that Sven
offered us in the first volume.
However, the content of the book remains organized in a logical manner, and
personally I really enjoyed reading it, as Sven remains as efficient as ever at
explaining complex things in a simple yet accurate way.
I think it is even preferable to read the full book at least once to get a
better idea of the various features you can leverage in SELinux, before using
it as an actual cookbook.</p>
<p>People not already familiar with SELinux should not start directly with this book, but
should use Sven’s <em>SELinux System Administration</em> as an introduction.
Having read and practiced its content a bit, however, is enough to fully follow
the <em>SELinux Cookbook</em>.</p>
<p>The <em>SELinux Cookbook</em> is suitable both for people having to use SELinux in an
advanced way, and people simply wanting to know more about SELinux without
having any direct need to apply its solutions.</p>
<p class="buy button"><a href="https://www.amazon.com/SELinux-Cookbook-Sven-Vermeulen/dp/1783989661?tag=electronicfro-20" rel="external" title="Buy 'SELinux Cookbook' (Amazon)">Buy on Amazon</a></p>Isolate your services using jails and containers2017-08-10T00:00:00+02:002017-08-10T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2017-08-10:/posts/2017/08/10/isolate-your-services-using-jails-and-containers/<p>Containers and jails allow you to make your system more secure, more reliable,
more flexible and, at the end of the day, easier to manage.
Once you get used to it, it becomes difficult to imagine setting up a server
without such features.</p>
<p>But what are they exactly?</p>
<h3>Containers and jails</h3>
<p>Containers and jails designate different implementations of
operating-system-level virtualization.
Like a lot of low-level security features we encounter in today’s world, this
functionality can be traced back to the old mainframes, where reliability and
parallelism are at the core of the system, and which allowed partitioning
a host system into smaller isolated systems.</p>
<p>This feature then went through commercial Unixes to finally reach open-source
operating systems.
The first open-source <span class="caps">OS</span> to really implement this feature was FreeBSD, which
has offered its <em>jail</em> functionality since 2000 (FreeBSD 4.0).
In the mean time there were several more-or-less successful attempts …</p><p>Containers and jails allow you to make your system more secure, more reliable,
more flexible and, at the end of the day, easier to manage.
Once you get used to it, it becomes difficult to imagine setting up a server
without such features.</p>
<p>But what are they exactly?</p>
<h3 id="containers-and-jails"><a class="toclink" href="#containers-and-jails">Containers and jails</a></h3>
<p>Containers and jails designate different implementations of
operating-system-level virtualization.
Like a lot of low-level security features we encounter in today’s world, this
functionality can be traced back to the old mainframes, where reliability and
parallelism are at the core of the system, and which allowed partitioning
a host system into smaller isolated systems.</p>
<p>This feature then went through commercial Unixes to finally reach open-source
operating systems.
The first open-source <span class="caps">OS</span> to really implement this feature was FreeBSD, which
has offered its <em>jail</em> functionality since 2000 (FreeBSD 4.0).
In the meantime there were several more-or-less successful attempts to
implement an equivalent functionality in Linux, but Linux users had to wait
until 2008 (2014 for the first stable version) for a standardized and upstream
supported solution to be available as <em>Linux Containers</em> (<span class="caps">LXC</span>).</p>
<p>Linux containers and FreeBSD jails (I will use the term <em>jail</em> to designate
both implementations in the rest of this article) are very flexible and allow
you to isolate anything from a full-fledged, multi-user environment down to a
single process.</p>
<p>But, as far as general server hardening is concerned, where they really shine
(<span class="caps">IMO</span>) is in their ability to partition a server on a functional basis.</p>
<h3 id="practical-example"><a class="toclink" href="#practical-example">Practical example</a></h3>
<p>Let’s take the example below.
It looks like a classical web server platform, with the notable difference that
each component, while isolated from the others, is consolidated inside the
same single host.</p>
<p><span class="lb-small"><a href="#jails.png" id="jails.png-thumb" title="Click to enlarge"><img alt="Webserver with jail segregation" src="https://www.whitewinterwolf.com/posts/2017/08/10/isolate-your-services-using-jails-and-containers/jails.png"/></a></span></p>
<p>All incoming and outgoing requests are handled by the <em><span class="caps">WAF</span></em> and the <em>Proxy</em>
jails.
The idea here is that anything which enters or leaves the host must go
through centralized checkpoints, both to enforce a policy and to detect suspicious activity.</p>
<p>Actual services run deeper in the server.
They have no direct access to the external network, and cannot be directly
reached from the outside world either.</p>
<p>This picture is very simplified, as normally you will have several other jails
gravitating around these main ones (I strongly recommend at least one syslog
jail to store the logs out of the jails’ reach; while technically doable from within
the <em>Proxy</em> jail, functional separation should encourage you to dedicate another
jail to <span class="caps">DNS</span> resolution, etc.).</p>
<p>Keep in mind two advantages of jails compared to other virtualization solutions:</p>
<ul>
<li>
<p>There is little to no overhead.
Running a service directly on the host or running it in a jail consumes the
same amount of resources.
Similarly, running two services in the same jail or in two different jails
doesn’t change anything resource-wise.</p>
</li>
<li>
<p>A jail’s file system is available for inspection from the host, with no
possibility for any process in the jail to fake or hide anything, even
with root privileges.</p>
<p>This allows very effective Tripwire-like <span class="caps">HIDS</span> checks, with the
integrity database stored out of the jails’ reach while still having direct
access to the jails’ file systems.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Manipulating jail files outside of the jail context is always a risky
operation.
For instance, due to <a href="https://packetstormsecurity.com/files/132281/OSSEC-2.8.1-Local-Root-Escalation.html" rel="external" title="CVE-2015-3222: Root escalation via syscheck">a vulnerability</a> in a new feature of
<span class="caps">OSSEC</span>, an attacker able to give a specially crafted name to a file
could make <span class="caps">OSSEC</span> execute arbitrary shell commands as the
root user.</p>
<p>Depending on the enabled features, such a <span class="caps">HIDS</span> tool doesn’t necessarily
need to be executed in the host context.
As far as file integrity checking is concerned, it is possible to
run the inspection tool in one (or several) dedicated jail(s) where
the other jails’ directories are accessed through read-only null mounts.</p>
</div>
</li>
</ul>
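<p>As a sketch of the read-only inspection setup described above on FreeBSD (all paths are hypothetical examples, not part of the original design), a nullfs entry in the <span class="caps">HIDS</span> jail’s fstab could look like this:</p>

```
# Hypothetical fstab entry for a dedicated HIDS jail (e.g. /etc/fstab.hids):
# expose the web server jail's file system read-only through nullfs, so the
# integrity checker can read every file but never alter or execute anything
# in the host context.
/jails/webserver1  /jails/hids/mnt/webserver1  nullfs  ro  0  0
```

<p>The <code>ro</code> flag is what keeps a compromised inspection jail from tampering with the monitored jail’s files.</p>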
<h4 id="the-waf-jail"><a class="toclink" href="#the-waf-jail">The <em><span class="caps">WAF</span></em> jail</a></h4>
<p>The <em><span class="caps">WAF</span></em> (Web Application Firewall) here usually also does <span class="caps">SSL</span> termination
(as you always use <span class="caps">HTTPS</span> whenever possible, don’t you?),
allowing cryptographic material to be stored out of reach of the web server
should it be compromised.</p>
<p>Depending on your exact setup, the <span class="caps">WAF</span> <em>may</em> also need access
to the Internet to do <span class="caps">OCSP</span> stapling.
In this case, don’t grant it direct Internet access, but instead force it to go
through the <em>Proxy</em> jail.
Always keep things under tight control.</p>
<p>As the main entry point, the <span class="caps">WAF</span> also routes validated incoming requests to
the correct web server.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>It may be tempting to use the <span class="caps">WAF</span> to cache answers.
Even micro-caching (caching an answer for a few seconds) can
greatly reduce web server load without any noticeable difference in
content update reactivity.</p>
<p>However, it is very easy to screw things up at this step, and it regularly
happens that private or sensitive information (like valid session
identifiers) gets cached and then leaks, notably in search engine results,
making it available to attackers.</p>
<p>Double-check your <span class="caps">HTTP</span> headers and cache behavior!</p>
</div>
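<p>As an illustration only (the article does not prescribe a particular caching software), a micro-caching setup in nginx might be sketched as below; the addresses, cache zone name and one-second lifetime are assumptions:</p>

```nginx
# Hypothetical nginx micro-caching sketch: cache successful answers for one
# second, but never store or serve cached copies for requests that carry
# cookies or authentication headers.
proxy_cache_path /var/cache/nginx keys_zone=micro:10m;

server {
    listen 127.0.0.2:443 ssl;   # SSL termination directives omitted for brevity

    location / {
        proxy_pass http://127.0.0.4:80;                       # inner web server jail
        proxy_cache micro;
        proxy_cache_valid 200 1s;                             # micro-caching: 1 second
        proxy_no_cache $http_cookie $http_authorization;      # don't store private answers
        proxy_cache_bypass $http_cookie $http_authorization;  # don't serve them from cache
    }
}
```

<p>Whatever the software, double-checking the resulting <code>Cache-Control</code> and <code>Set-Cookie</code> headers remains necessary.</p>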
<h4 id="the-web-server-jails"><a class="toclink" href="#the-web-server-jails">The <em>Web server</em> jails</a></h4>
<p>This example shows two web servers: one (<em>Web server 2</em>) needs access to some
external resources while the other (<em>Web server 1</em>) doesn’t.</p>
<p>Web servers have no direct access to the network and are not directly
reachable from the network.
This means that if an attacker manages to execute a payload on the web
server (be it through <span class="caps">PHP</span> code injection or whatever):</p>
<ul>
<li>
<p>Opening a listening port will have no effect, as the port will never be
reachable from the outside.</p>
</li>
<li>
<p>Callback sessions will not work either, as outgoing connection requests
initiated by the web server are not allowed.
As such an event should not happen under normal circumstances, a <span class="caps">HIDS</span> can
even be configured to raise a specific alert on this occasion.</p>
</li>
<li>
<p>Many automated bot payloads rely on a <code>ping</code> or <code>wget</code>
equivalent to notify the C2 server of a vulnerable host.
Here such commands will fail, keeping the C2 server from being
notified even in case of an effectively vulnerable service (this can save
your butt during the short lapse between a vulnerability disclosure and
the patch application).</p>
</li>
</ul>
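<p>The “no direct network access” property above is enforced by the host firewall. On FreeBSD with pf, a default-deny sketch could look like the following (all addresses are examples, not taken from the article):</p>

```
# Hypothetical pf.conf excerpt: deny everything between jails by default,
# then only allow the flows the design requires.
waf  = "127.0.0.3"
web1 = "127.0.0.4"
db   = "127.0.0.6"

block drop log on lo0 all
pass on lo0 proto tcp from $waf  to $web1 port 80    # WAF -> web server
pass on lo0 proto tcp from $web1 to $db   port 3306  # web server -> database
```

<p>With such a ruleset, a listening port opened by a payload in a web server jail is simply unreachable, and its outgoing callback attempts are dropped and logged.</p>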
<h4 id="the-proxy-jail"><a class="toclink" href="#the-proxy-jail">The <em>Proxy</em> jail</a></h4>
<p>The <em>Proxy</em> enforces a whitelist-based policy.
Unlike proxies commonly found on networks, this one should not merely block
a few blacklisted addresses and operations, but instead ensure on a
jail-per-jail basis that each jail only attempts to access the resources
it is expected to access (and optionally that the answers also match expected criteria).</p>
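<p>With the Squid proxy, such a jail-per-jail whitelist could be sketched as follows (the jail addresses and destination domains are illustrative assumptions):</p>

```
# Hypothetical squid.conf excerpt: the WAF jail may only reach its CA's
# OCSP responder, web server 2 only its external resource; deny the rest.
acl waf_jail   src 127.0.0.3/32
acl web2_jail  src 127.0.0.5/32
acl ocsp_hosts dstdomain ocsp.example-ca.net
acl web2_hosts dstdomain api.example-partner.net

http_access allow waf_jail  ocsp_hosts
http_access allow web2_jail web2_hosts
http_access deny all
```

<p>Each <code>http_access allow</code> line combines a source jail with the narrow set of destinations it is expected to reach, and everything else falls through to the final deny.</p>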
<p>Here you have to think about how to handle <span class="caps">SSL</span> connections:</p>
<ul>
<li>
<p>You may use end-to-end encryption from the inner jail up to the remote
service, but in this case the proxy won’t have access to the communication
details between the jail and the remote host.</p>
</li>
<li>
<p>You may use <span class="caps">SSL</span> termination on the proxy: this will allow you to enforce
tighter rules to control jail communication, but should the
proxy jail be compromised, an attacker will be able to
intercept all communication going through it.</p>
</li>
</ul>
<p>Personally, I decide this on a case-by-case basis with the following rule
of thumb:</p>
<ul>
<li>
<p>If a jail needs to access or transmit potentially sensitive data to
a very narrow subset of trusted websites, then it is usually fine to
allow encrypted communication with these websites.</p>
</li>
<li>
<p>If a jail needs to exchange non-sensitive data with various or untrusted
websites, then <span class="caps">SSL</span> termination on the proxy, associated with tighter
checking and logging of the communication, is usually preferable.</p>
</li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>I’m talking about <span class="caps">SSL</span> <em>termination</em>, not <span class="caps">SSL</span> <em>interception</em>, as there
is no point in encrypting the communication between the proxy and the inner
jail.
While the Squid proxy supports <span class="caps">SSL</span> termination, not all client tools support
delegating <span class="caps">SSL</span> to the proxy (personally I use a customized <code>curl</code> for this
purpose; I will release the patch as soon as I have a clean version of it).</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>FreeBSD users, note that I’ve found a serious flaw affecting FreeBSD from
version 7.0 to 10.3 inclusive (while supported until April 2018, there is no
fix planned for this version): an attacker has direct access to <span class="caps">SHM</span>
objects (as used by the popular Squid proxy) from any jail.</p>
<p>In the current example, an attacker would be able to <span class="caps">DOS</span> a Squid proxy (and
potentially execute arbitrary code in the <em>Proxy</em> jail) from
any jail (even from <em>Web server 1</em>, which should normally not be able to
interact at all with the <em>Proxy</em> jail).</p>
<p>More details are available in my <a href="/posts/2017/08/02/freebsd-jail-shm-hole/" title="FreeBSD jail SHM hole">dedicated article</a>.</p>
</div>
<h4 id="the-database-jail"><a class="toclink" href="#the-database-jail">The <em>Database</em> jail</a></h4>
<p>Finally, the databases are stored in jails located even deeper in the host.</p>
<p>As it is not possible, even for the root user, to change or spoof a jail’s <span class="caps">IP</span>,
checking the source <span class="caps">IP</span> on the database server side is a very
effective way to ensure that each jail only accesses the databases it is
supposed to access.</p>
<p>Nevertheless, if attackers manage to execute arbitrary code in the
database jail, they will still be able to access other websites’
databases.
As long as you keep your system updated, the probability of such an attack
is quite low (don’t forget that such attackers must already have taken
control of a web server as a prerequisite); nevertheless, nothing prevents
you from using several database jails to mitigate or get rid of this risk.</p>
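<p>As one possible concretization of the source-<span class="caps">IP</span> check described above (the article does not name a specific database engine), PostgreSQL’s host-based authentication can express it directly; the database names, users and addresses below are examples:</p>

```
# Hypothetical pg_hba.conf excerpt: each web server jail may only reach
# its own database, matched by its fixed loopback source address.
# TYPE  DATABASE  USER       ADDRESS        METHOD
host    web1_db   web1_user  127.0.0.4/32   md5
host    web2_db   web2_user  127.0.0.5/32   md5
```

<p>Because a jail cannot spoof its address, the <code>ADDRESS</code> column alone is enough to bind each database to its jail.</p>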
<h3 id="implementation"><a class="toclink" href="#implementation">Implementation</a></h3>
<p>The example above shows a classical web server situation, but this can
be expanded to any kind of service, and the recipe is always the same:</p>
<ul>
<li>Divide the service into functional blocks, describing the communication
between each functional block.</li>
<li>Design each jail as the implementation of a functional block.</li>
<li>Set up the jails on a local interface and control jail communication
using the host firewall.</li>
</ul>
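<p>To make the last step concrete, a FreeBSD jail bound to a dedicated loopback address can be declared as below (the path, hostname and address are illustrative):</p>

```
# Hypothetical /etc/jail.conf excerpt implementing one functional block:
proxy {
    path = "/jails/proxy";
    host.hostname = "proxy.internal";
    ip4.addr = "lo0|127.0.0.5/32";   # loopback only, /32 netmask
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

<p>Binding the jail to <code>lo0</code> leaves the host firewall as the single place deciding which packets ever reach it.</p>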
<h4 id="divide-the-service-in-functional-blocks"><a class="toclink" href="#divide-the-service-in-functional-blocks">Divide the service into functional blocks</a></h4>
<p>If done correctly, this is what will make your server much easier to
administer, as jails turn into independent and potentially reusable modules.</p>
<p>If we take the example above:</p>
<ul>
<li>
<p>Inter-jail communication is documented from a purely functional
point of view.
Replacing one piece of software with another may become as easy as replacing a jail
with another one; this neither depends on nor affects the other jails’ configuration.</p>
</li>
<li>
<p>If we screw something up in <em>Web server 1</em> or need to restart it for
maintenance purposes, this will not impact <em>Web server 2</em>’s activity.</p>
</li>
</ul>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Containers’ modularity opened the way to new software solutions
allowing you to easily develop, share and install software stacks.
A classical example of this is the Docker system.</p>
<p>Bear in mind, however, that the target of these solutions is ease of use,
<a href="https://security.stackexchange.com/q/100389/32746#100578" rel="external" title="More details on Docker security">not security</a>.</p>
<p>Use them to accelerate the deployment of new applications if you like,
but don’t use them as a “secure containment system”.</p>
</div>
<h4 id="design-each-jail"><a class="toclink" href="#design-each-jail">Design each jail</a></h4>
<p>Thanks to the isolation between the host and guest systems, it becomes very
easy to design and test the jail in a <span class="caps">QA</span> environment, try various solutions
and settings, etc.</p>
<h4 id="setup-the-jails"><a class="toclink" href="#setup-the-jails">Setup the jails</a></h4>
<p>Here are a few notes about setting up the jails:</p>
<ul>
<li>
<p>FreeBSD jails as well as Linux Containers are tied to an <span class="caps">IP</span> address and an interface.</p>
<ul>
<li>Always use the loopback interface to strictly isolate the jails from the
network (even for jails communicating with the outside, such as the <em><span class="caps">WAF</span></em>
and <em>Proxy</em> jails in the example above: it’s the host firewall’s job to
deliver incoming packets).</li>
<li>The 127.0.0.0/8 network range associated with the loopback interface
offers more than 16 million addresses; feel free to use this range for
your jails’ <span class="caps">IP</span> addresses.</li>
<li>Use a netmask of /32 (255.255.255.255) to avoid any side effects.</li>
</ul>
</li>
<li>
<p>FreeBSD doesn’t have anything similar to Linux’s user namespaces.
It is therefore recommended to assign system-wide unique UIDs to your jail
users.
Personally, I use the jail <span class="caps">IP</span> as a prefix for the <span class="caps">UID</span> to keep things clear
and easy to maintain, but this is my own concoction and you may find a
different system that better suits your needs.</p>
<p>Under the hood, <span class="caps">LXC</span> applies the same process by mapping each
guest <span class="caps">UID</span> to a host <span class="caps">UID</span> range, but it makes this transparent to the guest
system and also maps the guest’s root account to an unprivileged <span class="caps">UID</span>
(a very good thing missing from FreeBSD jails).</p>
</li>
<li>
<p>FreeBSD jails are quite secure by default, and offer a few <code>sysctl</code> options
to tighten the security even further.
This is not the same story with Linux <span class="caps">LXC</span>, where you will most likely need
to create your own <span class="caps">LXC</span> profile to get something hopefully secure.</p>
<p>Most default <span class="caps">LXC</span> profiles aim to be easy to deploy and use (see my note
above about Docker) and do not attempt to block a potentially malicious
actor who has taken control of a container.
I have had good results, though, by creating Ubuntu containers (Ubuntu is the prime
target of the <span class="caps">LXC</span> development team and the most stable profile for now) while
relying on the <em>gentoo.moresecure.conf</em> profile to harden the default Ubuntu profile.</p>
</li>
<li>
<p>As a <span class="caps">BSD</span> system, FreeBSD has a clear distinction between the base system
and third-party applications.
This allows the same base system to be mounted as a read-only file system into
each jail.
This improves security, makes updating easier and saves disk
space, as only third-party binaries and variable data need to be
stored in a jail’s directory.
The result is that a new jail only occupies around 5 <span class="caps">MB</span>.</p>
<p>There is no such distinction in Linux, where everything is a package, from
the system kernel up to the document writer: for now at least, each
container must have its own complete Linux distribution to work.
A minimal Ubuntu <span class="caps">LXC</span> container is around 300 <span class="caps">MB</span>, which makes
Linux containers considerably heavier than their FreeBSD counterparts.
Using an embedded distribution such as Alpine Linux as the guest should be
a game changer in this regard (an Alpine Linux container is about 10 <span class="caps">MB</span>),
but the last time I checked, Alpine Linux containers were still in early
development (buggy scripts and missing files).</p>
</li>
<li>
<p>Since the jails have no direct Internet connection, traditional update commands
will no longer work.
There are three ways to handle software updates in your jails:</p>
<ul>
<li>
<p>The quickest way is to bypass just enough of the jail isolation for the
update process, making the update command very close to a standard update.</p>
<p>In FreeBSD, you achieve this by issuing the <code>pkg</code> command from the host
and using its <code>-c</code> parameter to tell it to chroot into your jail.
In Linux, you execute the update command from within the container
using the <code>lxc-execute</code> command, but with a specific configuration file
where you do not enable the network namespace, thus sharing the host’s network namespace.</p>
<p>While this should be safe to use on newly created jails, in the long
run such commands may become unsafe as they create a weakness in the
jail isolation (especially the FreeBSD solution).</p>
</li>
<li>
<p>Use a proxy jail allowing communication only with the update sources
and, optionally, started only when you are actually applying updates.</p>
<p>While possibly cleaner than the previous solution, personally I don’t
like it, as I prefer my isolated jails to be really isolated:
no network access at all.</p>
</li>
<li>
<p>Store your packages in a directory accessed by the jails through a
read-only null mount point.</p>
<p>In FreeBSD, such a directory is usually filled with customized packages
generated using <a href="https://www.freshports.org/ports-mgmt/poudriere/" rel="external" title="More information on the Poudriere package">Poudriere</a>.
In Linux this depends on the distribution used; on Debian-based systems
the <code>apt-get</code> command offers a <code>-d</code> (<code>--download-only</code>) option
allowing you to download a package and all its dependencies at once
(downloaded packages are stored by default under
<em>/var/cache/apt/archives</em>).</p>
<p>This is the safest and recommended solution.</p>
</li>
</ul>
</li>
</ul>
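<p>As a small illustration of the “jail <span class="caps">IP</span> as a <span class="caps">UID</span> prefix” convention mentioned above (the exact encoding below is my assumption of one possible scheme, not a standard), a helper can derive a system-wide unique <span class="caps">UID</span> from the jail address and the in-jail <span class="caps">UID</span>:</p>

```shell
# Sketch of a "jail IP as UID prefix" helper (hypothetical convention):
# combine the last octet of the jail's loopback IP with the in-jail UID,
# zero-padded, to obtain a system-wide unique UID.
jail_uid() {
    ip_suffix=${1##*.}                       # last octet, e.g. 10 for 127.0.0.10
    printf '%d%04d\n' "$ip_suffix" "$2"      # e.g. jail .10, user 80 -> 100080
}

jail_uid 127.0.0.10 80
```

<p>Any scheme works as long as two different jails can never end up sharing a <span class="caps">UID</span> on the host.</p>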
<p>Maybe later I will write more step-by-step guide about FreeBSD jails and
Linux Containers, but for now that’s all folks!</p>How to examine Android SELinux policy2016-08-15T00:00:00+02:002016-08-15T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2016-08-15:/posts/2016/08/15/examine-android-selinux-policy/<p>Examining SELinux policy should be a trivial thing, but Android turns this into some kind of nightmare.
In fact, Google has designed Android mainly from a consumer perspective, and not for
power users.
The result is that, as soon as you want to do something outside of using the
latest Facebook app or playing Candy Crush, you very quickly find yourself
back in realm of early-2000 Linux, when a developer-like knowledge was required
to change what should be simple settings.
I believe that the situation will fastly evolve as Android system gets more
mature, but for now we have to do with what we have got…</p>
<p>As you said, there are two reasons why it is necessary to compile your own
SELinux toolset:</p>
<ul>
<li>The system provided toolset is usually a version behind.
While Android’s SELinux relies on policy <span class="caps">DB</span> version 30, current Linux boxes
usually handle only version up …</li></ul><p>Examining SELinux policy should be a trivial thing, but Android turns this into some kind of nightmare.
In fact, Google has designed Android mainly from a consumer perspective and not for
power users.
The result is that, as soon as you want to do something other than using the
latest Facebook app or playing Candy Crush, you very quickly find yourself
back in the realm of early-2000s Linux, when developer-like knowledge was required
to change what should be simple settings.
I believe that the situation will evolve quickly as the Android system gets more
mature, but for now we have to make do with what we have got…</p>
<p>As it turns out, there are two reasons why it is necessary to compile your own
SELinux toolset:</p>
<ul>
<li>The system-provided toolset is usually a version behind.
While Android’s SELinux relies on policy <span class="caps">DB</span> version 30, current Linux boxes
usually handle versions only up to 29.</li>
<li>Even if it were more recent, it would not help: while building SELinux
from <a href="https://github.com/SELinuxProject/selinux" rel="external">upstream code</a> (which is
easily done, at least on Fedora machines following upstream
recommendations) effectively allows the system to handle policy <span class="caps">DB</span> version
30, Android’s SELinux has been heavily modified
(the <a href="https://android.googlesource.com/platform/external/libselinux/+/master/README.android" rel="external">Google documentation</a> highlights a few modifications),
so trying to handle Android’s SELinux policy fails due to syntax and parsing errors.</li>
</ul>
<p>So, to keep going on the Android SELinux analysis quest, we will have to get
our hands dirty… in the cleanest possible way:</p>
<ul>
<li>First we will setup a sane environment.</li>
<li>Once this is done we will compile Android’s SELinux libraries and first tools.</li>
<li>On top of them we will build SELinux tools.</li>
<li>We will finish by adding a few supplementary utilities.</li>
</ul>
<h3 id="build-a-clean-environment"><a class="toclink" href="#build-a-clean-environment">Build a clean environment</a></h3>
<h4 id="environment-properties"><a class="toclink" href="#environment-properties">Environment properties</a></h4>
<p>The cleanest, and possibly the only reliably working, way is to
dedicate an environment to your Android work:</p>
<ul>
<li>
<p>A virtual machine is perfectly fine (if not the best option).
Prefer a VMware one, since you will have to connect your phone
through <span class="caps">USB</span> to the guest system: the free alternative Qemu doesn’t seem to
handle this task very well, and I did not try other virtualization software.</p>
</li>
<li>
<p>It will need to be a 64-bit system, otherwise the code will simply not
compile due to integers being of the wrong size.</p>
</li>
<li>
<p>It is <em>strongly</em> recommended, and possibly mandatory, to use an Ubuntu system.
Feel free to use Xubuntu instead if you prefer <span class="caps">XFCE</span>’s lighter desktop
environment: this does not change the system’s core and available packages,
and will have no impact on your Android-related work (whatever I say about
Ubuntu in this procedure also applies to Xubuntu).
You may find in Android’s SELinux source tree some ReadMe files
recommending the use of Fedora instead; these files are inherited from
the upstream <span class="caps">NSA</span> SELinux project and their content does not necessarily match
Google’s Android.</p>
</li>
<li>
<p>The exact version of Ubuntu to use depends on the version of Android you
want to build.
For Android 6.0, Ubuntu 14.04 (Trusty) is recommended.
Check <a href="https://source.android.com/source/requirements.html" rel="external">Google requirements page</a>
for more information.</p>
</li>
<li>
<p>You will need plenty of disk space (at least <span class="caps">50GB</span> if you only plan
SELinux-related investigation, at least <span class="caps">100GB</span> if you plan for a complete
build of Android). <span class="caps">CPU</span> and memory are less relevant: they only impact the time
of a full build and will have no real impact on SELinux-related tasks.</p>
</li>
</ul>
<p>Using Ubuntu has two main advantages:</p>
<ul>
<li>
<p>By using the recommended system, you are working in a well-known and
well-tested environment: system libraries, tools and packages are at the
version and location expected by the project.</p>
</li>
<li>
<p>And, more specifically in our current case: Ubuntu itself relies on AppArmor,
an SELinux alternative; it does not use SELinux.
The good news is that you will therefore be able to install Android’s
SELinux tools and binaries system-wide without risking altering the system’s reliability.</p>
</li>
</ul>
<h4 id="environment-installation-procedure"><a class="toclink" href="#environment-installation-procedure">Environment installation procedure</a></h4>
<p>You can install Ubuntu the traditional way by starting from a full-fledged
live <span class="caps">DVD</span>, but a faster alternative is to use a netboot install (text-mode
install) and select the desktop environment you prefer at the end.
Doing so will save you the initial update time by directly installing
up-to-date package versions instead of first installing obsolete ones and
then being asked to apply 389 pending updates on the first boot.</p>
<p>The <span class="caps">ISO</span> for Ubuntu/Xubuntu 14.04 (same <span class="caps">ISO</span>) netboot installer is
<a href="http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot" rel="external">available here</a>.</p>
<p>To skip VMware’s troublesome “Easy Install” feature, it’s a good habit to start
by selecting the <em>“I will install the operating system later”</em> option.</p>
<p>Be sure to select <em>Linux</em>, then <em>Ubuntu 64 bits</em> as guest <span class="caps">OS</span>.</p>
<p>The <span class="caps">VM</span> will need the following resources:</p>
<ul>
<li>Mandatory: disk space must be <strong>at the very least</strong> <span class="caps">40GB</span> (the default 20 <span class="caps">GB</span>
will <strong>not</strong> be enough, the source code alone takes more space than that);
higher is recommended.
A full build requires a 100 <span class="caps">GB</span> disk minimum, and this is the value I usually
take.
Do not forget that this setting is just a maximum limit: the actual size
taken by the <span class="caps">VM</span> grows dynamically with the guest’s requests.</li>
<li>Optional: increase the <span class="caps">RAM</span> from 1024 to at least 2048 or higher (depending on
your host capacity; I use 4096).</li>
<li>Optional: increase the number of processor cores from 1 to 2 or higher
(depending on your host capacity; I use 3).</li>
<li>The <span class="caps">CD</span>-Rom must point to the installation <span class="caps">ISO</span> file.</li>
<li>You may want to switch <span class="caps">USB</span> from the default 1.1 to 2.0 as the former may
give warnings when you connect your device. Depending on your usage, you
can also safely uncheck <em>“Automatically connect new <span class="caps">USB</span> devices”</em> and
<em>“Share Bluetooth devices with the virtual machine”</em>.</li>
<li>Depending on your environment, you may also need to tweak display settings
(disable 3D, enforce a screen size).</li>
</ul>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<ul>
<li>If you chose the netboot install, do not forget to select your
desktop environment (<em>Ubuntu desktop</em> or <em>Xubuntu desktop</em>) when
reaching the <em>Software selection</em> screen, or you will end up with a
minimal text-only environment.</li>
<li>Upon first boot, <strong>refuse</strong> to upgrade to the latest release: the whole
point here is to stay in 14.04.</li>
</ul>
</div>
<p>Upon first boot, one of the first things you may want to do is install the Linux guest tools:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>sudo apt-get install open-vm-tools
</pre></div>
</td></tr></table></div>
<p>This package sets boot-time triggers; its installation will therefore be
complete only after a guest restart.</p>
<h3 id="fetch-android-source-code"><a class="toclink" href="#fetch-android-source-code">Fetch Android source code</a></h3>
<p>While similar, the procedure details depend on the chosen <span class="caps">ROM</span>:</p>
<ul>
<li>For CyanogenMod, <a href="https://wiki.cyanogenmod.org/w/Devices" rel="external">search for your device</a>
(select the vendor first), then click on the <em>“How to build CyanogenMod”</em>
link to get instructions adapted to your device.</li>
<li>For <span class="caps">AOSP</span>, follow the procedure which <a href="https://source.android.com/source/initializing.html" rel="external">starts here</a>.</li>
</ul>
<p>It can be worth noting that CyanogenMod bundles in its source tree a tool
allowing you to unpack <code>boot.img</code> files.
To say it differently, CyanogenMod provides a tool which will allow you to
access the <code>sepolicy</code> file stored in devices and <span class="caps">ROM</span> archives.
Google’s <span class="caps">AOSP</span> does not provide such a tool, so if you have no other imperative,
using CyanogenMod’s source tree may be the most convenient choice; otherwise
you will have to install this tool separately (which is quick and easy to do, so no worry here).</p>
<p>Here I’m following the CyanogenMod 13.0 (Android 6.0) procedure.
Explanations of the commands used are available on the pages linked above.
Please read them, the typescript below is given only for reference purposes.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>While I use <code>apt-get</code> in this post to stick to the lowest common
denominator and keep everybody happy, you may prefer to use <code>aptitude</code>
instead, since it handles dependencies in a better way:
when removing a package which required the installation of some
dependencies, these dependencies will be removed too, leaving your system cleaner.</p>
<p><span class="caps">AFAIK</span> the <code>aptitude</code> command must be installed in Ubuntu but is available
by default on Xubuntu.</p>
</div>
<!-- -->
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18</pre></div></td><td class="code"><div class="codehilite"><pre>sudo apt-get install bison build-essential curl flex git gnupg gperf <span class="se">\</span>
libesd0-dev liblz4-tool libncurses5-dev libsdl1.2-dev libwxgtk2.8-dev libxml2 <span class="se">\</span>
libxml2-utils lzop maven openjdk-7-jdk pngcrush schedtool squashfs-tools <span class="se">\</span>
xsltproc zip zlib1g-dev g++-multilib gcc-multilib lib32ncurses5-dev <span class="se">\</span>
lib32readline-gplv2-dev lib32z1-dev
mkdir -p ~/bin
mkdir -p ~/android/system
<span class="nv">PATH</span><span class="o">=</span>~/bin:<span class="nv">$PATH</span>
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod u+x ~/bin/repo
<span class="nb">cd</span> ~/android/system/
git config --global user.name <span class="s2">"Your Name"</span>
git config --global user.email <span class="s2">"you@example.com"</span>
repo init -u https://github.com/CyanogenMod/android.git -b cm-13.0
repo sync
<span class="c"># Coffee time: around 20GB are being downloaded, this may take several hours.</span>
<span class="nb">source</span> ./build/envsetup.sh
breakfast
</pre></div>
</td></tr></table></div>
<p>Now you have a clean and nearly complete source tree. The proprietary blobs are
missing, but you don’t need them for SELinux related tasks.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Fetching the sources is a tedious process; it may be worth taking a snapshot
or a backup of your <span class="caps">VM</span> now.</p>
</div>
<h3 id="compile-and-install-androids-selinux-toolset-and-libraries"><a class="toclink" href="#compile-and-install-androids-selinux-toolset-and-libraries">Compile and install Android’s SELinux toolset and libraries</a></h3>
<p>Now the funny part of the trip begins ;) !</p>
<p>Until now the procedure should have been pretty straightforward.
The goal was mainly to ensure that you have the very same environment as me.
If you do, what follows should remain straightforward too.</p>
<p>Under the hood, Google does not hesitate to apply deep changes to Android’s
source code between versions, therefore the exact compilation steps will quite
certainly be version dependent (for instance, the <span class="caps">AOSP</span> master branch shows that the
<code>sepolicy/</code> directory
<a href="https://android.googlesource.com/platform/external/sepolicy/+/c81ebe522c66dd6e6ef4419ecc7737e2e1740d59" rel="external">will be moved</a>).</p>
<p>I will first share my exact procedure to compile and install Android’s SELinux
libraries and toolset; then, in order to keep this post relevant over
time, I will add some notes about the generic approach to follow in order
to solve most compilation issues.</p>
<h4 id="step-by-step-procedure"><a class="toclink" href="#step-by-step-procedure">Step-by-step procedure</a></h4>
<p>Android’s SELinux libraries provide the abstraction layer which will allow
upper-layer software to deal with Android-specific SELinux policy files.
We will therefore need to compile and install them first (which, in itself,
actually represents the core of the difficulties here, until you’ve found your way).</p>
<p>We will then be able to build and install the SELinux tools. As we will see,
fortunately these do not need to be Android-specific, they only need to match
the SELinux library version.</p>
<p>This procedure has been tested both using CyanogenMod and <span class="caps">AOSP</span> source code trees.</p>
<h5 id="compile-and-install-android-selinux-libraries-and-first-tools"><a class="toclink" href="#compile-and-install-android-selinux-libraries-and-first-tools">Compile and install Android SELinux libraries and first tools</a></h5>
<p>First, install the dependencies:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>sudo apt-get install libapol-dev libaudit-dev libdbus-glib-1-dev libgtk2.0-dev <span class="se">\</span>
libustr-dev python-dev python-networkx swig xmlto
</pre></div>
</td></tr></table></div>
<p>In this post the variable <code>$ANDROID_BUILD_TOP</code> stores your source location
(the directory where you issued the <code>repo sync</code> command).
Feel free to change its name as you like.</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3</pre></div></td><td class="code"><div class="codehilite"><pre><span class="nv">ANDROID_BUILD_TOP</span><span class="o">=</span>~/android/system
<span class="nb">cd</span> <span class="nv">$ANDROID_BUILD_TOP</span>
<span class="nb">source</span> ./build/envsetup.sh
</pre></div>
</td></tr></table></div>
<p>By default the policy core utils compilation fails due to <code>restorecond</code><span class="quo">‘</span>s
Makefile being unable to locate some libraries. You have to edit this Makefile
in order to use paths dynamically generated by <code>pkg-config</code> instead of
hardcoded ones (do not confuse backticks with single quotes!):</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>sed -i <span class="s1">'s/^CFLAGS ?= -g -Werror -Wall -W$/& `pkg-config --cflags --libs dbus-1 gtk+-2.0`/'</span> <span class="se">\</span>
<span class="nv">$ANDROID_BUILD_TOP</span>/external/selinux/policycoreutils/restorecond/Makefile
</pre></div>
</td></tr></table></div>
<p>Feel free to open the Makefile with some text editor to ensure that the
modification has been correctly taken into account.</p>
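<p>For instance, a quick <code>grep</code> confirms whether the <code>pkg-config</code> call is now part of the <code>CFLAGS</code> line (a simple check using the paths defined earlier in this post):</p>

```shell
# Print the patched CFLAGS line; no output means the sed command did not match.
grep -n 'pkg-config --cflags' \
    "$ANDROID_BUILD_TOP/external/selinux/policycoreutils/restorecond/Makefile"
```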
<p>And now compile and install:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre> 1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19</pre></div></td><td class="code"><div class="codehilite"><pre><span class="nb">cd</span> <span class="nv">$ANDROID_BUILD_TOP</span>/external/bzip2/
make -f Makefile-libbz2_so
sudo make install
<span class="nb">cd</span> <span class="nv">$ANDROID_BUILD_TOP</span>/external/libcap-ng/libcap-ng-0.7/
./configure
make
sudo make install
<span class="nb">cd</span> <span class="nv">$ANDROID_BUILD_TOP</span>/external/selinux/
make -C ./libsepol/
sudo make -C ./libsepol/ install
<span class="nv">EMFLAGS</span><span class="o">=</span>-fPIC make -C ./libselinux/
sudo make -C ./libselinux/ install
make -C ./libsemanage/
sudo make -C ./libsemanage/ install
make
sudo make install
make swigify
sudo make install-pywrap
sudo cp ./checkpolicy/test/<span class="o">{</span>dispol,dismod<span class="o">}</span> /usr/bin/
</pre></div>
</td></tr></table></div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Do not forget the <code>EMFLAGS=-fPIC</code> environment variable setting when
building <code>libselinux</code>.
Omitting it will not generate any error yet, but at the next step you will be unable
to build SETools. In case you missed it or did anything else wrong, simply
issue a <code>make clean</code> and restart your compilation.</p>
</div>
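<p>Recovering from a missed <code>EMFLAGS</code> boils down to cleaning and rebuilding only the <code>libselinux</code> subdirectory (a sketch reusing the paths from the procedure above):</p>

```shell
# Rebuild libselinux from scratch, this time with position-independent code.
make -C "$ANDROID_BUILD_TOP/external/selinux/libselinux/" clean
EMFLAGS=-fPIC make -C "$ANDROID_BUILD_TOP/external/selinux/libselinux/"
sudo make -C "$ANDROID_BUILD_TOP/external/selinux/libselinux/" install
```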
<h5 id="compile-and-install-selinux-tools"><a class="toclink" href="#compile-and-install-selinux-tools">Compile and install SELinux tools</a></h5>
<p>SELinux tools are provided in a prebuilt form which includes:</p>
<ul>
<li>Python scripts (and their shell script wrappers) within the
<code>$ANDROID_BUILD_TOP/external/selinux/prebuilts/bin/</code> directory</li>
<li>Python packages (including <code>*.o</code> compiled files) below
<code>$ANDROID_BUILD_TOP/prebuilts/python/linux-x86/2.7.5/lib/python2.7/site-packages/</code>.</li>
</ul>
<p>I would have expected the source code of these tools to be available below
<code>$ANDROID_BUILD_TOP/external</code>, but it isn’t.
Actually, I did not find any place where Google shared the exact version of
SETools they used (<span class="caps">FYI</span> the <span class="caps">GPL</span> only mandates to share the code if it has been
modified), so we will have to make an educated guess and do the best we can.</p>
<p>The tools themselves are Python scripts, a new evolution brought by SETools 4
(in SETools 3, commands like <code>sesearch</code> were binary executables coded in C).
However, the tools still report version 3.3.8:</p>
<div class="codehilite"><pre><span class="gp">user@host:~$</span> <span class="nv">$ANDROID_BUILD_TOP</span>/external/selinux/prebuilts/bin/sesearch --version
<span class="go">3.3.8</span>
</pre></div>
<p>So my guess is that Google took some early development snapshot of SETools 4.
Up to the 4.0.0 beta, SETools relied on <code>libsepol</code> version 2.4; starting
with the 4.0.0 release it relies on version 2.5 of the library, which is not
compatible with the version of SELinux bundled in Android 6.0 (you can try to
compile it, it will just fail).</p>
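<p>You can cross-check which <code>libsepol</code> version your tree actually bundles: the upstream SELinux userspace layout keeps a <code>VERSION</code> file in each subproject (assuming Android’s mirror preserved it):</p>

```shell
# Prints the bundled libsepol version (expected to be 2.4 on an Android 6.0 tree).
cat "$ANDROID_BUILD_TOP/external/selinux/libsepol/VERSION"
```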
<p>The wisest choice therefore seems to be SETools 4.0.0 beta.</p>
<p>Install supplementary dependencies:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>sudo apt-get install python-setuptools
</pre></div>
</td></tr></table></div>
<p>Download and extract the source code:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4</pre></div></td><td class="code"><div class="codehilite"><pre><span class="nb">cd</span> ~/android/
wget https://github.com/TresysTechnology/setools/archive/4.0.0-beta.tar.gz
tar xzf 4.0.0-beta.tar.gz
<span class="nb">cd</span> ./setools-4.0.0-beta/
</pre></div>
</td></tr></table></div>
<p>Due to <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=488274" rel="external">a bug</a>
affecting Flex 2.5, we need to remove <code>-Wredundant-decls</code> from the compiler flags:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>sed -i <span class="s1">'/-Wredundant-decls/d'</span> ./setup.py
</pre></div>
</td></tr></table></div>
<p>And finally compile and install:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>python ./setup.py build
sudo python ./setup.py install
</pre></div>
</td></tr></table></div>
<h4 id="generic-procedure-or-how-to-unstuck-yourself"><a class="toclink" href="#generic-procedure-or-how-to-unstuck-yourself">Generic procedure (or “How to unstuck yourself”)</a></h4>
<p>In case the procedure above did not work for you, here is a higher-level
view on how to make progress.</p>
<p>There is sadly no magic (and no helper :( ) here: the only way to get
this code to compile is the classical yet dreaded cyclic “try-and-see” approach.</p>
<p>Try to compile a first time; it will most likely fail due to some <code>*.h</code> file
not being found:</p>
<ol>
<li>
<p>Search in Android’s <code>external/</code> directory:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>find <span class="nv">$ANDROID_BUILD_TOP</span>/external -name filename.h
</pre></div>
</td></tr></table></div>
<p>If you find the requested file, then this means that a specific version of
the corresponding library or tool has been bundled within Android source code.
You should therefore not try to install it from Ubuntu’s package system, but
instead compile and install the version bundled in Android source code.</p>
<p>Be aware that this goes against the general advice you may find on forums:
<em>“Your compilation fails because of this library missing? Install this package then it will be fine!”</em>.
By doing this you will most probably just run into worse issues: if a specific
version is bundled, it is most probably because a specific version is needed
(due to compatibility issues or because this version contains specific
changes from Google).</p>
<p><span class="caps">BTW</span>, if you are wondering: of course this library or tool may also have
dependencies raising errors due to some <code>*.h</code> file not being found, and yes,
you should apply this very same cyclic “try-and-see” approach to them.</p>
</li>
<li>
<p>Search systemwide:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>find / -name filename.h 2>/dev/null
</pre></div>
</td></tr></table></div>
<p>If you find the “missing” file already present on your system in some
standard shared library location, this means that this dependency is
probably already met in your environment but the Makefile which raised the
error is too dumb to find it.</p>
<p>If you call this Makefile directly, you may be able to
set some environment variable to fix this (<code>LIBDIR=/usr/lib make</code> for
instance); otherwise you may need to modify the Makefile itself (the
<code>pkg-config</code> command may be of precious help to automatically generate
the missing build parameters).</p>
</li>
<li>
<p>Search in the packaging system:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>apt-cache search filename-dev
</pre></div>
</td></tr></table></div>
<p>Where <code>filename-dev</code> represents the name of the missing file in lowercase
with the <code>.h</code> extension replaced by the <code>-dev</code> suffix (for instance, if
<code>Python.h</code> is not found, search for <code>python-dev</code>).
Some tweaking in the exact name may be needed to find the right package.</p>
</li>
<li>
<p>If you remain stuck and even a quick search on the Internet did not provide
any clear answer, then <code>apt-file</code> will be your best friend.
<code>apt-file</code> is not installed by default; you need to install it and generate
its database:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2</pre></div></td><td class="code"><div class="codehilite"><pre>sudo apt-get install apt-file
sudo apt-file update
</pre></div>
</td></tr></table></div>
<p><code>apt-file</code> allows you to search for packages (even uninstalled ones)
providing a particular file. To avoid getting too many results,
I recommend combining it with <code>grep</code> as below:</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>apt-file search filename.h <span class="p">|</span> grep -w filename.h
</pre></div>
</td></tr></table></div>
<p>If there is a package in Ubuntu’s repository providing this file, then
<code>apt-file</code> should be able to find it.</p>
<p>Once you’ve found the right package, install it using
<code>apt-get install packagename</code> where <code>packagename</code> is your package’s name.</p>
</li>
</ol>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>If you screwed something up on your system, the command to reinstall a package
is: <code>apt-get install --reinstall pkg_name</code>.</p>
<p>It will work even when a classical remove <span class="amp">&</span> install would not be possible
due to breaking dependencies (which is most likely the case for system libraries).</p>
</div>
<h3 id="supplementary-tools"><a class="toclink" href="#supplementary-tools">Supplementary tools</a></h3>
<p>At this point, you should have a clean environment allowing you to
investigate Android’s SELinux rules in both compiled and source formats.</p>
<p>However, chances are that at the end of your investigation you will want
to take some action. In its current shape, your environment does not permit you
to modify a device’s <code>sepolicy</code> file.
In fact, this file cannot be easily replaced: it is part of the device root
directory, whose content is extracted at boot time from
a <span class="caps">RAM</span> disk file, which in turn is stored in the device’s boot image.</p>
<p>You are therefore still missing two things before your environment is complete:</p>
<ul>
<li>A way to access and modify the device’s boot image,</li>
<li>A way to modify its <code>sepolicy</code> file.</li>
</ul>
<p>Fortunately, these are precisely the subject of the last two sections of this
post! :)</p>
<h4 id="fetch-and-update-devices-boot-image"><a class="toclink" href="#fetch-and-update-devices-boot-image">Fetch and update device’s boot image</a></h4>
<p>Tools to fetch and update a device’s boot image can be used for a wide variety of
things apart from tampering with SELinux rules.
I have therefore created
<a href="https://android.stackexchange.com/a/154621/107603" rel="external">a dedicated answer</a>;
please refer to it.</p>
<h4 id="modify-devices-selinux-rules"><a class="toclink" href="#modify-devices-selinux-rules">Modify device’s SELinux rules</a></h4>
<p>You have two main possibilities here:</p>
<ul>
<li>Build a new <code>sepolicy</code> file from the rules in your source tree (search for
<code>.te</code> files to find them: <code>find $ANDROID_BUILD_TOP -name \*.te</code>, they are
spread into several directories).</li>
<li>Modify the <code>sepolicy</code> file currently used by the device.</li>
</ul>
<p>Unless you really need to build your rules from scratch, which is more of a
development-related task and therefore out of scope here, the second choice
seems by far the safest one, as you are sure that the only changes will be the
ones you explicitly made.</p>
<p>There has been a project to build a tool decompiling a <code>sepolicy</code>
file into a recompilable form, allowing to freely edit the rules in between.
However, this project has been abandoned at the proof-of-concept stage.
You will find all the information at the end of
<a href="https://ge0n0sis.github.io/posts/2015/12/exploring-androids-selinux-kernel-policy/" rel="external">this blog post</a>;
the rest of the article contains enough details to allow anyone interested
to take over.</p>
<p>The currently recommended way to alter <code>sepolicy</code> rules goes another route:
directly modifying the <code>sepolicy</code> binary file.
The <a href="https://bitbucket.org/joshua_brindle/sepolicy-inject" rel="external">sepolicy-inject</a> tool
allows just that and is actively maintained.</p>
<p>For completeness’ sake, note that
<a href="https://github.com/phhusson/sepolicy-inject" rel="external">a fork</a> of this tool exists.
It adds a few features, some of them being on the original author’s to-do list
(like the possibility to remove a rule); don’t ask me why they chose to fork
instead of contributing…</p>
<p>To compile and install <code>sepolicy-inject</code>, simply proceed as follows:</p>
<div class="codehilite"><pre>cd ~/android/
git clone https://bitbucket.org/joshua_brindle/sepolicy-inject.git
cd ./sepolicy-inject/
LIBDIR=/usr/lib make
sudo cp ./sepolicy-inject /usr/bin/
</pre></div>
<h4 id="use-case-example"><a class="toclink" href="#use-case-example">Use-case example</a></h4>
<p>Let’s say for instance that you want to add the authorization matching the following
error message:</p>
<div class="codehilite"><pre>avc: denied { read } for pid=128 comm="file-storage"
path="/data/media/0/path/to/some/file"
dev="mmcblk0p28" ino=811035 scontext=u:r:kernel:s0
tcontext=u:object_r:media_rw_data_file:s0 tclass=file permissive=0
</pre></div>
<p>You will need to fetch the device’s boot image, then unpack it to get access to
its <code>sepolicy</code> file.</p>
<p>A quick check using <code>sesearch</code> shows that there is indeed no allow rule (yet!):</p>
<div class="codehilite"><pre><span class="gp">user@host:~$</span> sesearch -A -s kernel -t media_rw_data_file -c file -p <span class="nb">read</span> ./sepolicy
<span class="gp">user@host:~$</span>
</pre></div>
<p>The command has no output.</p>
<p>Then, use the command below to add the required rule (note the similarity
between <code>sesearch</code> and <code>sepolicy-inject</code> parameters):</p>
<div class="hilitewrapper"><table class="codehilitetable"><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="codehilite"><pre>sepolicy-inject -s kernel -t media_rw_data_file -c file -p <span class="nb">read</span> -P ./sepolicy
</pre></div>
</td></tr></table></div>
<p>Now we can call again our <code>sesearch</code> command:</p>
<div class="codehilite"><pre><span class="gp">user@host:~$</span> sesearch -A -s kernel -t media_rw_data_file -c file -p <span class="nb">read</span> ./sepolicy
<span class="go">allow kernel media_rw_data_file:file read;</span>
<span class="gp">user@host:~$</span>
</pre></div>
<p><code>sesearch</code> output shows that the policy has correctly been updated.</p>
<p>You can now repack the device’s <code>boot.img</code> file and flash it back to the device.
Checking the last modification time of the <code>/sepolicy</code> file is an easy way to
ensure that your device is now running the newly updated <code>sepolicy</code> file.</p>
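<p>With <code>adb</code> access, this check is a one-liner (assuming a device reachable through <code>adb</code>):</p>

```shell
# The timestamp shown should match the time you flashed the new boot image.
adb shell ls -l /sepolicy
```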
<h3 id="conclusion"><a class="toclink" href="#conclusion">Conclusion</a></h3>
<p>You should now have a complete environment allowing you to freely inspect and
modify Android devices SELinux policies. Enjoy! :)</p>
<p>As a side note, there are also tools allowing you to analyze and modify the SELinux
policy <a href="https://android.stackexchange.com/q/152186/107603" rel="external">directly from the device</a>.</p>
<hr/>
<p class="footnote">Article based on a <a href="https://android.stackexchange.com/q/128965/107603#154947" rel="external">StackExchange answer</a>.</p>What is the difference between HTTP and HTTPS with a self-signed certificate?2015-08-28T00:00:00+02:002015-08-28T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2015-08-28:/posts/2015/08/28/what-is-the-difference-between-http-and-https-with-a-self-signed-certificate/<h3 id="security-difference"><a class="toclink" href="#security-difference">Security difference</a></h3>
<p>First, let’s talk about <span class="caps">SSL</span> (now called <span class="caps">TLS</span> by the way), which adds the ‘S’ at
the end of <span class="caps">HTTP</span><strong>S</strong> and is in charge of “<em>securing the communication</em>”.
The key to answering this question is to fully understand what we mean by
“securing the communication”.</p>
<p><span class="caps">SSL</span>, no matter if it is a self-signed certificate which is being used or one
signed by a trusted <span class="caps">CA</span>, will ensure that the communication between you and the
remote host remains confidential and that no one can tamper with any data exchanged.</p>
<p>The warning message shown by browsers about self-signed certificates is
therefore not about that.</p>
<p>But, how can you be <em>sure</em> that the remote host answering to your requests is
really the one you expect?
With public websites, for which you have no direct way to authenticate the
certificate by yourself, this is just impossible.
Here come external trusted <span class="caps">CA</span>s: by trusting a <span class="caps">CA</span> you assume that all
certificates it signs are used only for legitimate purposes to secure the
traffic with the server(s) explicitly mentioned in the certificate.</p>
<p>This is all this warning is about: your browser warns you that, while the
communication with the remote host is secured, it has no automated way to
authenticate the certificate (and therefore the remote host identity) and
relies on you to explicitly accept or refuse to establish the connection.</p>
<p>If the self-signed certificate is associated to one of your servers, you should
be able to proceed with this manual verification: you should be able to check
the certificate fingerprint, or at least you should know if the certificate has
been changed recently or not.</p>
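<p>Such a manual verification is typically done by comparing fingerprints out-of-band (a sketch; the host name and certificate path are placeholders):</p>

```shell
# On the server: print the SHA-256 fingerprint of the certificate file.
openssl x509 -noout -fingerprint -sha256 -in /etc/ssl/certs/server.pem

# On the client: print the fingerprint of the certificate actually presented.
openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256
```

<p>If the two fingerprints match, the host answering your requests is indeed presenting your certificate.</p>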
<p>Once this manual verification has been done, your browser offers you the
possibility to “remember” this certificate: this means that the browser will
associate this self-signed certificate to this <span class="caps">URL</span> and provide no warning in
the future since, now, the browser has an automated way to authenticate the certificate.</p>
<p>However, as soon as the self-signed certificate is changed on the server,
the browser will display the warning again, and it will again be up to the
end-user to determine whether this certificate change is normal and whether the new
certificate presented by the server is indeed a genuine one.</p>
<h3 id="user-experience-difference"><a class="toclink" href="#user-experience-difference">User experience difference</a></h3>
<p><span class="caps">IMHO</span> the default way browsers inform users about the current security level is
mostly ineffective.
Users just <a href="https://ux.stackexchange.com/questions/43295/do-users-care-about-https" rel="external">do not care about the padlock</a>, and
<a href="http://commerce.net/wp-content/uploads/2012/04/The%20Emperors_New_Security_Indicators.pdf" rel="external">do not notice when the <span class="caps">SSL</span> security is missing</a>.
Even users who care do not have access to the right information (nothing prevents a
website showing an <a href="https://en.wikipedia.org/wiki/Extended_Validation_Certificate" rel="external">Extended Validation Certificate</a> from using poor and weak
cryptography or relying on less secure
third-party content: the default browser interface will still be happy with that
and show the “top-notch security” green bar).</p>
<p>Fortunately, depending on the browser used, there may be some plugins trying to
remedy this situation.
On Firefox, you have <a href="https://addons.mozilla.org/en-US/firefox/addon/ssleuth/" rel="external">SSLeuth</a>, which by default adds a new notification
area to the left of the <span class="caps">URL</span> bar (next to the padlock when there is one).</p>
<p>This new notification area has the following properties:</p>
<ul>
<li>
<p>The background color ranges from red (no security: <span class="caps">HTTP</span>), through orange
(poor security setup) to blue and green (good and best security according
to current best-practices).</p>
</li>
<li>
<p>An option allows extending this color to the whole <span class="caps">URL</span> bar, so <span class="caps">HTTP</span>
websites will now display a fully red <span class="caps">URL</span> bar.</p>
</li>
<li>
<p>Finally, a score (between 0 and 10) is displayed to show an estimation of
the current <span class="caps">SSL</span>/<span class="caps">TLS</span> security level. It takes into account several criteria,
among them the type of certificate (self-signed, <span class="caps">CA</span> signed, Extended
Validation Certificate), the cryptographic configuration used, third-party
content security, etc. Clicking on the notification area provides all the score
details, mostly useful when the result is not the expected one
(aka “<em>Why is my bank website granted an orange <span class="caps">URL</span> bar?</em>”).</p>
</li>
</ul>
<hr/>
<p class="footnote">Article based on a <a href="https://security.stackexchange.com/q/98006/32746#98014" rel="external">StackExchange answer</a>.</p>Can SELinux really confine the root user?2015-08-20T00:00:00+02:002015-08-20T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2015-08-20:/posts/2015/08/20/can-selinux-really-confine-the-root-user/<p>Several projects such as [this one][play_root] propose a free root access to a
Linux box in order to demonstrate SELinux confinement abilities.
Even given a root access on a box, SELinux still prevents any harm from being done.</p>
<p>Is this for real or is there any trick behind such a setup?</p>
<p>This is indeed possible because SELinux does not actually care about the
current Unix user: all it sees is a supplementary metadata called the context
(which includes, among other fields, a <em>domain</em> field) and which lets SELinux
decide whether the requested action can be authorized or not.</p>
<p>What one usually conceives as the root user should be mapped in SELinux as a
root Unix user running either the <code>unconfined_t</code> or <code>sysadm_t</code> SELinux domain.
It is the classical full-powered omnipotent root user.</p>
<p>However, one could perfectly set up his system to spawn a root shell (I mean
a root Unix user shell) running the restricted <code>user_t</code> SELinux domain.
As per the SELinux policies, such a shell would be no different from any other
restricted user shell and would have no special privilege on the system, thus
effectively confining the root user.</p>
<p>Apart from an experimental point of view, doing such a thing <em>as-is</em> has no
practical use.
However, similar practices find their way into the real world.</p>
<p>A classic example can be a database administrator needing to be able to
stop/start the database daemons, edit configuration files, etc.
Without SELinux, all these actions would require the user to escalate to
root privileges (even if only for a single command line, via the
<code>sudo</code> tool for instance; but even that may be prone to leaks).</p>
<p>Thanks to SELinux, we can give this user a genuine root shell, but instead
of running <code>unconfined_t</code> or <code>sysadm_t</code> domains it will run the <code>dbadm_t</code>
domain.
This means that he will have more privileges than a restricted user, but these
new privileges will be limited to what is needed to administer the database
server: this user will not be able to tamper with other services or files, or run
administrative commands other than those strictly required to do his job.</p>
<p>In the same way, the web server and other services’ administrators could also have
root shells running in parallel on the same system: each one will see
their current Unix user being <em>root</em>, but thanks to SELinux each one will
effectively have different privileges, limited to what is needed
<a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege" rel="external">for their own purposes</a>.</p>
<hr/>
<p class="footnote">Article based on a <a href="https://unix.stackexchange.com/q/106595/53965#224373" rel="external">StackExchange answer</a>.</p>Do randomized PIDs bring more security?2015-05-23T00:00:00+02:002015-05-23T00:00:00+02:00WhiteWinterWolftag:www.whitewinterwolf.com,2015-05-23:/posts/2015/05/23/do-randomized-pids-bring-more-security/<h3 id="the-issue"><a class="toclink" href="#the-issue">The issue</a></h3>
<p>I read an article in the French magazine <span class="caps">MISC</span> (<a href="https://boutique.ed-diamond.com/misc/594-misc-74.html" rel="external">no. 74 - July/August, 2014</a>)
describing a flaw affecting <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0016" rel="external">stunnel</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0017" rel="external">libssh</a>.</p>
<p>To make things short, this flaw relies on the fact that a hello cookie created
by the server is generated using the current Unix timestamp (so up to the
second) and the <span class="caps">PID</span> of the process handling the request.
The exploit sends a high number of connection attempts in order to force the
server to generate duplicated cookies.
In the end, this attack aims to deduce the server’s private keys.</p>
<p>The author explains that such an attack is not feasible on systems using
traditional sequential PIDs, because it would require more than 65,000
connection attempts to be made in less than one second.</p>
<p>However, thanks to the random PIDs used on some “hardened” systems, the author
demonstrates that, at 20 connection attempts per second, there is
statistically a better than one-in-two chance of generating a duplicate in less
than 5 minutes.</p>
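<p>The author’s figure can be reproduced with a quick birthday-bound computation (a sketch assuming a 16-bit <span class="caps">PID</span> space and that only attempts made within the same second can collide):</p>

```shell
# Probability that 20 random 16-bit PIDs drawn within one second contain a
# duplicate, then the cumulated probability over 5 minutes of attempts.
awk 'BEGIN {
    N = 65536; n = 20; seconds = 300
    p_second = 1 - exp(-n * (n - 1) / (2 * N))  # birthday approximation
    p_total  = 1 - (1 - p_second) ^ seconds
    printf "per second: %.4f   over 5 minutes: %.2f\n", p_second, p_total
}'
```

<p>This prints a per-second probability of about 0.0029 and a cumulated probability of about 0.58 over 5 minutes, consistent with the “better than one-in-two” claim.</p>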
<p>For me, this clearly shows that random <span class="caps">PID</span>s create new security weaknesses
(remotely exploitable ones in this case) compared to sequential PIDs.</p>
<p>I was therefore wondering what exact threat random PIDs are trying
to address.
The answers I got from <a href="http://www.vanheusden.com/linux/rnd_pid_faq.php" rel="external">here</a>, <a href="http://lists.freebsd.org/pipermail/freebsd-security/2010-February/005550.html" rel="external">here</a> and <a href="https://books.google.fr/books?id=t2yA8vtfxDsC&pg=PT667&lpg=PT667&dq=random%20pid%20security&source=bl&ots=4i3xvu1Ea6&sig=CMwixJVq9xe4UwZAAC_6UFLursE&hl=fr&sa=X&ei=qA1LVYjPH8PyUImxgZAM&ved=0CDkQ6AEwAzgK#v=onepage&q=random%20pid%20security&f=false" rel="external">there</a>, together with my personal
experience, do not satisfy me:</p>
<ul>
<li>
<p><em>Poorly coded software</em> uses the <span class="caps">PID</span> to generate “unique” temporary
file names and as a main source of entropy: poorly coded software should
remain limited to “Hello world” projects and minesweeper ports, and should
never be used for sensitive tasks.</p>
<p>Moreover, weakening the whole <span class="caps">OS</span> just to bring a marginal security gain to
such software seems really counter-productive.</p>
<p>Finally, the flaws above show that as far as entropy is concerned,
sequential <span class="caps">PID</span>s would be even more secure than random ones…</p>
</li>
<li>
<p><em>Protection against unknown future threats</em>: I do not see the logic behind
opening severe, known, current vulnerabilities to protect against potential,
unknown, future threats…</p>
</li>
<li>
<p><em>Race conditions</em>: if this refers to the poor software using the <span class="caps">PID</span> to
generate temporary file names, then I have already covered this point.
Otherwise, the flaw above shows how random <span class="caps">PID</span>s are actually more prone to
race conditions than sequential ones.</p>
</li>
<li>
<p><em>OpenBSD already uses it</em>: this is indeed a good explanation of the
fashionable aspect of this measure, but it has nothing to do with security.</p>
</li>
</ul>
<p><span class="lb-small"><a href="#xkcd1739_fixing-problems.png" id="xkcd1739_fixing-problems.png-thumb" title="Click to enlarge"><img alt="XKCD #1739: Fixing problems" src="/images/xkcd1739_fixing-problems.png"/></a></span></p>
<h3 id="the-origin"><a class="toclink" href="#the-origin">The origin</a></h3>
<p><span class="caps">PID</span> randomization was popularized by OpenBSD, which
<a href="http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/kern/kern_fork.c#rev1.8" rel="external">added it as early as 1997</a>.
At that time it pursued two main goals:</p>
<ul>
<li>
<p><em>Protect against <span class="caps">PID</span> prediction vulnerabilities</em>, mostly affecting software
which uses the <span class="caps">PID</span> value to generate temporary file names.
This was a common concern at that time, but today I think it would be quite
rare to encounter production-level software still not using a cleaner method.</p>
</li>
<li>
<p><em>As a general preventive measure</em>:
<a href="http://www.openbsd.org/papers/dev-sw-hostile-env.html" rel="external">“If something can be random, make it random.”</a> This meant putting
randomness in several places in the <span class="caps">OS</span> (from the <span class="caps">IP</span> stack to memory
allocation).
While some of the protections resulting from this randomness proved to be
useful and became more common, <span class="caps">PID</span> randomization has had a more troubled history.</p>
</li>
</ul>
<p>As detailed above, <em>the cure may be worse than the disease</em>.
Due to faster <span class="caps">PID</span> reuse, fully random <span class="caps">PID</span>s may enable remotely exploitable flaws,
while sequential <span class="caps">PID</span>s were mainly known to enable local-only exploits.</p>
<p>As a side note, in an ideal world, none of this should cause any issue (yes, I
am talking about that ideal world where software is free of bugs and vulnerabilities).
In fact, these vulnerabilities usually have their root in incorrect usage of the <span class="caps">PID</span>.
<a href="https://en.wikipedia.org/wiki/Process_identifier" rel="external">Wikipedia</a> aptly defines the <span class="caps">PID</span> as a
“<em>number used […] to uniquely identify an active process</em>”.</p>
<p>Therefore:</p>
<ul>
<li>
<p><em>A <span class="caps">PID</span> is not designed to build temporary file names</em>.</p>
<p>Temporary files are usually created in a <strong>shared place</strong>, and that means
<em><a href="https://security.stackexchange.com/questions/34397/how-can-an-attacker-use-a-fake-temp-file-to-compromise-a-program" rel="external">danger</a></em>!
Because of this, temporary files must be created using dedicated functions
which ensure that the three required actions (checking that the file
does not already exist, creating it, and setting restricted access permissions)
are done in an atomic (uninterruptible) way.
The C language provides <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/mkstemp.html" rel="external"><code>mkstemp()</code></a> and <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/tmpfile.html" rel="external"><code>tmpfile()</code></a>, and most
Unix environments offer a <a href="http://linux.die.net/man/1/mktemp" rel="external">mktemp</a> command to be used by shell scripts, etc.</p>
</li>
<li>
<p><em>A <span class="caps">PID</span> is not designed to seed a random number generator or generate session <span class="caps">ID</span> or cookies</em>.</p>
<p>Here again, you must refer to your language or environment documentation to
find a proper entropy source. On <span class="caps">UNIX</span> systems the <code>/dev/urandom</code> device file
exists for this purpose.</p>
</li>
<li>
<p>It is not by accident that Wikipedia’s definition specifies an <em>active process</em>.</p>
<p>In some languages, like C, you retain ownership of the child process’ <span class="caps">PID</span>
until you <code>wait</code> for it, but this is not true for all languages (for
instance in shell scripts…) and is never true for processes which are not
your children.
In these cases the <span class="caps">PID</span> is just a shared resource, and you should remember
that this “<strong>shared</strong>” notion implies “<strong><a href="https://stackoverflow.com/questions/9152979/check-if-process-exists-given-its-pid/9153003#9153003" rel="external">danger</a></strong>”, so you must
ensure that you take proper care and use the right functions designed to
match your situation and needs.</p>
</li>
</ul>
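<p>The three points above can be illustrated in C. This is only a minimal sketch with error handling reduced to the bare minimum, not production-ready code:</p>

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Temporary files: mkstemp() atomically checks, creates and opens the
 * file with restricted permissions; no PID involved. The trailing
 * "XXXXXX" of the template is replaced in place. */
int safe_temp_file(char *path_template)    /* e.g. "/tmp/app.XXXXXX" */
{
    return mkstemp(path_template);         /* open fd, or -1 on error */
}

/* Entropy: read random bytes from /dev/urandom instead of deriving
 * them from the PID or the current time. */
int random_bytes(unsigned char *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return (n == (ssize_t)len) ? 0 : -1;
}

/* Active processes: a child's PID belongs to us only until we wait()
 * for it; after that the kernel may hand the same PID to an unrelated
 * process, so the stored value must no longer be trusted. */
int run_child_and_reap(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        _exit(42);                         /* child: exit immediately */
    int status = 0;
    waitpid(pid, &status, 0);              /* reap: pid is now recyclable */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

<p>Shell scripts get the same guarantees from the <code>mktemp</code> command and by reading <code>/dev/urandom</code> directly.</p>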
<h3 id="operating-systems-positons"><a class="toclink" href="#operating-systems-positons">Operating systems positions</a></h3>
<h4 id="linux"><a class="toclink" href="#linux">Linux</a></h4>
<p>The mainstream Linux kernel never implemented <span class="caps">PID</span> randomization; however, this
feature was provided for several years through a security-oriented
third-party patch, whose maintainers finally decided to abandon it.</p>
<p>Around 2000-2001, several people tried to implement <span class="caps">PID</span> randomization for
the Linux kernel (examples can be found <a href="http://lkml.iu.edu/hypermail/linux/kernel/0001.1/0400.html" rel="external">here</a> and <a href="http://www.vanheusden.com/Linux/sp/" rel="external">there</a>); however,
none of these patches were accepted by the kernel development team, who rejected
them mostly as <em>“security through obscurity”</em>.</p>
<p>However,
<a href="http://www.vanheusden.com/Linux/rnd_pid_faq.php" rel="external">since randomness may actually increase the global security posture of the <span class="caps">OS</span></a>
and prevent some attacks, these kernel modifications finally reached their
audience through a third-party project: <a href="https://grsecurity.net" rel="external">grsecurity</a>.</p>
<p>This project started in 2001, bringing several new and advanced security
features to the Linux kernel. It allowed enabling or disabling randomized <span class="caps">PID</span>s using
a specific <code>sysctl</code> parameter: <code>kernel.grsecurity.rand_pids</code>. However, in
<a href="https://grsecurity.net/news.php#grsec2110" rel="external">late 2006</a> (I guess: I hate news threads mentioning dates with no
year!) they finally decided to drop the randomized <span class="caps">PID</span> functionality:</p>
<blockquote>
<p>grsecurity 2.1.10 was released today for Linux 2.4.34 and 2.6.19.2.
Changes in this release include:</p>
<ul>
<li>Removal of randomized PIDs feature, since it provides no useful
additional security and wastes memory with the 2.6 kernel’s pid bitmap</li>
</ul>
</blockquote>
<h4 id="openbsd"><a class="toclink" href="#openbsd">OpenBSD</a></h4>
<p>OpenBSD having initiated the randomized <span class="caps">PID</span> functionality, it is still present for
historical reasons but has no real security purpose nowadays. It is up to the
applications themselves to ensure they correctly handle fast <span class="caps">PID</span> reuse.</p>
<p>OpenBSD’s aim is to encourage good development practices and thorough code
security auditing. That is why they consider that
<a href="https://www.mail-archive.com/misc%40openbsd.org/msg138443.html" rel="external">it is not the responsibility of the <span class="caps">OS</span> to protect its users against flawed applications</a>.
On the contrary, an application flaw should be detected as soon as possible and
corrected in the application, instead of remaining hidden by the <span class="caps">OS</span>
(<em>”<a href="http://www.openbsd.org/papers/dev-sw-hostile-env.html" rel="external">The sooner we can break it, the sooner we can fix</a>”</em>).</p>
<p>As a side note, while such an assertion is justified for the base <span class="caps">OS</span>, which is
under the direct control of the OpenBSD team, it becomes more debatable for
third-party software:</p>
<ul>
<li>which <a href="http://www.openbsd.org/faq/faq15.html#Intro" rel="external">does not go through the same audit as the base <span class="caps">OS</span></a>,</li>
<li>for which, while OpenBSD
<a href="http://www.openbsd.org/faq/faq15.html#Ports" rel="external">provides and recommends the use of binary packages over ports</a>,
no binary updates are provided between <span class="caps">OS</span> releases (every 6 months),<sup id="fnref-mtier"><a class="footnote-ref" href="#fn-mtier">1</a></sup></li>
<li>whose versions <a href="http://www.openbsd.org/faq/faq15.html#Latest" rel="external">may be outdated</a>,
either as a deliberate choice of the OpenBSD team or as a consequence of
its limited resources.</li>
</ul>
<p>However, in this context the OpenBSD team makes the assumption that, for
correctly developed and audited software, the <span class="caps">PID</span> generation algorithm chosen
by the <span class="caps">OS</span> has no impact (neither on stability nor on security) on the software’s
behavior.
If some software is vulnerable to an attack taking advantage of <span class="caps">PID</span>s being
reused, then
<a href="https://www.mail-archive.com/misc%40openbsd.org/msg138442.html" rel="external">it’s up to the software to be corrected, and not to the <span class="caps">OS</span> to ensure that <span class="caps">PID</span>s are not reused too quickly</a><sup id="fnref-pidtable"><a class="footnote-ref" href="#fn-pidtable">2</a></sup>.</p>
<h4 id="freebsd"><a class="toclink" href="#freebsd">FreeBSD</a></h4>
<p>FreeBSD provides a <code>sysctl</code> parameter allowing the administrator to tune the
<span class="caps">PID</span> generation algorithm from sequential to fully random. It is sequential by default.</p>
<p>FreeBSD <a href="https://svnweb.freebsd.org/base/stable/4/sys/kern/kern_fork.c#rev53842" rel="external">implemented random PIDs in 1999</a> (FreeBSD 4), using OpenBSD as
a reference, and <a href="https://svnweb.freebsd.org/base/stable/4/sys/kern/kern_fork.c#rev53842" rel="external">improved the design</a> to let the administrator strike a balance between
the potential issues caused by sequential <span class="caps">PID</span>s (mostly <span class="caps">PID</span> prediction) and the potential
issues caused by <span class="caps">PID</span> randomization (<span class="caps">PID</span> reuse and resource consumption).</p>
<p>In fact, FreeBSD’s design seems quite original, since it is not actually the <span class="caps">PID</span>
itself which is random but the <span class="caps">PID</span> increment, a random value taken between 1
and the <code>kern.randompid</code> parameter:</p>
<ul>
<li>If <span class="caps">PID</span> randomization is disabled, the increment will always be 1;</li>
<li>At maximum, this parameter can be set to <code>PID_MAX</code><sup id="fnref-pid_max"><a class="footnote-ref" href="#fn-pid_max">3</a></sup>, in which case
the next <span class="caps">PID</span> will be chosen fully at random.</li>
</ul>
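<p>My understanding of this scheme can be sketched as follows. This is a simplified model for illustration only (the real kernel code also skips the reserved range and PIDs still in use), and the function name is mine:</p>

```c
#include <stdlib.h>

#define PID_MAX 99999  /* highest PID value on FreeBSD */

/* Simplified model of FreeBSD's PID allocation: the next PID is the
 * previous one plus a random increment between 1 and `randompid`.
 * With randompid == 1 the increment is always 1, i.e. plain
 * sequential PIDs. */
int next_pid(int last_pid, int randompid)
{
    int incr = 1 + (rand() % randompid);
    return (last_pid + incr) % (PID_MAX + 1);  /* naive wrap-around */
}
```

<p>When <code>kern.randompid</code> is set to its maximum, the increment can span the whole <span class="caps">PID</span> space, which amounts to picking the next <span class="caps">PID</span> fully at random.</p>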
<p>By default, <span class="caps">PID</span> randomization is disabled (PIDs are generated sequentially).
To take effect, the <code>kern.randompid</code> parameter must be set to a value greater than 100.</p>
<p>If sequential <span class="caps">PID</span>s remain a concern, I would personally recommend setting this
parameter to a low value, like a few hundred: this should be sufficient to
limit trivial <span class="caps">PID</span> prediction issues while avoiding the nastier issues caused by
<span class="caps">PID</span> reuse.</p>
<hr/>
<p class="footnote">Article based on a <a href="https://security.stackexchange.com/q/88692/32746#89961" rel="external">StackExchange answer</a>.</p>
<div class="footnote">
<hr/>
<ol>
<li id="fn-mtier">
<p>A third-party commercial company, <a href="https://stable.mtier.org/" rel="external">M:Tier</a>, provides its own
update system to reduce this issue. <a class="footnote-backref" href="#fnref-mtier" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id="fn-pidtable">
<p>For the curious, OpenBSD added in 2013 a hardcoded table storing the 100
most recently freed PIDs, in order to limit reliability issues on lightly loaded systems.
However, <a href="https://www.mail-archive.com/misc%40openbsd.org/msg138442.html" rel="external">this does not constitute a security measure</a>, since it quickly
becomes ineffective under higher loads, whether this load is caused by genuine
activity or by an attack. <a class="footnote-backref" href="#fnref-pidtable" title="Jump back to footnote 2 in the text">↩</a></p>
</li>
<li id="fn-pid_max">
<p>For the purists, it is actually <code>PID_MAX - 100</code>, to cover only the non-reserved <span class="caps">PID</span> range; it can also be set using the special value <code>-1</code>. Out-of-range values are corrected immediately when setting the <code>sysctl</code> parameter, so there is no risk of causing any damage here anyway. <a class="footnote-backref" href="#fnref-pid_max" title="Jump back to footnote 3 in the text">↩</a></p>
</li>
</ol>
</div>