<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Posts on Ramblings from Jessie</title>
        <link>https://blog.jessfraz.com/post/</link>
        <description>Recent content in Posts on Ramblings from Jessie</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <lastBuildDate>Fri, 22 Jan 2021 12:17:58 -0700</lastBuildDate>
            
            <atom:link href="https://blog.jessfraz.com/post/index.xml" rel="self" type="application/rss+xml" />
                
                
            <item>
                <title>DUM-E and U</title>
                <link>https://blog.jessfraz.com/post/dum-e-and-u/</link>
                <pubDate>Fri, 22 Jan 2021 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/dum-e-and-u/</guid>
                    <description>

&lt;p&gt;DUM-E (“dummy”) and U (“you”) are the names of the robot arms in the Iron Man movies. After watching these movies for the n-teenth time, I have a strong urge to also have robotic arms in a workshop like Tony Stark. You can see the value of the robots clearly throughout the movies. The robots allow Tony to produce suits more quickly, help test the suits, and provide periodic comedic relief. At one point, DUM-E even saves Tony’s life. As a bit of a thought experiment, I considered what it would take to get the same functionality in reality. What this ends up leading to is a configuration management system for manufacturing, much like a build system. This post is going to outline that a bit!&lt;/p&gt;

&lt;p&gt;The most popular open-source framework for building robots is &lt;a href=&#34;https://www.ros.org/&#34;&gt;ROS (Robot Operating System)&lt;/a&gt;. You can add different components like cameras or sensors and program all the functionality you need for your specific use case. The underlying infrastructure works by passing messages through a pub/sub system. &lt;a href=&#34;https://www.elementaryrobotics.com/&#34;&gt;Elementary Robotics&lt;/a&gt; created their own OS called &lt;a href=&#34;https://github.com/elementary-robotics/atom&#34;&gt;atom&lt;/a&gt;. It’s pretty cool: it uses Redis for the messaging layer and &lt;a href=&#34;https://atomdocs.io/tutorials.html#camera-element-tutorial&#34;&gt;docker&lt;/a&gt; for packaging and defining the individual components. Need a camera on your robot? Include the camera container in your atom OS config file. You can then pipe the messages from the camera into machine learning in another container. It’s important to know the basics of these frameworks before getting into how we would build DUM-E and U.&lt;/p&gt;
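&lt;p&gt;To make the pub/sub idea concrete, here is a toy in-process sketch of the pattern ROS and atom are built on. This is not the actual API of either framework; the Bus class and topic names here are made up for illustration.&lt;/p&gt;

```python
# Toy in-process pub/sub bus illustrating the message-passing pattern
# that ROS and atom are built on. NOT their real API, just the idea:
# elements publish on named topics, other elements subscribe to them.
from collections import defaultdict


class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a handler to be called for every message on a topic.
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for handler in self.subscribers[topic]:
            handler(message)


bus = Bus()
frames = []

# A "camera element" publishes frames; an "ML element" consumes them.
bus.subscribe("camera/frames", lambda frame: frames.append(frame))
bus.publish("camera/frames", {"pixels": [0, 1, 2]})

print(frames)
```

&lt;p&gt;In a real system, the bus would be Redis or DDS rather than an in-process dictionary, but the shape of the data flow is the same.&lt;/p&gt;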

&lt;p&gt;Let’s dive in. The end goal here is to be as productive as Tony Stark at building things.&lt;/p&gt;

&lt;h2 id=&#34;fire-extinguisher-robot&#34;&gt;Fire extinguisher robot&lt;/h2&gt;

&lt;p&gt;One of my favorite scenes with DUM-E is when Tony is testing the suits and it’s DUM-E’s job to blast him with a fire extinguisher when he is on fire. For comic relief in the movie, DUM-E messes this up a bunch and blasts Tony when he’s not on fire.&lt;/p&gt;

&lt;p&gt;Let’s break this down, starting with a robot that will shoot a fire extinguisher at any fire. First, you would need the robotic arm base; maybe you build your own, or maybe it’s from ABB, KUKA, FANUC, or any other robot arm maker. Let’s assume you have some sort of robotic arm with an SDK/API you can program. You also need a fire extinguisher. Since we are hackers, we will just duct tape this to the robot arm and put a trigger on the switch to fire it programmatically. Next, we need a camera. Let’s also duct tape this and all the wires to the robot. We need to know if something in our proximity is on fire and where it is, so we will need some code to determine if something is on fire. You could likely train a machine learning model to do this. When the ML model identifies something as on fire, we need to calculate where it is relative to the camera that identified it and the fire extinguisher we duct-taped to the robot. This is all doable and pretty much depends on how well we trained our model.&lt;/p&gt;
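&lt;p&gt;The loop above could be sketched roughly like this. Everything here is a hypothetical stand-in: detect_fire() fakes the ML model, and the camera-to-nozzle offset is a made-up calibration value, not a real SDK call.&lt;/p&gt;

```python
# Sketch of the fire-extinguisher control loop: camera frame in,
# aim target out. detect_fire() is a stand-in for a trained model.
def detect_fire(frame):
    # Stand-in for an ML model: returns (x, y) of a fire, or None.
    return frame.get("fire_at")


def aim_offset(fire_xy, camera_to_nozzle_xy):
    # Translate the fire position from camera coordinates into
    # coordinates relative to the duct-taped extinguisher nozzle.
    dx = fire_xy[0] - camera_to_nozzle_xy[0]
    dy = fire_xy[1] - camera_to_nozzle_xy[1]
    return (dx, dy)


def control_step(frame, camera_to_nozzle_xy=(0.5, 0.0)):
    fire = detect_fire(frame)
    if fire is None:
        return None  # nothing burning, hold fire
    return aim_offset(fire, camera_to_nozzle_xy)  # point here, then blast


print(control_step({"fire_at": (1.5, 0.5)}))  # aim target, nozzle-relative
print(control_step({}))                       # no fire detected
```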

&lt;p&gt;In the movie, DUM-E is quite bad at identifying fire. It is &lt;em&gt;just a movie&lt;/em&gt;, but we should consider that it might be hard for the model to differentiate fire from the color of the suit when it’s not on fire. If you recall, Iron Man’s suit is crimson and gold, which could be misidentified as fire if it’s moving in the same pattern a fire might. Tony does fly and move around at very fast speeds. This really comes down to how well Tony trains the model. As long as DUM-E continually learns, which he should, by the time Iron Man has been blasted by mistake a few times, the model should know the difference between the two (on fire, and a suit that looks like fire moving in a weird way). We also get to witness this learning in the movie.&lt;/p&gt;

&lt;h2 id=&#34;lifesaver&#34;&gt;Lifesaver&lt;/h2&gt;

&lt;p&gt;DUM-E, despite his name, is very intelligent. A major scene in the movie is when he saves Tony’s life by passing him the reactor to power the magnet in his chest. The reactor is just out of Tony’s reach as he is dying; DUM-E realizes this and passes it to him. This could be programmed in a few different ways.&lt;/p&gt;

&lt;p&gt;One way would be the equivalent of hard coding this behavior. Maybe Tony trained DUM-E to pass him the reactor. That’s a bit lame and wouldn’t be very useful outside this context. Let’s assume DUM-E was programmed a different way.&lt;/p&gt;

&lt;p&gt;What would be more useful overall is if DUM-E had some programming such that when Tony is reaching for an object just outside his reach, DUM-E knows to pass it to him. Again, this relies on a camera and a very precise machine learning model. Instead of the fire extinguisher, though, we would need a claw to pick up the object and pass it. The machine learning model for this behavior would have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify a human&lt;/li&gt;
&lt;li&gt;Identify the difference between a human in a resting state and a human reaching for something, arms extended&lt;/li&gt;
&lt;li&gt;Identify the object the human is reaching for, by scanning for objects a certain distance from the end of the hand&lt;/li&gt;
&lt;li&gt;We’d also want some code to know when the object is out of reach and the robot should help, versus when the human is fine on their own; we don’t want the robot arm getting in the way when someone actually wants to grab something themselves. So maybe watch the rate of the reach and calculate whether it’s possible for the arm to extend to any objects around where it is headed.&lt;/li&gt;
&lt;/ul&gt;
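&lt;p&gt;The “should the robot help?” check from the list above can be sketched as simple geometry. The distances and the remaining-reach threshold here are made-up illustration values, not measurements from any real perception system.&lt;/p&gt;

```python
# Decide whether an object is beyond the human's remaining reach,
# in which case the robot should pick it up and pass it over.
import math


def should_assist(hand_xy, object_xy, remaining_reach):
    # If the object sits farther from the hand than the arm can
    # still extend, the human cannot get it on their own.
    return math.dist(hand_xy, object_xy) > remaining_reach


# Object 0.9 m from the hand, but the human can only extend 0.3 m more:
print(should_assist((0.0, 0.0), (0.9, 0.0), remaining_reach=0.3))
# Object well within reach, robot stays out of the way:
print(should_assist((0.0, 0.0), (0.2, 0.0), remaining_reach=0.3))
```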

&lt;p&gt;This should all be possible. For bonus points let’s make it even more useful. Tony uses his robots to help him build things in his workshop and at times he asks them to pass him tools. Let’s add a microphone component to the robot and a model to identify when I am asking for an object. Now the robot needs to correctly identify objects based on a name, and let’s hope it parses what I said correctly in the first place. We could also help the robot identify objects, by using the camera to identify if I pointed to a specific object when I asked for it. This would be super helpful and like having another set of arms around.&lt;/p&gt;

&lt;h2 id=&#34;assembling-the-suits&#34;&gt;Assembling the suits&lt;/h2&gt;

&lt;p&gt;Both DUM-E and U help Stark assemble the Iron Man suits. To do this, the robots need to know the final configuration of the suits when put together. They also need to know where on Tony’s body they need to attach the suit. So we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A camera to identify Tony and where to place the parts of the suit; we need to identify the parts of the suit as well&lt;/li&gt;
&lt;li&gt;A claw with the tools to do the final welding and assembly of the suit&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;hardware-mode&#34;&gt;Hardware mode&lt;/h2&gt;

&lt;p&gt;After building a few suits, Tony’s workshop is shown in less of an “I am actively building things” configuration. You can see the floor is clearer and there are fewer tools and materials strewn about. It’s basically like someone cleaned up and things have been at rest for a while. When Tony needs to build the reactor to create the element to power the suit, he tells the robots, “We are going back into hardware mode.” This had me thinking: wouldn’t it be cool if there were different configurations of factory floor layouts that could be named and switched to on a whim? How would we do this?&lt;/p&gt;

&lt;p&gt;Up until now, we’ve programmed all the robots to do what we wanted with code. Assuming we used ROS or atom, we would have some configuration files and code lying around in a repo somewhere. Let’s assume a repo per robot or a repo per behavior of the robot; either way, we have a single place where the code that determines the behavior of the robots is defined. What we need on top of this is a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A programmatic map of the factory floor with coordinates so we can assign robots to specific coordinates; we could use geocoding or something else for this, like whatever powers your Roomba&lt;/li&gt;
&lt;li&gt;Each robot needs a camera and a tracking mechanism for knowing how to get to its defined coordinates for this workflow (if it is meant to be stationary); if not, the robot might already have a task of moving something from one place to another, more on that later&lt;/li&gt;
&lt;li&gt;A configuration to specify:

&lt;ul&gt;
&lt;li&gt;What robot or machine is being used for this step&lt;/li&gt;
&lt;li&gt;Where a robot or machine needs to go&lt;/li&gt;
&lt;li&gt;The code that needs to be loaded into the robot for its tasks now, or if the code is already loaded into the robot, the robot needs to know what code to execute for this configuration&lt;/li&gt;
&lt;li&gt;For example: is this code for analyzing an object to make sure it is up to quality? Is this code to pass objects? Should it now go into laser cutting mode?&lt;/li&gt;
&lt;li&gt;Any artifacts the robots need to work with&lt;/li&gt;
&lt;li&gt;For example, say the first step in the configuration file was a Desktop Metal printer printing a certain STL file. Much like a CI pipeline, our artifact would be the finished 3D-printed object itself.&lt;/li&gt;
&lt;li&gt;The next robot in the pipeline might be a robotic arm, assigned to coordinates between the printer and the assembly line. This next configuration in the file would know it needs to take that artifact (since we would define it) and pass it to the assembly line&lt;/li&gt;
&lt;li&gt;Then the rest of the configuration file could define the assembly line&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now we can have configuration files for several different assembly processes. If we want to start building something different we just load the new file and the robots would update their code. So when Tony says, “we are going back to hardware mode” we can think of this as him telling the system to load the new file.&lt;/p&gt;

&lt;h3 id=&#34;sample-file&#34;&gt;Sample file&lt;/h3&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;name: hardware-mode
steps:
  - machine: desktop-metal-1
    runs: |
      my-super-cool-stl-file.stl
    artifact: part-hook
  - machine: dum-e
    location: near-desktop-metal-1 # or maybe actual coordinates; it would be nice if there were shortcuts that translated to those
    runs: |
      part-hook | assembly-line # code the dum-e robot needs to execute, or maybe a pointer to the repo where the code is stored
&lt;/code&gt;&lt;/pre&gt;
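&lt;p&gt;As a rough sketch of how a scheduler could execute a file like this, here the config is mirrored as Python dicts (a real version would parse the YAML) and artifacts are threaded from one step to the next, CI-pipeline style. The machines are logged stand-ins, not real robot APIs.&lt;/p&gt;

```python
# Walk a "hardware-mode" style config, dispatching each step to its
# machine and recording artifacts that downstream steps can consume.
config = {
    "name": "hardware-mode",
    "steps": [
        {"machine": "desktop-metal-1",
         "runs": "my-super-cool-stl-file.stl",
         "artifact": "part-hook"},
        {"machine": "dum-e",
         "location": "near-desktop-metal-1",
         "runs": "part-hook | assembly-line"},
    ],
}


def run_pipeline(cfg):
    artifacts = []  # outputs produced so far, like CI build artifacts
    log = []
    for step in cfg["steps"]:
        # In reality this would push code/tasks to the machine itself.
        log.append(f"{step['machine']}: run {step['runs']}")
        if "artifact" in step:
            artifacts.append(step["artifact"])
    return log, artifacts


log, artifacts = run_pipeline(config)
print(artifacts)
```

&lt;p&gt;Loading a different named config would simply mean handing run_pipeline a different file, which is the “we are going back to hardware mode” moment expressed in code.&lt;/p&gt;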

&lt;p&gt;The cool thing about this setup is that now our entire factory is configured in code. We can roll back by reverting a commit, or we can add more functionality by modifying the file. We also get the entire history of the factory setup tracked for free. Possibly CAD programs could help generate these files. It is the equivalent of a build pipeline but for manufacturing; I guess it could be considered a physical build pipeline.&lt;/p&gt;

&lt;p&gt;Overall, this was a fun thought experiment. I can only hope to get a few robots and try to hack a real pipeline together one day. I do think becoming as productive as Tony Stark could be possible; I just need the funds and time to hook it all together. And, of course, something to build!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Battery Day</title>
                <link>https://blog.jessfraz.com/post/battery-day/</link>
                <pubDate>Tue, 29 Sep 2020 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/battery-day/</guid>
                    <description>

&lt;p&gt;Tesla had its first Battery Day on September 22nd, 2020&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. What a fantastic world
we live in that we can witness the first Apple-like keynote for batteries.
Batteries are a part of our everyday life; without them, the world would be
a much different place. Your cellphone, flashlight, tablet, laptops, drones,
cars, and other devices would not be portable and operational without batteries.&lt;/p&gt;

&lt;p&gt;At the heart of it, batteries store chemical energy and convert it into
electrical energy. The chemical reaction in a battery involves the flow of
electrons from one electrode to another. When a battery is discharging,
electrons flow from the electrode known as the anode, or negative electrode, to
the electrode known as the cathode, or positive electrode. This flow of
electrons provides an electric current that can be used to power devices.
Electrons have a negative charge; therefore, as the flow of negative electrons
moves from one electrode to another, an electrolyte is used to balance the
charge by being the route for charge-balancing positive ions to flow.&lt;/p&gt;

&lt;p&gt;Let’s break this process down a bit and uncover the chemical reactions at play
within batteries. To have an electrical current, we need a flow of electrons.
Where do those electrons come from?&lt;/p&gt;

&lt;p&gt;Electrons in the anode are produced by a chemical reaction between the anode, or
negative electrode, and the electrolyte. Simultaneously, another chemical
reaction occurs in the cathode, or positive electrode, enabling it to accept
electrons. Through these chemical reactions, a flow of electrons is created,
resulting in an electrical current.&lt;/p&gt;

&lt;p&gt;A chemical reaction that involves the exchange of electrons is known as
a reduction-oxidation reaction, or redox reaction.  Reduction refers to a gain
of electrons. Thus, half of this reaction, defined as reduction, occurs at the
cathode because it gains electrons. Oxidation refers to a loss of electrons.
Therefore, half of this reaction, defined as oxidation, occurs at the anode
because it loses electrons to the cathode.  Each of these reactions, reduction
and oxidation, has a particular standard potential. An electrochemical cell can
be made up of any two conducting materials whose reactions have different
standard potentials, since the material with the higher standard potential,
which makes up the cathode, will gain electrons from the material with the
lower standard potential, which makes up the anode.&lt;/p&gt;

&lt;p&gt;Batteries can be made up of one or more electrochemical cells, each cell
consisting of one anode, one cathode, and an electrolyte, as described above.
The electrodes and electrolyte are generally made up of different types of
metals or other chemical compounds. Different materials for the electrodes and
electrolyte produce different chemical reactions that affect how the battery
works, how much energy it can store, and its voltage.&lt;/p&gt;

&lt;h3 id=&#34;volts&#34;&gt;Volts&lt;/h3&gt;

&lt;p&gt;The word “volt”
refers to the measure of electric potential. The term came from the Italian
scientist Alessandro Volta, who is credited for inventing the first battery. In
1780, Luigi Galvani, another Italian scientist, observed that the legs of frogs
hanging on iron or brass hooks would twitch when touched with a probe of some
other type of metal. Galvani believed that this was caused by electricity from
within the frogs’ tissues. He called it ‘animal electricity.’&lt;/p&gt;

&lt;p&gt;Volta believed the electric current came from the two different metal types: the
hooks on which the frogs were hanging and the probe&amp;rsquo;s different metal. He
thought the current was merely being transmitted through, not from, the frogs’
tissues. Volta experimented with stacks of silver and zinc layers interspersed
with layers of cloth or paper soaked in saltwater and found an electric current
flowed through a wire applied to both ends of the pile. Volta also found that
the amount of voltage could be increased by using different metals in the pile.
This work led to what we know today as the scientific unit of a &amp;ldquo;volt&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;There are two ways to increase a battery&amp;rsquo;s voltage: stack several cells together
or increase the cell&amp;rsquo;s electrochemical potential by choosing different
materials.&lt;/p&gt;

&lt;p&gt;When cells are combined in series, the effect on the battery’s voltage is
additive. Essentially, the force at which the electrons move through the battery
can be seen as the total force as they move from the first cell&amp;rsquo;s anode, through
however many cells the battery contains, to the last cell’s cathode.&lt;/p&gt;

&lt;p&gt;In contrast, when cells are combined in parallel, it increases the battery’s
possible current, which is defined as the total number of electrons flowing
through the cells, but not its voltage.&lt;/p&gt;
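&lt;p&gt;These two rules are easy to see with a quick, illustrative calculation, using a nominal 3.7 V, 3.0 Ah lithium-ion cell (made-up example numbers):&lt;/p&gt;

```python
# Series stacks add voltage; parallel strings add capacity (current),
# per the rules described above.
def pack(cell_voltage, cell_capacity_ah, series, parallel):
    return {
        "voltage": cell_voltage * series,
        "capacity_ah": cell_capacity_ah * parallel,
    }


# A "4s2p" pack: 4 cells in series, 2 such strings in parallel.
print(pack(3.7, 3.0, series=4, parallel=2))
```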

&lt;h3 id=&#34;measuring-electricity&#34;&gt;Measuring electricity&lt;/h3&gt;

&lt;p&gt;When you buy
a light bulb, the box indicates the wattage for the bulb. Watts are
a measurement of power. Watts describe the rate of electricity that is being
used at a specific moment. Therefore, a 60-watt light bulb uses 60 watts of
electricity at any moment while turned on.&lt;/p&gt;

&lt;p&gt;Watt-hours (Wh), on the other hand, are a measurement of energy. Watt-hours
describe the total amount of electricity used over time. You can derive from the
name that watt-hours are a combination of watts, the rate electricity is used,
and hours, the length of time used.  Going back to our example, a 60-watt light
bulb that draws 60 watts of electricity at any moment while turned on uses 60
watt-hours of electricity over one hour.&lt;/p&gt;

&lt;p&gt;Watt-hours will only get you so far, however. If you want to measure the
electricity used by a large appliance or a household, folks tend to use
kilowatt-hours (kWh). A kilowatt is equal to one thousand watts; therefore, one
kilowatt-hour is equal to one thousand watt-hours.&lt;/p&gt;
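&lt;p&gt;The relationships between watts, watt-hours, and kilowatt-hours are simple arithmetic; here they are as a tiny calculation:&lt;/p&gt;

```python
# Watts measure power (rate); watt-hours measure energy (rate x time);
# a kilowatt-hour is a thousand watt-hours.
def watt_hours(watts, hours):
    return watts * hours


def to_kwh(wh):
    return wh / 1000.0


# A 60-watt bulb left on for 5 hours:
wh = watt_hours(60, 5)
print(wh, "Wh =", to_kwh(wh), "kWh")
```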

&lt;p&gt;If you want to measure the output of a power plant or the amount of electricity
used by an entire city, you will use megawatts. A megawatt is one thousand
kilowatts or one million watts. Getting even larger, a gigawatt is one thousand
megawatts, or one million kilowatts, or one billion watts. Gigawatts are where
Tesla’s Gigafactories get their name. In 2018, battery production
at the Gigafactory in Nevada reached 20 gigawatt-hours (GWh) per year&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;alkaline-batteries&#34;&gt;Alkaline batteries&lt;/h3&gt;

&lt;p&gt;Most people are probably familiar with alkaline batteries. These are
the batteries that you typically use to power toys, electronics, flashlights,
etc. The bulk of alkaline batteries produced are single-use, although there are
some rechargeable alkaline batteries in existence. So what makes up an alkaline
battery?&lt;/p&gt;

&lt;p&gt;Alkaline batteries have zinc as their anode and manganese dioxide
(MnO&lt;sub&gt;2&lt;/sub&gt;) as
their cathode. Their name, however, comes from the alkaline solution used as the
electrolyte. The electrolyte is typically potassium hydroxide (KOH), which can
contain a large number of dissolved ions. The more ions the electrolyte solution
can absorb, the longer the redox reaction that drives the battery can keep
going.&lt;/p&gt;

&lt;p&gt;The zinc anode is usually in powdered form. Powder has a greater surface area
for a reaction, which means the cell can quickly release its power. The zinc
anode gives up its electrons to the manganese dioxide cathode, to which carbon,
in the form of graphite, is added to improve its conductivity and help it keep
its shape.&lt;/p&gt;

&lt;p&gt;Alkaline batteries are popular because they have a low self-discharge rate,
giving them a long shelf life, and don’t contain toxic heavy metals like lead or
cadmium. They account for the bulk of batteries that are made today, although
their place at the top will likely soon be challenged by the lithium-ion
batteries in our phones, laptops, cars, and an increasing number of other
gadgets.&lt;/p&gt;

&lt;h3 id=&#34;lithium-ion-batteries&#34;&gt;Lithium-ion batteries&lt;/h3&gt;

&lt;p&gt;Lithium-ion batteries are popular due to their
energy density. Because the energy is dense, your phone can last all day and
still be the small, portable, handheld device we are all familiar with. As you
likely know from the behavior of your phone, lithium-ion batteries are
rechargeable. The battery gets its name from the lithium ions (Li+) involved in
the chemical reactions that make up the battery.&lt;/p&gt;

&lt;p&gt;In a lithium-ion cell, both electrodes, anode and cathode, are made of materials
that can absorb lithium ions. The absorbing action is known as intercalation
when charged ions of an element can be stored inside a material without
significantly disturbing it. The lithium ions are paired to an electron within
the structure of the anode. When the battery discharges, the intercalated
lithium ions are released from the anode and travel through the electrolyte
solution to be intercalated in the cathode.&lt;/p&gt;

&lt;p&gt;A lithium-ion battery starts its life in a state of full discharge: all its
lithium ions are intercalated within the cathode, and its chemistry cannot yet
produce any electricity. Before the battery can be used, it needs to be charged.
As the battery is charged, an oxidation reaction occurs at the cathode, meaning
that it loses some negatively charged electrons. An equal number of positively
charged intercalated lithium ions are dissolved into the electrolyte solution to
maintain the charge balance in the cathode. These travel over to the anode,
where they are intercalated, or absorbed, within what is typically graphite.
This intercalation reaction also deposits electrons into the graphite anode, to
pair with the lithium ion.  There are many other types of batteries, but you
mostly need to understand lithium-ion batteries as context for this article&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&#34;new-technologies&#34;&gt;New technologies&lt;/h2&gt;

&lt;h3 id=&#34;solid-state-batteries&#34;&gt;Solid-state batteries&lt;/h3&gt;

&lt;p&gt;Counter to the liquid or polymer gel
electrolyte found in batteries today, solid-state batteries use a solid
electrolyte and solid electrodes. If we recall from earlier, positive ions flow
through the electrolyte to balance the electrons&amp;rsquo; negative charge. Today,
batteries are quite efficient at transferring positive ions since a liquid
electrolyte is in contact with the electrodes&amp;rsquo; entire surface area. Using
a solid makes this a bit harder. Imagine the difference between dipping a chip
in soup and dipping it into chopped tomatoes. The chip dipped in the soup will
have soup covering more of the chip&amp;rsquo;s surface area than the chopped tomatoes
cover the other chip.&lt;/p&gt;

&lt;p&gt;So why even use a solid electrolyte if it is less efficient? Today’s lithium-ion
batteries typically rely on flammable liquids as the electrolyte. By using
a solid electrolyte, batteries can be less prone to catching fire. Most folks
probably remember Samsung’s Galaxy Note 7, which had the unfortunate side effect
of catching fire&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;. Solid electrolytes provide a much safer alternative.&lt;/p&gt;

&lt;p&gt;Research and experimentation in solid electrolytes typically focus on either
solid polymers at high temperatures or ceramics at room temperature. The
downside of solid polymers is that they need to operate at
temperatures above 220°F (105°C)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;. That is certainly not practical for a handheld
device like a phone or tablet, but could be apt for storing energy to power
a home.&lt;/p&gt;

&lt;p&gt;Quite a few companies are working on using ceramics at room temperature to
create a solid-state battery. Toyota has been talking about theirs for years&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34;&gt;7&lt;/a&gt;&lt;/sup&gt; and
aims to have it completed in 2025&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34;&gt;8&lt;/a&gt;&lt;/sup&gt;. Startups, such as &lt;a href=&#34;https://solidpowerbattery.com/&#34;&gt;Solid Power&lt;/a&gt; and
&lt;a href=&#34;https://ionicmaterials.com/2019/06/a123-systems-and-ionic-materials-advance-all-solid-state-battery-development-using-solid-polymer-electrolyte-with-conventional-lithium-ion-electrodes/&#34;&gt;A123 Systems (with the help of Iconic Materials)&lt;/a&gt;,
aim to do the same.&lt;/p&gt;

&lt;p&gt;A lot of the novel research being done on solid-state batteries is the work of
Jürgen Janek&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:9&#34;&gt;&lt;a href=&#34;#fn:9&#34;&gt;9&lt;/a&gt;&lt;/sup&gt;. Jürgen recently published a benchmark of the performance of
all-solid-state lithium batteries&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:10&#34;&gt;&lt;a href=&#34;#fn:10&#34;&gt;10&lt;/a&gt;&lt;/sup&gt;. Another high-profile battery scientist,
Gerbrand Ceder, published a paper on interface stability in solid-state
batteries&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:11&#34;&gt;&lt;a href=&#34;#fn:11&#34;&gt;11&lt;/a&gt;&lt;/sup&gt;. New and novel research on solid-state batteries is being published
quite frequently. While there are many skeptics of solid-state batteries since
it has yet to be commercially delivered and scaled, I would not dismiss it
entirely from having a seat at the table in the future.&lt;/p&gt;

&lt;h3 id=&#34;nuclear-batteries&#34;&gt;Nuclear batteries&lt;/h3&gt;

&lt;p&gt;Until
now, we have only discussed batteries powered by chemical reactions, such as
those powering flashlights, phones, and other gadgets. Chemical batteries, also
known as galvanic cells, discharge in a given amount of time and either need to
be thrown away or recharged. This raises the question: is there a type of battery
that could last long-term?&lt;/p&gt;

&lt;p&gt;Nuclear batteries, also known as atomic batteries, use the energy of beta
decay and are being researched as batteries that last longer than those
powered by chemical reactions. Batteries powered by beta decay are known as
betavoltaics. Radioactive isotopes used in nuclear batteries have half-lives
ranging from tens to hundreds of years, so their power output remains nearly
constant for a very long time. If nuclear batteries last from tens to hundreds
of years, why are we not using them everywhere today? Doesn’t everyone want
a phone that could last at least ten years without needing to be charged?&lt;/p&gt;

&lt;p&gt;There are a few side effects of nuclear batteries. They cannot be turned off;
electrons are continually being produced, even when they are not needed.
Research is being done into stimulating beta decay&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:12&#34;&gt;&lt;a href=&#34;#fn:12&#34;&gt;12&lt;/a&gt;&lt;/sup&gt;, which would create more
current on-demand, allowing the output to drop to almost nothing when it is
turned off. Another downside is the power density of betavoltaic cells is much
lower than that of chemical batteries. However, it is interesting to note that
betavoltaics were used in the 1970s to power cardiac pacemakers, before being
replaced by cheaper lithium-ion batteries, even though lithium-ion batteries
have a shorter lifetime.&lt;/p&gt;

&lt;p&gt;In 2016, Russian researchers from MISIS presented a prototype betavoltaic
battery based on nickel-63&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:13&#34;&gt;&lt;a href=&#34;#fn:13&#34;&gt;13&lt;/a&gt;&lt;/sup&gt;. A downside of using nickel-63 is that it is not
readily available, making their research hard to commercialize. CityLabs sells
a betavoltaic battery with a 14.4-year half-life that you can buy today starting at
$1,000&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:14&#34;&gt;&lt;a href=&#34;#fn:14&#34;&gt;14&lt;/a&gt;&lt;/sup&gt;, but you would need 1.2 million of them just to get one watt of power.
&lt;a href=&#34;https://ndb.technology/&#34;&gt;NDB&lt;/a&gt; is a startup working on a nano diamond battery that could last for thousands
of years&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:15&#34;&gt;&lt;a href=&#34;#fn:15&#34;&gt;15&lt;/a&gt;&lt;/sup&gt;. &lt;a href=&#34;http://www.upowertech.com/&#34;&gt;UPower&lt;/a&gt; is another startup working on a megawatt-scale atomic
generator.&lt;/p&gt;

&lt;h3 id=&#34;silicon-anode&#34;&gt;Silicon anode&lt;/h3&gt;

&lt;p&gt;Today, the material typically used for the anode is
graphite because it is economical, reliable, and relatively energy-dense,
especially compared to current cathode materials. The limiting factor of
lithium-ion batteries is the amount of lithium that can be stored in the
electrodes. Using silicon as the material for the anode, rather than graphite,
allows around nine times more lithium ions to be held in the anode.&lt;/p&gt;

&lt;p&gt;The ability to store more lithium ions using silicon sounds amazing; why isn’t
everyone doing this? The problem is a silicon anode swells to 3-4 times its
original volume when it absorbs lithium ions. Making the casing bigger doesn’t
circumvent the problem because the expansion causes the silicon to fracture,
causing the battery to fail. It also gums up with a passivation layer, also
known as the solid electrolyte interphase (SEI), formed on electrode surfaces
from the decomposition of electrolytes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“With silicon, the cookie crumbles and gets gooey.” - Elon Musk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As a solution to this problem, many companies use silicon as a fraction of the
anode material. But these materials are expensive and highly engineered.
Examples of this include silicon structured in SiO glass ($6.6 per kWh), silicon
structured in graphite ($10.2 per kWh), and silicon nanowires (&amp;gt;$100 per
kWh)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:16&#34;&gt;&lt;a href=&#34;#fn:16&#34;&gt;16&lt;/a&gt;&lt;/sup&gt;.
&lt;a href=&#34;https://silanano.com/&#34;&gt;Sila Nanotechnologies&lt;/a&gt; is using silicon as their anode material&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:17&#34;&gt;&lt;a href=&#34;#fn:17&#34;&gt;17&lt;/a&gt;&lt;/sup&gt;.
&lt;a href=&#34;https://www.amprius.com/technology/&#34;&gt;Amprius&lt;/a&gt; claims
to use silicon for 100% of the anode material with silicon nanowires, a highly
engineered, expensive material. &lt;a href=&#34;https://www.advano.io/&#34;&gt;Advano&lt;/a&gt;,
&lt;a href=&#34;https://www.enevate.com/&#34;&gt;Enevate&lt;/a&gt;, and &lt;a href=&#34;https://enovix.com/&#34;&gt;Enovix&lt;/a&gt; are startups working
on a silicon solution for the anode material.&lt;/p&gt;

&lt;h3 id=&#34;tesla-s-battery-day&#34;&gt;Tesla’s Battery Day&lt;/h3&gt;

&lt;p&gt;At Tesla’s
Battery Day event, they announced many changes to their battery that encompass
more than just the materials used. Tesla has on staff one of the most renowned
battery scientists, Jeff Dahn. His most recent papers on “A Wide Range of
Testing Results on an Excellent Lithium-Ion Cell Chemistry to be used as
Benchmarks for New Battery Technologies&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:18&#34;&gt;&lt;a href=&#34;#fn:18&#34;&gt;18&lt;/a&gt;&lt;/sup&gt;” and “Is Cobalt Needed in Ni-rich
Positive Electrode Materials for Lithium-Ion Batteries?&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:19&#34;&gt;&lt;a href=&#34;#fn:19&#34;&gt;19&lt;/a&gt;&lt;/sup&gt;” help give some insight
into what Tesla has been working on.&lt;/p&gt;

&lt;p&gt;The battery day announcements increase their vehicles’ range while making the
vehicles more economical; Tesla plans to halve the cost per kilowatt-hour. Most startups&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:20&#34;&gt;&lt;a href=&#34;#fn:20&#34;&gt;20&lt;/a&gt;&lt;/sup&gt; in this
space tend to focus on a single design decision for their products, for example,
the anode material. Tesla, on the other hand, took a very well-rounded approach,
taking into account not only the materials for the cathode and anode but also
the cell design, the factory, and integration with the
vehicle&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:21&#34;&gt;&lt;a href=&#34;#fn:21&#34;&gt;21&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/vert-integration.png&#34; alt=&#34;vert-integration&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: Tesla’s Battery Day Presentation &lt;a href=&#34;https://www.youtube.com/watch?v=l6T9xIeZTds&#34;&gt;https://www.youtube.com/watch?v=l6T9xIeZTds&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s break down each of these improvements.&lt;/p&gt;

&lt;h4 id=&#34;cell-design&#34;&gt;Cell design&lt;/h4&gt;

&lt;p&gt;For Tesla’s batteries,
while discharging, electrons flow out through the tabs to the external circuit,
while the lithium ions flow from the anode to the cathode inside the cell, as
shown below. The tabs allow the cell’s energy to be delivered to an external
load.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cell-flow.png&#34; alt=&#34;cell-flow&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: Tesla’s Battery Day Presentation &lt;a href=&#34;https://www.youtube.com/watch?v=l6T9xIeZTds&#34;&gt;https://www.youtube.com/watch?v=l6T9xIeZTds&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Tesla team sought to increase the cell diameter to 46 millimeters, which
optimizes for vehicle range and cost reduction. However, increasing the cells&amp;rsquo;
size has a negative side effect on supercharging because of thermal issues. To
circumvent these issues, the Tesla team removed the tabs, calling their new
design tabless.&lt;/p&gt;

&lt;p&gt;The tabless design leads to simpler manufacturing, fewer parts, and a five-times
reduction in the electrical path length, from 250 millimeters to 50 millimeters.
The electrical path length is significant because the shorter the distance the
electrons have to travel, the less resistance they encounter, which yields
substantial thermal benefits. Even though the cell is much bigger, the
power-to-weight ratio is better than that of a smaller cell with tabs.&lt;/p&gt;
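&lt;p&gt;A quick back-of-envelope sketch of why the path length matters, assuming simple ohmic behavior (resistance proportional to conductor length, so resistive heating at a given current scales the same way); the 250-millimeter and 50-millimeter figures are the ones quoted above:&lt;/p&gt;

```python
# Ohmic back-of-envelope: for a conductor of fixed cross-section,
# resistance R scales with length L, and resistive heating is I^2 * R,
# so at the same current the heat generated also scales with L.
PATH_WITH_TABS_MM = 250  # electrical path length with a single tab
PATH_TABLESS_MM = 50     # electrical path length in the tabless design

heating_ratio = PATH_WITH_TABS_MM / PATH_TABLESS_MM
print(f"tabless cell generates about {heating_ratio:.0f}x less ohmic heat")
```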

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/tabless.png&#34; alt=&#34;tabless&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: Tesla’s Battery Day Presentation &lt;a href=&#34;https://www.youtube.com/watch?v=l6T9xIeZTds&#34;&gt;https://www.youtube.com/watch?v=l6T9xIeZTds&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s dive into why this new tabless design matters. Instead of calling it
tabless, Tesla could have called it “many tabs” because each of the folded pins
is a tab, as shown in the image above. What is the function of a tab?&lt;/p&gt;

&lt;p&gt;Growing up, my family would always leave sporting events before they ended to
avoid the crowd trying to leave the stadium after the event was over. If we had
stayed to the end of the event, it would take more time for us to exit the
stadium and be very uncomfortable since everyone would be trying to leave
through very few exits at the same time. As people are trying to exit, they get
closer and closer to one another, and the environment becomes very hot and
rowdy. If we think of people as electrons, a stadium with a single exit is
similar to a battery’s behavior with a single tab; electrons are all trying to
leave through the single tab and bumping up against one another until they heat
up. There are multiple tabs in Tesla’s new design, equivalent to a stadium with
lots of exits. Now people, or electrons, can exit quickly while staying cool and
calm.&lt;/p&gt;

&lt;p&gt;There aren’t many details from the presentation on the new tabless design and
its implementation, but it can be attributed to “secret sauce.”&lt;/p&gt;

&lt;p&gt;Manufacturing a cell consists of an electrode process, where the active
materials are coated as films onto foils; the coated foils are then wound in the
winding process. The roll is then assembled into the can, sealed, filled with
electrolyte, and sent to Formation, where the cell is charged for the first
time. If you recall from above, a lithium-ion battery starts its life in
a discharged state. For a battery cell with tabs, manufacturing is much more
complicated: as the cell moves through the assembly line, it has to stop
wherever the tabs are, so you can’t do continuous-motion production. It is also
a lot more error-prone.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It is really a huge pain in the ass to have tabs from a production standpoint.” - Elon Musk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The new batteries are 46 millimeters in diameter by 80 millimeters long,
leading to the name 4680. The first two digits refer to the diameter, and the
last two refer to the length. Previously, an extra zero was added onto the end
of the name, but it was removed since it served no purpose.&lt;/p&gt;

&lt;p&gt;The 4680 batteries have five times more energy with six times the power and
enable a 16% range increase. At the battery pack level, the form factor
improvements alone lead to a 14% reduction in cost per kWh.&lt;/p&gt;

&lt;h4 id=&#34;cell-factory&#34;&gt;Cell factory&lt;/h4&gt;

&lt;p&gt;We
learned above how removing tabs from the battery cells simplifies the
manufacturing process. In an assembly line, you don’t want things to stop and
start; you want them to move continuously. Any time the process stops, it
introduces inefficiency. The Tesla team aims to speed up its process so that one
factory is many times more efficient than a typical battery factory.&lt;/p&gt;

&lt;p&gt;We learned above that the electrode process is where the active materials are
coated as films onto foils. The wet version of the electrode process starts with
mixing: the powders are mixed with either water or a solvent, typically
a solvent for the cathode. The mix then goes into a large coat-and-dry oven,
tens of meters long, where the slurry is coated onto the foil and dried. The
solvent then has to be recovered. Finally, the coated foil is compressed to its
final density. This process is complex and inefficient, especially since humans
need to transport the mix from the mixing step to the ovens. It is also
inefficient because the solvent has to be put in and then recovered.&lt;/p&gt;

&lt;p&gt;One significant change they are making is skipping the solvent step of the
electrode coating&amp;rsquo;s wet process in favor of a dry process. The dry process
transforms the powder directly into film. This technology initially stemmed from
Tesla’s acquisition of Maxwell at the beginning of 2019&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:22&#34;&gt;&lt;a href=&#34;#fn:22&#34;&gt;22&lt;/a&gt;&lt;/sup&gt;. At battery day, Elon
mentioned that since the acquisition, they are now on the 4th revision of the
equipment that turns powder into film. Elon noted, “there is still a lot of work
to do. There is a clear path to success but a ton of work between here and
there.” When this process is scaled up, it results in a ten times reduction in
footprint and a ten times reduction in energy, and a massive decrease in CapEx
investment.&lt;/p&gt;

&lt;p&gt;The manufacturing step known as Formation is where the cell is charged for the
first time, and the quality of the cell is verified. Formation is typically 25%
of the CapEx investment. The Tesla team improved density and cost-effectiveness
by applying their knowledge of charging and discharging from their cars and
Powerwalls. This led to an 86% reduction in Formation CapEx investment per GWh and a 75%
reduction in footprint. For a factory that previously output 150 GWh, this
translates to that same factory outputting 1 TWh with the more efficient
processes. At the battery pack level, this leads to an 18% reduction in cost per
kWh.&lt;/p&gt;

&lt;h4 id=&#34;anode-material&#34;&gt;Anode material&lt;/h4&gt;

&lt;p&gt;Tesla announced they were moving to silicon as their anode
material. Silicon is excellent because it is the most abundant element in the
earth’s crust after oxygen. Rather than creating a highly engineered material
that would be expensive, Tesla will use the raw silicon found in the earth’s
crust and design for it to expand. They will stabilize the silicon&amp;rsquo;s surface
through an elastic, ion-conducting polymer coating and a highly elastic binder
and electrolyte.&lt;/p&gt;

&lt;p&gt;Tesla’s silicon costs $1.20 per kWh, whereas the solutions we covered earlier
cost anywhere from $6 per kWh to upwards of $100 per kWh. Using silicon leads to
a 5% reduction in cost per kWh at the battery pack level and a 20% longer range
for Tesla vehicles.&lt;/p&gt;

&lt;h4 id=&#34;cathode-material&#34;&gt;Cathode material&lt;/h4&gt;

&lt;p&gt;A helpful analogy for understanding the
cathode is to think of the cathode as a bookshelf. In this case, the lithium
ions would be books. The most efficient bookshelf holds the most books while
still being stable enough to retain its structure as the books get loaned out
and returned.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/tesla-cathode.png&#34; alt=&#34;tesla-cathode&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: Tesla’s Battery Day Presentation &lt;a href=&#34;https://www.youtube.com/watch?v=l6T9xIeZTds&#34;&gt;https://www.youtube.com/watch?v=l6T9xIeZTds&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Tesla team aims to increase Nickel in its cathode material since it is the
cheapest and has the highest energy density (as shown above). Cobalt is
typically used as a cathode material because it is very stable. However, the
Tesla team aims to leverage novel coatings and dopants to stabilize Nickel
better and remove Cobalt entirely from their materials. Removing Cobalt leads to
a 15% reduction in the cathode’s cost per kWh.&lt;/p&gt;

&lt;p&gt;The Tesla team made sure to keep in mind both the cost and the availability of
the materials used. With silicon for the anode, availability is not an issue,
since silicon is readily available; the same goes for lithium, which is also
highly accessible. For Nickel, on the other hand, the Tesla team is keeping
total Nickel availability in mind by varying the amount of Nickel used per type
of vehicle.&lt;/p&gt;

&lt;p&gt;The team also simplified the cathode manufacturing process by removing all the
legacy parts. According to the battery day presentation, the cathode
manufacturing process, which is 35% of the cathode cost per kWh, had not had
a fresh look in a long time and was wildly inefficient.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If you take a look at the ‘it’s a small world journey’ of I am a Nickel atom
and what happens to me, it’s crazy, you’re going around the world three times,
there is a moral equivalent of digging the ditch, filling in the ditch, and
digging the ditch again. It’s total madness.” - Elon Musk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A typical cathode process starts with the metal from the mine being turned into
an intermediate material called metal sulfate, which, in turn, is processed
again. The Tesla team removed the intermediate step of turning the metal into
metal sulfate along with a bunch of other unnecessary steps. They also
localized the cathode materials to the US, which decreased the number of miles
required for the materials to travel. This leads to a 66% reduction in CapEx
investment, a 76% reduction in process cost, and zero wastewater. The cathode
material improvements lead to a 12% reduction in cost per kWh at the battery
pack level.&lt;/p&gt;

&lt;h4 id=&#34;cell-vehicle-integration&#34;&gt;Cell vehicle integration&lt;/h4&gt;

&lt;p&gt;In the early days of aircraft, fuel was carried as cargo. Later, fuel tanks
were built into the wings themselves. This was a breakthrough: the wings are
critical to the airplane&amp;rsquo;s function but could now serve another purpose as
well. The fuel tank was no longer cargo but fundamental to the structure of the
aircraft. Tesla intends to do the same for cars.&lt;/p&gt;

&lt;p&gt;By removing the intermediate structure in the battery pack, they can pack the
cells more densely. Instead of having supports and stabilizers in the battery
cells, making up the intermediate structural elements, the battery pack itself
is structural. Typically, Tesla fills the battery packs with a flame retardant.
The new battery packs will be filled with a flame retardant and structural
adhesive, giving it stiffness and stability without intermediate structural
elements. This makes the structure even stiffer than a regular car.&lt;/p&gt;

&lt;p&gt;Because the volumetric efficiency is better, the cells can now be moved closer
to the center of the vehicle, reducing the chance that a side impact contacts
the cells. This also allows the car to maneuver better because the polar moment
of inertia is improved, much like an ice skater can turn faster with her arms
close to her body rather than extended.&lt;/p&gt;

&lt;p&gt;The improvements to the battery pack integration lead to a 10% mass reduction in
the car&amp;rsquo;s body, a 14% range increase, and 370 fewer parts. The smaller,
integrated battery and body also help increase the efficiency of manufacturing.
This leads to a 55% reduction in CapEx investment and a 35% reduction in floor
space. At the battery pack level, the integration improvements lead to a 7%
reduction in cost per kWh.&lt;/p&gt;

&lt;p&gt;The sum of all these improvements, including cell design, factory, materials,
and vehicle integration, achieves the goal of halving the cost per kWh. Cheaper
electric vehicles widen Tesla’s market to new buyers, reducing the number of
gas-powered vehicles on the road.&lt;/p&gt;
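&lt;p&gt;Adding up the pack-level cost-per-kWh reductions quoted throughout this post, the way the Battery Day slides do (treating them as additive), lands right at that goal:&lt;/p&gt;

```python
# Pack-level reductions in cost per kWh quoted above, summed additively
# as in Tesla's Battery Day presentation.
reductions_pct = {
    "cell design (4680 form factor)": 14,
    "cell factory (dry electrode, Formation)": 18,
    "anode material (raw silicon)": 5,
    "cathode material (high Nickel, no Cobalt)": 12,
    "cell vehicle integration (structural pack)": 7,
}

total_pct = sum(reductions_pct.values())
print(f"total reduction in cost per kWh: {total_pct}%")  # 56%, roughly half
```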

&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;

&lt;p&gt;All in all, it is fantastic to see
a technology we all rely on day to day get its time in the spotlight. Although
not mentioned at battery day, if Tesla were to achieve 400 watt-hours per
kilogram, a zero-emissions jet might just be on the horizon. Now that batteries
are vertically integrated into Tesla’s product, you can only imagine that the
software will track more data on battery efficiency, leading to more and more
improvements in the future.&lt;/p&gt;

&lt;p&gt;It is incredible to see Tesla take a fresh look at making the most efficient and
cost-effective batteries. The level of thought and detail put into rethinking
old processes from first principles to make them more efficient is inspiring.
The Tesla team didn’t just look at one angle, but all the angles: cell design,
manufacturing, vehicle integration, and materials. There is a clear “why” for
every decision made that boils down to economics, not just technical gains.
Hopefully, we see another core technology, such as batteries, in the spotlight
soon.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;https://www.tesla.com/2020shareholdermeeting&#34;&gt;https://www.tesla.com/2020shareholdermeeting&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://www.science.org.au/curious/technology-future/batteries&#34;&gt;https://www.science.org.au/curious/technology-future/batteries&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://www.tesla.com/gigafactory&#34;&gt;https://www.tesla.com/gigafactory&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;Some people might, however, be interested in a urine-powered battery: &lt;a href=&#34;https://newatlas.com/urine-battery/42866/&#34;&gt;https://newatlas.com/urine-battery/42866/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;https://time.com/4526350/samsung-galaxy-note-7-recall-problems-overheating-fire/&#34;&gt;https://time.com/4526350/samsung-galaxy-note-7-recall-problems-overheating-fire/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://qz.com/1588236/how-we-get-to-the-next-big-battery-breakthrough/&#34;&gt;https://qz.com/1588236/how-we-get-to-the-next-big-battery-breakthrough/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;&lt;a href=&#34;https://cen.acs.org/articles/95/i46/Solid-state-batteries-inch-way.html&#34;&gt;https://cen.acs.org/articles/95/i46/Solid-state-batteries-inch-way.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:7&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;&lt;a href=&#34;https://www.caranddriver.com/news/a33435923/toyota-solid-state-battery-2025/&#34;&gt;https://www.caranddriver.com/news/a33435923/toyota-solid-state-battery-2025/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:8&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:9&#34;&gt;&lt;a href=&#34;https://iopscience.iop.org/nsearch?terms=J%C3%BCrgen+Janek&amp;amp;nextPage=-1&amp;amp;previousPage=-1&amp;amp;currentPage=1&amp;amp;orderBy=relevance&amp;amp;pageLength=10&amp;amp;searchDatePeriod=anytime&amp;amp;journals=1945-7111&amp;amp;authors=J%C3%BCrgen+Janek&#34;&gt;https://iopscience.iop.org/nsearch?terms=J%C3%BCrgen+Janek&amp;amp;nextPage=-1&amp;amp;previousPage=-1&amp;amp;currentPage=1&amp;amp;orderBy=relevance&amp;amp;pageLength=10&amp;amp;searchDatePeriod=anytime&amp;amp;journals=1945-7111&amp;amp;authors=J%C3%BCrgen+Janek&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:9&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:10&#34;&gt;&lt;a href=&#34;https://www.nature.com/articles/s41560-020-0565-1&#34;&gt;https://www.nature.com/articles/s41560-020-0565-1&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:10&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:11&#34;&gt;&lt;a href=&#34;https://www.nature.com/articles/s41578-019-0157-5&#34;&gt;https://www.nature.com/articles/s41578-019-0157-5&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:11&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:12&#34;&gt;&lt;a href=&#34;https://www.nist.gov/news-events/news/2016/06/physicists-measured-something-new-radioactive-decay-neutrons&#34;&gt;https://www.nist.gov/news-events/news/2016/06/physicists-measured-something-new-radioactive-decay-neutrons&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:12&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:13&#34;&gt;&lt;a href=&#34;https://phys.org/news/2018-06-prototype-nuclear-battery-power.html&#34;&gt;https://phys.org/news/2018-06-prototype-nuclear-battery-power.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:13&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:14&#34;&gt;&lt;a href=&#34;https://citylabs.net/?option=com_wrapper&amp;amp;view=wrapper&amp;amp;Itemid=20&#34;&gt;https://citylabs.net/?option=com_wrapper&amp;amp;view=wrapper&amp;amp;Itemid=20&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:14&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:15&#34;&gt;&lt;a href=&#34;https://techcrunch.com/2020/08/25/self-charging-thousand-year-battery-startup-ndb-aces-key-tests-and-lands-first-beta-customers/&#34;&gt;https://techcrunch.com/2020/08/25/self-charging-thousand-year-battery-startup-ndb-aces-key-tests-and-lands-first-beta-customers/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:15&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:16&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=l6T9xIeZTds&amp;amp;feature=emb_title&#34;&gt;https://www.youtube.com/watch?v=l6T9xIeZTds&amp;amp;feature=emb_title&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:16&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:17&#34;&gt;&lt;a href=&#34;https://silanano.com/wp-content/uploads/2020/09/The-Future-of-Energy-Storage.pdf&#34;&gt;https://silanano.com/wp-content/uploads/2020/09/The-Future-of-Energy-Storage.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:17&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:18&#34;&gt;&lt;a href=&#34;https://iopscience.iop.org/article/10.1149/2.0981913jes/meta&#34;&gt;https://iopscience.iop.org/article/10.1149/2.0981913jes/meta&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:18&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:19&#34;&gt;&lt;a href=&#34;https://iopscience.iop.org/article/10.1149/2.1381902jes/meta&#34;&gt;https://iopscience.iop.org/article/10.1149/2.1381902jes/meta&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:19&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:20&#34;&gt;Except for Sila Nanotechnologies, which seems to be most closely aligned with Tesla’s methodology: &lt;a href=&#34;https://silanano.com/wp-content/uploads/2020/09/The-Future-of-Energy-Storage.pdf&#34;&gt;https://silanano.com/wp-content/uploads/2020/09/The-Future-of-Energy-Storage.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:20&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:21&#34;&gt;Tesla claimed in the presentation there were more aspects they didn’t mention they could improve in the future.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:21&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:22&#34;&gt;&lt;a href=&#34;https://electrek.co/2019/02/04/tesla-acquires-ultracapacitor-battery-manufacturer/&#34;&gt;https://electrek.co/2019/02/04/tesla-acquires-ultracapacitor-battery-manufacturer/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:22&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>The Automated CIO</title>
                <link>https://blog.jessfraz.com/post/the-automated-cio/</link>
                <pubDate>Tue, 08 Sep 2020 06:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-automated-cio/</guid>
                    <description>

&lt;p&gt;I previously wrote a bit about our internal infrastructure in my post on &lt;a href=&#34;https://blog.jessfraz.com/post/the-art-of-automation/&#34;&gt;The
Art of Automation&lt;/a&gt;. This
post is going to go into detail about our automated Chief Infrastructure
Officer (CIO). I joke so much that I automated our CIO that I even named the
repo holding the code&amp;hellip; &lt;a href=&#34;https://github.com/oxidecomputer/cio&#34;&gt;cio&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I took the time this weekend to finally clean up some of this code. Previously,
our infrastructure was held together with bash, popsicle sticks, glue, and some
rust. Now, it is mostly rust and a much more sane architecture to grok. We also
get the freedom of caching all our data in a database that we own so we can
access it even when services are down. Previously, every script or bot called
out to each service&amp;rsquo;s API directly, which can be expensive, slow, and
riddled with rate limits, or worse, downtime.&lt;/p&gt;

&lt;p&gt;Let me give you a diagram of what this looks like now:&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cio-arch.png&#34; alt=&#34;cio-arch.png&#34; /&gt;&lt;/p&gt;

&lt;h2 id=&#34;sending-data-to-the-database&#34;&gt;Sending data to the database&lt;/h2&gt;

&lt;p&gt;At the very bottom of the diagram, you can see where we are using webhooks and cron jobs to pull
data out of various services&amp;rsquo; APIs and send it to the database.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s dive into a few of these because it is not as simple as a pipe from an API
to a database in most cases.&lt;/p&gt;

&lt;h3 id=&#34;applicants&#34;&gt;Applicants&lt;/h3&gt;

&lt;p&gt;Every applicant to Oxide completes our candidate materials. This is a series of
questions about things they&amp;rsquo;ve worked on. Those get submitted with their resume
and other details into a Google Form.&lt;/p&gt;

&lt;p&gt;A cron job parses the spreadsheet from the
Google Form. In doing so, it knows when an application is new, so we can send an
email to the applicant confirming we received it. It will also send an email to
the team that we got a new application.&lt;/p&gt;

&lt;p&gt;The cron job also parses the materials they
submitted, along with their resume, into plain text. Materials can be in the
form of HTML, PDF, doc, docx, zip, and even PDF with zip headers ;). The resume
and each question in the materials are saved in individual database columns,
which makes search and indexing easier when we want to find an application based
on something we remember from their materials or resume.&lt;/p&gt;

&lt;p&gt;When an applicant gets hired or moved into an interview phase, GitHub issues are
opened so we can keep track of their progress through the interview or
onboarding.&lt;/p&gt;

&lt;h3 id=&#34;rfds&#34;&gt;RFDs&lt;/h3&gt;

&lt;p&gt;We wrote about our RFD process on the Oxide blog in &lt;a href=&#34;https://oxide.computer/blog/rfd-1-requests-for-discussion/&#34;&gt;RFD 1 Requests for
Discussion&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Each RFD is written in either markdown or asciidoc. We collect the content for
each RFD and update it in the database along with its equivalent HTML.&lt;/p&gt;

&lt;p&gt;The HTML is used for generating pages in a small website we use for sharing RFDs
with folks external to Oxide. These might be friends of Oxide, engineers whose
expertise and feedback we value, or potential customers and partners.&lt;/p&gt;

&lt;p&gt;Having all the content stored in the database also makes it easier to search
across the content of all the RFDs.&lt;/p&gt;

&lt;p&gt;Those are just two examples of APIs we build on top of and enrich as we move
data into our database.&lt;/p&gt;

&lt;h3 id=&#34;github&#34;&gt;GitHub&lt;/h3&gt;

&lt;p&gt;It&amp;rsquo;s nice to have a cache of certain GitHub API calls for when GitHub is down or
we get rate limited. We store the data from a few GitHub endpoints in our
database as well.&lt;/p&gt;

&lt;h2 id=&#34;utilizing-the-data-in-an-easy-way&#34;&gt;Utilizing the data in an easy way&lt;/h2&gt;

&lt;p&gt;Next, we need a way to share all this data with other bots, scripts, fellow
colleagues, and apps
within the company. This is where the API server comes into play.&lt;/p&gt;

&lt;p&gt;The API server acts as the middle-man between the database and any scripts, bots,
users, and apps. The API is read-only since we get all the data from external
services and APIs.&lt;/p&gt;
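&lt;p&gt;The real system is written in rust on top of dropshot, but the pattern itself is simple enough to sketch. Below is a minimal, hypothetical Python illustration (none of these names come from the cio repo): a cron-job side that upserts records pulled from an external API into a local database, and a read-only lookup that serves from the cache even when the upstream service is down.&lt;/p&gt;

```python
# Minimal sketch of the cache-then-serve pattern described above.
# Hypothetical illustration only; the real cio system is rust + dropshot.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rfds (number INTEGER PRIMARY KEY, title TEXT, html TEXT)")

def sync_rfds(fetched):
    """Cron-job side: upsert records pulled from an external API."""
    db.executemany(
        "INSERT OR REPLACE INTO rfds VALUES (?, ?, ?)",
        [(r["number"], r["title"], r["html"]) for r in fetched],
    )
    db.commit()

def get_rfd(number):
    """API-server side: read-only lookup served from the local cache."""
    row = db.execute(
        "SELECT number, title, html FROM rfds WHERE number = ?", (number,)
    ).fetchone()
    return None if row is None else dict(zip(("number", "title", "html"), row))

# The cron job syncs; readers only ever hit the local cache.
sync_rfds([{"number": 1, "title": "Requests for Discussion", "html": "(rendered HTML)"}])
print(get_rfd(1)["title"])
```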

&lt;p&gt;The API server syncs the database data with Airtable so we can use Airtable as
a front-end for viewing all the data in a specific table at once. This turns out
to be a great use for Airtable because you can also do joins with other tables
in Airtable very easily. It makes for a nice visual experience.&lt;/p&gt;

&lt;p&gt;For example, we can relate an RFD from the RFD table to an item in a different
table related to the roadmap. As folks push changes to their RFDs, the RFD
content will update in Airtable as well.&lt;/p&gt;

&lt;p&gt;All in all, this was pretty fun to build, refactor, build, and refactor again.
It&amp;rsquo;s been something I can pick up and work on when I get a free second and
easily add functionality to when we want to use our data in a specific way.&lt;/p&gt;

&lt;p&gt;For the API server, I got to use our
&lt;a href=&#34;https://github.com/oxidecomputer/dropshot&#34;&gt;dropshot&lt;/a&gt; REST API library for this!
Thanks to &lt;a href=&#34;https://twitter.com/dapsays&#34;&gt;Dave&lt;/a&gt; and
&lt;a href=&#34;https://twitter.com/ahl&#34;&gt;Adam&lt;/a&gt; for writing that :)&lt;/p&gt;

&lt;p&gt;At this point, I can&amp;rsquo;t imagine working at a company without an internal API for
querying everything from Google groups, to applicants, to mailing list
subscribers, to RFDs, and more. That&amp;rsquo;s all for now! I&amp;rsquo;d love to hear about other ideas you might have for internal
infrastructure!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>A Tale of Two 3D Printers (and all additive manufacturing processes)</title>
                <link>https://blog.jessfraz.com/post/a-tale-of-two-3d-printers/</link>
                <pubDate>Sat, 13 Jun 2020 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/a-tale-of-two-3d-printers/</guid>
                    <description>

&lt;p&gt;I have wanted a 3D printer for a very long time. I hope you can tell from my &lt;a href=&#34;https://queue.acm.org/&#34;&gt;ACM
Queue&lt;/a&gt; column that I like to do a lot of research and
I tend to want &lt;em&gt;the best&lt;/em&gt;
thing. I had been keeping my eyes on the 3D printer product space for quite some
time. This article is going to go over the technical details behind 3D printing
as well as my experience with two different products. When I finally decided to
buy a 3D printer, I wanted to try ones that used different additive
manufacturing processes: material extrusion via fused deposition modeling (FDM)
and vat polymerization via stereolithography (SLA)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. While I would have loved to
have gotten a printer for each of the seven different additive manufacturing
processes, I did not. However, I did dig into the details of all the various
additive manufacturing processes and technologies. Let’s dive in!&lt;/p&gt;

&lt;p&gt;If you would prefer to skip the research you can jump down to &lt;a href=&#34;#trying-out-products&#34;&gt;the
review&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;additive-manufacturing-processes&#34;&gt;Additive Manufacturing Processes&lt;/h2&gt;

&lt;p&gt;Popular culture uses the term “3D printing” as a synonym
for additive manufacturing processes. In 2010, the American Society for Testing
and Materials (ASTM) group “ASTM F42 – Additive Manufacturing”, formulated a set
of standards that classify the range of additive manufacturing processes into
seven categories&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;. The processes vary in the material and machine technology used,
which affects their use cases and applications as well as their economics.&lt;/p&gt;

&lt;h3 id=&#34;material-extrusion&#34;&gt;Material extrusion&lt;/h3&gt;

&lt;p&gt;Material extrusion defines a process where an object is built
by melting and extruding a thermoplastic polymer filament in a predetermined
path layer by layer. Imagine if you were building an object and the only
material you could use is a tube of toothpaste. You’d slowly build the walls of
the object by putting layers of toothpaste on top of each other. Material
extrusion is similar.&lt;/p&gt;

&lt;p&gt;Material extrusion devices are the most commonly available and the cheapest
types of 3D printing technology in the world, representing the largest
installed base of 3D printers globally. The most common applications are
electrical housings, form and fit testings, jigs and fixtures, and investment
casting patterns. The technology used for the material extrusion process is
known as fused deposition modeling or FDM&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;fused-deposition-modeling-fdm&#34;&gt;Fused deposition modeling (FDM)&lt;/h4&gt;

&lt;p&gt;FDM,
also known as fused filament fabrication (FFF), works with a range of standard
thermoplastic filaments, such as acrylonitrile butadiene styrene (ABS),
polylactic acid (PLA), polyethylene terephthalate (PET), thermoplastic
polyurethane (TPU), nylon, and their various blends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s break down the FDM process in steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, a spool of thermoplastic filament is loaded into the printer. Once the
nozzle has heated to the correct temperature, the filament is fed through the
extrusion head into the nozzle, where it melts.&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Second, the extrusion head is
connected to a 3-axis system that allows it to move in the X, Y, and
Z dimensions. The melted material is extruded in thin strands and is deposited
layer by layer in predetermined locations, where it cools and solidifies. The
cooling process can be accelerated by using cooling fans attached to the
extrusion head, if the device supports it.&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Third, filling an area requires
multiple passes, similar to coloring with a marker or drawing with toothpaste.
When a layer is complete, the build platform moves down or the extrusion head
moves up, depending on the device, and a new layer is deposited. This process is
repeated until the object is complete.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of this process, FDM objects tend to have visible layer lines, unless
smoothed, and might show inaccuracies around complex features.&lt;/p&gt;
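The layer-by-layer arithmetic behind FDM is simple enough to sketch. Here is a back-of-the-envelope example in Python; the 50 mm object height and 0.2 mm layer height are illustrative assumptions, not specs of any particular printer:

```python
import math

# Rough FDM sketch: the printer deposits one layer of molten filament at
# a time, so the layer count is the object height divided by layer height.
def estimate_fdm_layers(object_height_mm, layer_height_mm):
    """Number of deposited layers needed to reach the object's full height."""
    return math.ceil(object_height_mm / layer_height_mm)

# Example: a 50 mm tall part printed at a typical 0.2 mm layer height.
print(estimate_fdm_layers(50.0, 0.2))  # 250
```

This is also why layer height is the main quality/speed trade-off in FDM: halving the layer height doubles the layer count, and with it the print time.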

&lt;h3 id=&#34;vat-photopolymerization&#34;&gt;Vat photopolymerization&lt;/h3&gt;

&lt;p&gt;Photopolymerization occurs when a photopolymer resin is
exposed to the light of a specific wavelength and undergoes a chemical reaction
to become solid. This is a common approach additive technologies use to build
an object one layer at a time.&lt;/p&gt;

&lt;p&gt;Vat polymerization processes are excellent at producing objects with fine
details and give a smooth surface finish. This makes them ideal for jewelry,
low-run injection molding, dental applications, and medical applications, such
as hearing aids. The main limitation of vat polymerization is the brittleness of
the produced objects. For this reason it is not suitable for mechanical parts&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;stereolithography-sla&#34;&gt;Stereolithography (SLA)&lt;/h4&gt;

&lt;p&gt;Stereolithography was one of the world’s first 3D
printing technologies, invented by Charles Hull in 1984&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;. SLA resin 3D printers use
a laser to cure liquid resin into hardened plastic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s break down the SLA process in steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, a vat or tank is filled with a liquid photopolymer.&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Second, a concentrated
beam of ultraviolet light or a laser is focused onto the surface of the vat or
tank. The beam or laser creates each layer of the desired 3D object by
cross-linking or degrading the polymer at specific locations. This step is
repeated layer by layer until the physical 3D object is complete.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SLA objects have high resolution and accuracy, clear details, and smooth surface
finishes. SLA is also quite versatile across many different use cases, since
photopolymer resins have been formulated with a wide range of optical,
mechanical, and thermal properties to match those of standard, engineering, and
industrial thermoplastics.&lt;/p&gt;

&lt;h4 id=&#34;direct-light-processing-dlp&#34;&gt;Digital light processing (DLP)&lt;/h4&gt;

&lt;p&gt;Digital light
processing is near-identical to SLA, except DLP uses a digital light projector
screen to flash a single image of each layer all at once. Each layer is composed
of square pixels, called voxels, due to the projector being a digital screen.
In a way, it is
almost like an 8-bit ancestor of SLA in the same way that 8-bit drawings have
more defined individual square pixels. Since each layer is exposed all at
once, DLP can have faster print times compared to SLA, which solidifies a layer
in cross sections.&lt;/p&gt;
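Because the projector is a fixed pixel grid, a DLP printer&amp;rsquo;s XY resolution falls straight out of the optics: the build-area width divided by the number of pixel columns. A quick sketch with made-up numbers (the 1920-pixel projector and 120 mm build width are hypothetical, not any specific machine):

```python
# DLP XY resolution sketch: the projector's pixel grid is spread across the
# build area, so each voxel's footprint is build width over pixel columns.
def dlp_voxel_size_mm(build_width_mm, horizontal_pixels):
    return build_width_mm / horizontal_pixels

# A hypothetical 1080p-class projector over a 120 mm wide build area:
print(dlp_voxel_size_mm(120.0, 1920))  # 0.0625 mm per voxel
```

Note the trade-off this implies: projecting the same pixel grid over a larger build area makes each voxel, and therefore the minimum feature size, proportionally larger.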

&lt;h4 id=&#34;continuous-direct-light-processing-cdlp&#34;&gt;Continuous digital light processing (CDLP)&lt;/h4&gt;

&lt;p&gt;Continuous digital light processing, also known as
continuous liquid interface production (CLIP), produces objects in the same way
as DLP. CDLP is called &amp;ldquo;continuous&amp;rdquo; since it relies on the continuous motion of
the build plate on the
Z axis. This results in faster build times because the printer is not required to
stop and separate the part from the build plate after each layer is produced.&lt;/p&gt;

&lt;h3 id=&#34;powder-bed-fusion-pbf&#34;&gt;Powder bed fusion (PBF)&lt;/h3&gt;

&lt;p&gt;Powder bed fusion technologies produce a solid part
using a thermal source that induces fusion, sintering or melting, between the
particles of a plastic or metal powder one layer at a time. Most PBF
technologies have mechanisms for spreading and smoothing thin layers of powder
as a part is constructed, resulting in the final component being encapsulated in
powder after the build is complete. The most common applications are functional
objects, complex ducting (hollow designs), and low-run part production.&lt;/p&gt;

&lt;p&gt;The main variations in PBF technologies come from different energy sources, such
as lasers or electron beams, and the powders used in the process, such as
plastics or metals. Polymer-based PBF technologies allow for innovation in that
there is no need for support structures. This makes creating objects with
complex geometries easier.&lt;/p&gt;

&lt;p&gt;Both metal and plastic PBF objects typically are strong and stiff with
mechanical properties that are comparable, or sometimes even
better, than the bulk material. There is a large range of post-processing
methods available which can give objects a very smooth finish. For this reason,
PBF is often used to manufacture functional metal parts for applications in the
aerospace, automotive, medical, and dental industries.&lt;/p&gt;

&lt;p&gt;The limitations of PBF tend to be surface roughness and shrinkage or distortion
during processing, as well as the challenges that arise from powder handling and
disposal&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;selective-laser-sintering-sls&#34;&gt;Selective laser sintering (SLS)&lt;/h4&gt;

&lt;p&gt;Selective laser sintering is the most
common additive manufacturing technology for industrial applications. The
technology originated in the late 1980s at the University of Texas at Austin&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34;&gt;7&lt;/a&gt;&lt;/sup&gt;.
SLS 3D printers use a high-powered CO&lt;sub&gt;2&lt;/sub&gt; laser to fuse small
particles of polymer powder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s break down the SLS process in steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, a bed is filled with powder.&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Second, a laser sinters, or coalesces, the powdered material to create
a solid structure. This step is repeated, layer by layer, until the object is
complete.&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Finally, the object, still encased in loose powder, is cleaned with brushes
and pressurized air.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Differing from SLA and FDM, the nice thing about SLS is that it does not
require an object to have support structures. This is due to the unfused powder supporting
the part during printing. This makes SLS ideal for objects with complex geometries,
including interior features, undercuts, and negative features. Parts produced with SLS
printing typically have excellent mechanical characteristics, meaning they
are very strong. However, walls thinner than the minimum of roughly 1 mm may
not print properly, and thin walls in large models may warp after cooling
down.&lt;/p&gt;

&lt;p&gt;The most common material for selective laser sintering is polyamide (nylon),
a popular engineering thermoplastic with great mechanical properties. Nylon
is lightweight, strong, and flexible, as well as stable against impact,
chemicals, heat, UV light, water, and dirt. Alumide, a blend of gray aluminum
powder and polyamide, and rubber-like materials can also be used.&lt;/p&gt;

&lt;p&gt;The combination of low cost per part, high productivity, and established
materials make SLS a popular choice among engineers for functional prototyping
and a cost-effective alternative to injection molding for limited-run or bridge
manufacturing.&lt;/p&gt;

&lt;h4 id=&#34;selective-laser-melting-slm-and-direct-metal-laser-sintering-dmls&#34;&gt;Selective laser melting (SLM) and direct metal laser sintering (DMLS)&lt;/h4&gt;

&lt;p&gt;Both selective laser melting and direct metal laser sintering produce
objects via a method similar to SLS. Differing from SLS, SLM and DMLS
are used in the production of metal parts. SLM fully melts the
powder, while DMLS heats the powder to near melting temperatures until it
chemically fuses. DMLS only works with alloys while SLM can use single component
metals, such as aluminum.&lt;/p&gt;

&lt;p&gt;Unlike SLS, SLM and DMLS require support structures to compensate for the high
residual stresses generated during the build process. Support structures help
to limit the possibility of warping and distortion. DMLS is the most well-established metal
additive manufacturing process with the largest installed base.&lt;/p&gt;

&lt;h4 id=&#34;electron-beam-melting-ebm&#34;&gt;Electron beam melting (EBM)&lt;/h4&gt;

&lt;p&gt;Electron beam melting uses a high-energy electron beam rather than a laser
to induce fusion between the particles of metal powder. A focused electron beam
scans across a thin layer of powder which causes localized melting and solidification
over a specific cross-sectional area. The nice thing about electron beam systems
is that they produce less
residual stress in objects, meaning there is less need for support structures.
EBM also uses less energy and can produce
layers more quickly than SLM and DMLS. However, the minimum feature size, powder
particle size, layer thickness, and surface finish are typically coarser than
those of SLM and DMLS.
EBM requires the objects to be produced in a vacuum and the process can
only be used with conductive materials&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;multi-jet-fusion-mjf&#34;&gt;Multi jet fusion (MJF)&lt;/h4&gt;

&lt;p&gt;Multi jet fusion
is essentially a combination of the SLS and material jetting technologies.
A carriage with inkjet nozzles, similar to the nozzles used in inkjet printers,
passes over the print area, depositing a fusing agent on a thin layer of plastic
powder. Simultaneously, a detailing agent that inhibits sintering is printed
near the edge of the part. A high-power infrared radiation (IR) energy source
then passes over the build bed and sinters the areas where the fusing agent was
dispensed, while leaving the rest of the powder untouched. The process repeats
until the object is complete&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:9&#34;&gt;&lt;a href=&#34;#fn:9&#34;&gt;9&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;material-jetting&#34;&gt;Material jetting&lt;/h3&gt;

&lt;p&gt;Material jetting is most
comparable to the inkjet printing process. Just as an inkjet printer deposits
ink onto a piece of paper, material jetting deposits material onto
the build surface. The layer is then cured or hardened using ultraviolet (UV)
light. This is repeated layer by layer until the object is completed. Since the
material is deposited in drops, the materials are limited to photopolymers,
metals, or wax that cure or harden when exposed to UV light or elevated
temperatures.&lt;/p&gt;

&lt;p&gt;Material jetting is ideal for realistic prototypes, providing excellent details,
high accuracy, and smooth surface finish. Material jetting allows a designer to
print in multiple colors and multiple materials in a single print. This makes it
great for low run injection molds and medical models. Since material jetting
allows multiple materials in a single print, support structures can be printed
from a dissolvable material that is easily removed after building. The main
drawbacks of material jetting technologies are the high cost and the brittle
mechanical properties of the UV activated photopolymers&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:10&#34;&gt;&lt;a href=&#34;#fn:10&#34;&gt;10&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;nanoparticle-jetting-npj&#34;&gt;Nanoparticle jetting (NPJ)&lt;/h4&gt;

&lt;p&gt;Nanoparticle jetting is a process by which a liquid, which contains metal nanoparticles or
support nanoparticles, is loaded into the printer via a cartridge. The liquid is then
jetted, similar to an inkjet printer, onto
a build tray through thousands of nozzles in extremely thin layers of droplets.
High temperatures inside
the building chamber cause the liquid to evaporate leaving behind metal
objects&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:11&#34;&gt;&lt;a href=&#34;#fn:11&#34;&gt;11&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;drop-on-demand-dod&#34;&gt;Drop-on-demand (DOD)&lt;/h4&gt;

&lt;p&gt;Drop-on-demand material jetting printers have two
print jets: one to deposit the build materials, typically a wax-like liquid, and
another for a dissolvable support material. Similar to material extrusion, DOD
printers follow a predetermined path and deposit material in a pointwise fashion
to build layers of an object. These machines also employ
a fly-cutter, a single-point cutting tool, that skims the build area after each
layer to ensure a perfectly flat surface before printing the next layer. DOD
technology is typically used to produce wax-like patterns for lost-wax casting,
used to duplicate a metal sculpture that is cast from an original sculpture, and
mold making applications&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:12&#34;&gt;&lt;a href=&#34;#fn:12&#34;&gt;12&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;binder-jetting&#34;&gt;Binder jetting&lt;/h3&gt;

&lt;p&gt;A binder jetting process, also
referred to as 3DP, uses two materials: a powder and a binder. The binder, which
is typically a liquid, acts as the adhesive for the powder. A print head, much
like that in an inkjet printer, moves horizontally across the x and y axes to
deposit alternating layers of the powder material and the binder. The platform
holding the powder bed, on which the object is printed, lowers as each layer is
printed. This is repeated until the object is complete. Like SLS, the object
does not need support structures since the powder bed acts as support. The
powder materials can be either ceramic-based such as glass or gypsum or metal
such as stainless steel.&lt;/p&gt;

&lt;p&gt;Ceramic-based binder jetting, which uses a ceramic powder as the material, is
best for aesthetic applications that need intricate designs such as
architectural models, packaging, molds for sand casting, and ergonomic
verification. It is not intended for functional prototypes, as the objects
created are quite brittle.&lt;/p&gt;

&lt;p&gt;Metal binder jetting, which uses a metal powder as the material, is well suited
for functional components and is more cost-effective than SLM or DMLS.
However, the downside is that the metal parts have poorer mechanical properties&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:13&#34;&gt;&lt;a href=&#34;#fn:13&#34;&gt;13&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;direct-energy-deposition-ded&#34;&gt;Direct Energy Deposition (DED)&lt;/h3&gt;

&lt;p&gt;Direct energy deposition creates objects by
melting powder material as it is deposited, similar to material extrusion. It is
predominantly used with metal powders or wire and is often referred to as metal
deposition since it is exclusive to metals. DED relies on dense support
structures, which is not ideal for creating a part from scratch; this makes it
best suited for repairing or adding material to existing objects, such as
turbine blades&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:14&#34;&gt;&lt;a href=&#34;#fn:14&#34;&gt;14&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;laser-engineered-net-shape-lens&#34;&gt;Laser engineered net shape (LENS)&lt;/h4&gt;

&lt;p&gt;Laser engineered net shape
utilizes a deposition head which consists of a laser head, powder dispensing
nozzles, and inert gas tubing. The deposition head melts the powder as it is
ejected from the nozzles to build an object layer by layer. The laser creates
a melt pool on the build area and powder is sprayed into the pool, where it is
melted and then solidified.&lt;/p&gt;

&lt;h4 id=&#34;electron-beam-additive-manufacturing-ebam&#34;&gt;Electron beam additive manufacturing (EBAM)&lt;/h4&gt;

&lt;p&gt;Electron beam additive manufacturing uses an electron beam to create metal
objects by welding together metal powder or wire. Differing from LENS,
which uses a laser, electron beams are more efficient and operate under
a vacuum; the process was originally designed for use in space&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:15&#34;&gt;&lt;a href=&#34;#fn:15&#34;&gt;15&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;sheet-lamination&#34;&gt;Sheet lamination&lt;/h3&gt;

&lt;p&gt;Sheet
lamination processes include laminated object manufacturing (LOM) and ultrasonic
additive manufacturing (UAM)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:16&#34;&gt;&lt;a href=&#34;#fn:16&#34;&gt;16&lt;/a&gt;&lt;/sup&gt;. You might be familiar with laminators; I had one
growing up. To laminate a piece of paper, you would place the paper in what is
known as a laminator pouch. The pouch is made up of two types of plastic:
polyethylene terephthalate (PET) on the outer layer and ethylene-vinyl acetate
(EVA) on the inner layer. A heated roller then adheres the two sides of the
pouch together so the paper is fully encased in plastic when it is done.&lt;/p&gt;

&lt;p&gt;Ultrasonic additive manufacturing builds metal objects by fusing and
stacking metal strips, sheets, or ribbons. The layers are bound together using
ultrasonic welding. The process is done on a machine capable of computer
numerical control (CNC) milling the workpiece as the layers are built. The
process requires removal of the unbound metal, often during the welding
process. UAM uses metals such as aluminium, copper, stainless steel, and
titanium. The process can bond different materials, build at a fast rate, and
practically make large objects, all while requiring relatively little energy
since the metal is not melted.&lt;/p&gt;

&lt;h2 id=&#34;trying-out-products&#34;&gt;Trying out products&lt;/h2&gt;

&lt;p&gt;Now that we know a bit more about FDM and SLA, I can tell you about
my experience with products built using these technologies. As a preface, what
I was personally looking for was &lt;em&gt;a product&lt;/em&gt;, meaning something easy to set up,
easy to use, and including a fully integrated experience between the hardware
and the software. I didn’t want something I would have to maintain or debug
since I would rather this &lt;em&gt;just work&lt;/em&gt;. I can understand that other folks
in the market might weigh their own decision matrix differently, but this was
mine.&lt;/p&gt;

&lt;p&gt;For trying out FDM, I decided to get the
&lt;a href=&#34;https://www.makerbot.com/3d-printers/replicator-educators-edition/&#34;&gt;MakerBot Replicator+&lt;/a&gt;.
I chose this
printer mainly because it is a classic. MakerBot has a great community with
&lt;a href=&#34;https://thingiverse.com/&#34;&gt;Thingiverse&lt;/a&gt;,
their site for sharing and modifying 3D models. Interestingly, the
first Makerbot product was open source and they seem to have snubbed the open
source community when they went from an open source model to closed with their
later products&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:17&#34;&gt;&lt;a href=&#34;#fn:17&#34;&gt;17&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;MakerBot has been around since 2009, so I figured that after 11 years of
experience with 3D printing products they should have, hopefully, nailed it. They also have
an &lt;a href=&#34;https://apps.apple.com/us/app/makerbot/id881138579&#34;&gt;iPad app&lt;/a&gt;
that you can use to print any model from Thingiverse. I use
&lt;a href=&#34;https://www.shapr3d.com/&#34;&gt;Shapr3D&lt;/a&gt;
for creating models on my iPad so this seemed super convenient. I could create
my model in Shapr3D, upload it to Thingiverse, and print it, all from my iPad.
The MakerBot also has a camera so you can watch your 3D print happening from the
iPad app.&lt;/p&gt;

&lt;p&gt;For SLA, I got the Form Labs &lt;a href=&#34;https://formlabs.com/3d-printers/form-3&#34;&gt;Form 3&lt;/a&gt;.
The software you use for printing your
models is called &lt;a href=&#34;https://formlabs.com/software/#preform&#34;&gt;PreForm&lt;/a&gt;
and works on Mac or Windows. While the Form 3 does not
have an iPad app, they do have an online dashboard. You can use this for
tracking your print progress. Having an
&lt;a href=&#34;https://formlabs.com/dashboard/&#34;&gt;online dashboard&lt;/a&gt;
is at least a move in the
right direction toward being able to print from my iPad, should they
implement printing from the dashboard in the future. Form Labs, like MakerBot,
was part of the Netflix documentary,
&lt;a href=&#34;https://www.netflix.com/title/80005444&#34;&gt;Print the Legend&lt;/a&gt;. The Form 3 is the third
revision of their product, so I figured all the kinks should, hopefully, be
worked out by now.&lt;/p&gt;

&lt;p&gt;I am going to first go over the setup process with both machines and then we can
compare the quality of the prints and the time each machine took to print the
same models.&lt;/p&gt;

&lt;h3 id=&#34;makerbot-replicator&#34;&gt;MakerBot Replicator+&lt;/h3&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/makerbot.jpg&#34; alt=&#34;makerbot&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Above is a picture of the printer as I was setting it up. I decided to set up
the MakerBot from the iPad app since that would be primarily where I would use
it from. If you have ever bought an IoT device you might be familiar with the
setup workflow of joining the IoT device’s WiFi network on your mobile device
and then configuring the main network to be your WiFi network. This is the same
setup process as the MakerBot.&lt;/p&gt;

&lt;p&gt;The MakerBot iOS app leaves a bit to be desired. It feels clunky, not snappy,
non-native, and slow. Kinda feels like what I would expect an app written by
devs with hardware expertise, rather than software expertise, would feel like.
Setting up the network failed for me numerous times from my iPad so I decided to
try an old Android phone instead. Again, the Android app felt clunky and
non-native. It even asked me to go into my Android settings and grant more
permissions to the app versus just prompting me for permissions&amp;hellip; but finally
I got the printer set up through the Android app. Now my printer showed up in my
Makerbot account on the Android device and I could get through the setup
process.&lt;/p&gt;

&lt;p&gt;Being used to the cloud, I expected my printer would just appear on my iPad app
since I was logged in to my MakerBot account that I tied the printer to on my
Android device. It did not. I had to manually enter the printer’s IP address on
my local network to the iOS MakerBot app to add the printer. That seemed like an
unnecessary step; my MakerBot account should have stored that information and
synced it to my other devices after I completed the setup on my initial device.
Or the MakerBot app should be able to scan my local network for printers, but
I digress. At least now it was working!&lt;/p&gt;

&lt;p&gt;I moved forward with calibrating the device and printing the initial test print.
I then continued to print a AAA &amp;amp; AA battery holder, 9V battery holder, and
spaceship cookie cutter. I printed these same models on the Form 3 as well; we
will go over the comparison later.&lt;/p&gt;

&lt;h3 id=&#34;form-3&#34;&gt;Form 3&lt;/h3&gt;

&lt;p&gt;When the Form 3 arrived, I was
thinking “wow this is complex!” The Replicator+ had come in one box while the
Form 3 came in four separate boxes. I realized after opening them that this was because I got
the printer, the &lt;a href=&#34;https://formlabs.com/wash-cure/&#34;&gt;Form Wash&lt;/a&gt;, and the
&lt;a href=&#34;https://formlabs.com/wash-cure/&#34;&gt;Form Cure&lt;/a&gt; as well as a few different resins.
Below is a picture after I got everything unboxed.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/form3.jpg&#34; alt=&#34;form3&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The Form 3 relies on the built-in touch screen for the setup. This was quite
nice after the experience with the Replicator+. I very easily got it connected
to my WiFi network and was ready to print. Since the PreForm software needs
a desktop computer, I pulled an old Windows machine out of my pantry for this. The
software is easy to use and soon I was printing my first job. The only trouble
I ran into came during the pre-print steps after my first job was uploaded.
The mixer, part of the tank, was getting a bit off track. After searching the
forums, I found this is a common issue&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:18&#34;&gt;&lt;a href=&#34;#fn:18&#34;&gt;18&lt;/a&gt;&lt;/sup&gt; for a first print and after you add some
resin in the tank the mixer will perform better. This turned out to be true so
it was only a minor glitch!&lt;/p&gt;

&lt;p&gt;I then continued to print the AAA &amp;amp; AA battery holder, 9V battery holder, and
spaceship cookie cutter just like I had done with the Replicator+.&lt;/p&gt;

&lt;h3 id=&#34;result-comparison&#34;&gt;Result comparison&lt;/h3&gt;

&lt;h4 id=&#34;aaa-aa-battery-holder-19&#34;&gt;AAA &amp;amp; AA battery holder&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:19&#34;&gt;&lt;a href=&#34;#fn:19&#34;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/h4&gt;

&lt;p&gt;This took 9 hours and 37 minutes on the
Replicator+. It took 3 hours and 9 minutes on the Form 3. The model on the left
below is from the Replicator+ and the model on the right is from the Form 3. As you
can tell, the quality from the Form 3 is far smoother. There are fewer build
lines, it feels like one continuous piece, and there are no stray strands of
filament on the Form 3 model. The only small imperfections in the Form 3 model come from my
own work of poorly removing the scaffolding.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/aaa-battery.jpg&#34; alt=&#34;aaa-battery&#34; /&gt;&lt;/p&gt;

&lt;h4 id=&#34;9v-battery-holder-20&#34;&gt;9V battery holder&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:20&#34;&gt;&lt;a href=&#34;#fn:20&#34;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/h4&gt;

&lt;p&gt;This took 2 hours and 15 minutes on the Replicator+. It took
1 hour and 47 minutes on the Form 3. The model on the left below is from the
Form 3 and the model on the right is from the Replicator+. Again the Form
3 built the smoother model. However, aside from visible lines, the Replicator+
did a fairly good job at this one. The imperfections on the Form 3 model come
from the fact that I am terrible at removing the scaffolding.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/9v-battery.jpg&#34; alt=&#34;9v-battery&#34; /&gt;&lt;/p&gt;

&lt;h4 id=&#34;spaceship-cookie-cutter-21&#34;&gt;Spaceship cookie cutter&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:21&#34;&gt;&lt;a href=&#34;#fn:21&#34;&gt;21&lt;/a&gt;&lt;/sup&gt;&lt;/h4&gt;

&lt;p&gt;This took 1 hour and 48 minutes on the Replicator+. It
took 51 minutes on the Form 3. The model on the left below is from the Form
3 and the model on the right is from the Replicator+. While the Replicator+ did
a good job on this design, the Form 3 print is still smoother.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/spaceship-cookie-cutter.jpg&#34; alt=&#34;spaceship-cookie-cutter&#34; /&gt;&lt;/p&gt;
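Pulling the three timings above together, the Form 3&amp;rsquo;s advantage works out to roughly 3.1x, 1.3x, and 2.1x. A small script to reproduce the arithmetic:

```python
# Print times reported above, in minutes: (Replicator+, Form 3).
times_min = {
    "AAA and AA battery holder": (9 * 60 + 37, 3 * 60 + 9),
    "9V battery holder": (2 * 60 + 15, 1 * 60 + 47),
    "Spaceship cookie cutter": (1 * 60 + 48, 51),
}

# Speedup is simply the FDM time divided by the SLA time for each model.
for model, (fdm, sla) in times_min.items():
    print(f"{model}: {fdm / sla:.1f}x faster on the Form 3")
```

Of course, three models is a small sample, but the trend held consistently across every print I tried.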

&lt;p&gt;One little detail I really love about the Form 3 is that the base of each print,
which gets removed after printing, is labeled with the name of the print, as seen
below. I could imagine this coming in handy if you have a bunch of parts being
printed that look very similar with small differences.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/form3-detail.jpg&#34; alt=&#34;form3-detail&#34; /&gt;&lt;/p&gt;

&lt;p&gt;As shown from the experiments above, the quality and time to build are much
better on the Form 3 than the Replicator+. Where the MakerBot wins is in
aspects of the user experience. While the iPad app could be snappier, it exists
and works for printing, which is on the right track.
I also wish the Form 3 had a built-in camera that I could watch as I did with
the MakerBot. Since the Form 3 is SLA, I think it would be even more
invigorating to watch because I found myself very interested in watching the
model rise from the “goo”, aka the resin. Overall, the Form 3 is great and I can
only anticipate that they will continue to improve!&lt;/p&gt;

&lt;p&gt;I hope you enjoyed and learned something from this article even if you aren’t in
the market for a 3D printer. In the future, I would love it if products did
automatic support removal, because in the pictures above from the Form 3 any
imperfections actually came from my removal of the support structures&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:22&#34;&gt;&lt;a href=&#34;#fn:22&#34;&gt;22&lt;/a&gt;&lt;/sup&gt;. I would
also love to see some sort of reliable quality monitoring&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:23&#34;&gt;&lt;a href=&#34;#fn:23&#34;&gt;23&lt;/a&gt;&lt;/sup&gt;. While a lot of
progress has been made in the 3D printing space, I cannot wait to see what will
come in the future. The ability to go from a digital file to a physical object
rapidly with many different materials can enable so many folks to create
something they could only imagine in their wildest dreams until now.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;Obviously having two 3D printers is a bit unseemly so I decided after I tried them both I would donate one to a nearby school.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://www.astm.org/Standards/ISOASTM52900.htm&#34;&gt;https://www.astm.org/Standards/ISOASTM52900.htm&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/materialextrusion/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/materialextrusion/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/vatphotopolymerisation/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/vatphotopolymerisation/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;http://www.historyofinformation.com/detail.php?id=3864&#34;&gt;http://www.historyofinformation.com/detail.php?id=3864&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/powderbedfusion/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/powderbedfusion/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;&lt;a href=&#34;https://www.me.utexas.edu/news/news/selective-laser-sintering-birth-of-an-industry&#34;&gt;https://www.me.utexas.edu/news/news/selective-laser-sintering-birth-of-an-industry&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:7&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;&lt;a href=&#34;https://www.sciencedirect.com/topics/chemistry/electron-beam-melting&#34;&gt;https://www.sciencedirect.com/topics/chemistry/electron-beam-melting&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:8&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:9&#34;&gt;&lt;a href=&#34;https://www.protolabs.com/services/3d-printing/multi-jet-fusion/&#34;&gt;https://www.protolabs.com/services/3d-printing/multi-jet-fusion/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:9&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:10&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/materialjetting/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/materialjetting/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:10&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:11&#34;&gt;&lt;a href=&#34;https://www.additivemanufacturing.media/blog/post/am-101-nanoparticle-jetting-npj&#34;&gt;https://www.additivemanufacturing.media/blog/post/am-101-nanoparticle-jetting-npj&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:11&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:12&#34;&gt;&lt;a href=&#34;https://www.sciencedirect.com/science/article/abs/pii/S0924424719312701&#34;&gt;https://www.sciencedirect.com/science/article/abs/pii/S0924424719312701&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:12&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:13&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/binderjetting/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/binderjetting/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:13&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:14&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/directedenergydeposition/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/directedenergydeposition/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:14&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:15&#34;&gt;&lt;a href=&#34;https://www.researchgate.net/publication/328169730_A_new_3D_printing_method_based_on_non-vacuum_electron_beam_technology&#34;&gt;https://www.researchgate.net/publication/328169730_A_new_3D_printing_method_based_on_non-vacuum_electron_beam_technology&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:15&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:16&#34;&gt;&lt;a href=&#34;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/sheetlamination/&#34;&gt;https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/sheetlamination/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:16&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:17&#34;&gt;&lt;a href=&#34;https://3dprintingindustry.com/news/failure-makerbot-expert-weighs-78926/&#34;&gt;https://3dprintingindustry.com/news/failure-makerbot-expert-weighs-78926/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:17&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:18&#34;&gt;&lt;a href=&#34;https://forum.formlabs.com/t/form-3-mixer-arm-problem/25331&#34;&gt;https://forum.formlabs.com/t/form-3-mixer-arm-problem/25331&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:18&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:19&#34;&gt;&lt;a href=&#34;https://www.thingiverse.com/thing:3358129&#34;&gt;https://www.thingiverse.com/thing:3358129&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:19&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:20&#34;&gt;&lt;a href=&#34;https://www.thingiverse.com/thing:832281&#34;&gt;https://www.thingiverse.com/thing:832281&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:20&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:21&#34;&gt;&lt;a href=&#34;https://www.thingiverse.com/thing:513900&#34;&gt;https://www.thingiverse.com/thing:513900&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:21&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:22&#34;&gt;&lt;a href=&#34;https://www.arxiv-vanity.com/papers/1904.12117/&#34;&gt;https://www.arxiv-vanity.com/papers/1904.12117/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:22&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:23&#34;&gt;&lt;a href=&#34;https://arxiv.org/pdf/2003.08749.pdf&#34;&gt;https://arxiv.org/pdf/2003.08749.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:23&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>Size Matters</title>
                <link>https://blog.jessfraz.com/post/size-matters/</link>
                <pubDate>Mon, 25 May 2020 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/size-matters/</guid>
                    <description>&lt;p&gt;My mom has a tendency to buy these really terribly spec&amp;rsquo;d Windows machines.
She&amp;rsquo;s been doing it for as long as I&amp;rsquo;ve been alive. I was surprised when on one
of our latest Zoom calls she said &amp;ldquo;You know what, I&amp;rsquo;m beginning to think that
size matters.&amp;rdquo; I&amp;rsquo;ve only been telling her this for years! Here&amp;rsquo;s the problem.&lt;/p&gt;

&lt;p&gt;There are a bunch of shitty Windows machines you can buy that cost around $400
and have something like 4GB of RAM. For consumers, this is really
compelling; the price seems right. The problem is that when they start trying to use
the machine to do &lt;em&gt;anything&lt;/em&gt;, it runs at a snail&amp;rsquo;s pace and leaves them with the
world&amp;rsquo;s worst user experience. My mom continually complained about how slow her
computer was, and I continually said it&amp;rsquo;s because it&amp;rsquo;s a shit machine and you
have to spend more to actually get good specs.&lt;/p&gt;

&lt;p&gt;Apple wouldn&amp;rsquo;t be caught dead selling a machine with 4GB of RAM. They know
better than that and care about the experience the end user has. My sister has
been lucky enough to never have to buy a computer, since she continually inherits
my old ones. After my mom had finished saying that &amp;ldquo;size matters,&amp;rdquo; my sister
noted that the MacBook Pro I gave her in 2012 still runs great and is fast. This
is no surprise to me because, at the time I bought that computer, it was the top
of its line and had 16GB of RAM. Today, that model goes up to 64GB of RAM, but
16GB is definitely enough for my sister to run a browser and do what she needs
for work (although Chrome is really pushing the limits these days).&lt;/p&gt;

&lt;p&gt;It infuriates me to no end that consumers even have the option of buying a $400
computer that will give them such a terrible experience. The price is great but
the experience is terrible. Even if consumers have a daughter continually
telling them that &amp;ldquo;size matters,&amp;rdquo; they might still make the very innocent
mistake of buying the machine and realizing later that it is a lemon. It is not
their fault. Computer manufacturers should be embarrassed to even sell
such a shit machine. I know I would be.&lt;/p&gt;

&lt;p&gt;A few articles and papers have surfaced lately on migrating threads and processes
to different kernels. One of these is called popcorn&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. Another has been
dubbed teleforking&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;. I&amp;rsquo;m not going to get into the details, but in essence,
what people are trying to do is move
a process from one computer to another. This is great! This could be a huge
problem solver for folks with computers that have terrible specs. It could
also mean a lot for the future of consumer computers.&lt;/p&gt;

&lt;p&gt;Imagine a computer that, when it is running especially hot and your user
experience is being compromised, realizes this and forks your process to
a remote data center, all while maintaining a great user experience locally.
It would need to be seamless and invisible to the end user. If the process is
a GUI, it would still need to render the user interface locally while most of
the compute happens remotely. If the process is a job streaming output into
a terminal, it is a bit easier. Both should be possible.&lt;/p&gt;

&lt;p&gt;Future computers should not have limited computing power, just limited &lt;em&gt;local&lt;/em&gt;
computing power. This wouldn&amp;rsquo;t need to just be for your laptops or desktops,
your VR headset or gaming console could fork processes to other available
computers when they needed more computing power. The remote compute would not
always need to be in a data center. An overburdened laptop could fork a process
to your gaming console while you were at work and vice-versa while you were
playing a game.&lt;/p&gt;

&lt;p&gt;Compute should be easily shared and readily available. While consumers should
not even have an option of buying a machine with terrible specs that lead to
a terrible user experience, the ability to offload processes to another computer
would allow them to have a great experience even on a lemon. As I see it, this
should be the future of consumer computing. People should be able to create
anything they imagine on a computer that gives them unlimited power to do so. To
quote one of my favorite lines from Halt and Catch Fire: &amp;ldquo;Computers aren&amp;rsquo;t the
thing. They&amp;rsquo;re the thing that gets us to the thing.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/computers-arent-the-thing.gif&#34; alt=&#34;computers-arent-the-thing&#34; /&gt;&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;https://www.ssrg.ece.vt.edu/theses/MS_Katz.pdf&#34;&gt;https://www.ssrg.ece.vt.edu/theses/MS_Katz.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://thume.ca/2020/04/18/telefork-forking-a-process-onto-a-different-computer/&#34;&gt;https://thume.ca/2020/04/18/telefork-forking-a-process-onto-a-different-computer/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>Where is the high bandwidth internet for the masses?</title>
                <link>https://blog.jessfraz.com/post/where-is-the-high-bandwidth-internet-for-the-masses/</link>
                <pubDate>Sun, 03 May 2020 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/where-is-the-high-bandwidth-internet-for-the-masses/</guid>
                    <description>

&lt;p&gt;Being cooped up at home got me looking into the new Xbox and PlayStation 5.
I was curious about the innovations in the consoles since their predecessors. Both
claim to have ray tracing and support for 8K graphics. This got me thinking
about how prevalent 8K televisions are today. 8K televisions seem to be in the
same state as 4K televisions a few years ago. One thing I have learned in my life
is that pixel density will continue to increase. I almost wonder if
there is a Moore’s Law equivalent for pixel density… let’s take a look at
televisions through the years.&lt;/p&gt;

&lt;h2 id=&#34;pixel-density-through-the-years&#34;&gt;Pixel density through the years&lt;/h2&gt;

&lt;h3 id=&#34;standard-definition-television&#34;&gt;Standard definition television&lt;/h3&gt;

&lt;p&gt;The first electronic television was invented in 1927&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. Cable television systems
originated in the United States in the late 1940s and were designed to improve
reception of commercial network broadcasts in remote and hilly areas&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;. We can
consider the televisions of this time to be what we know of as “standard
definition.” Standard definition television (SDTV) is designed on the assumption
that viewers in the typical home setting are located at a distance equal to six
or seven times the height of the picture screen — on average some 10 feet away.&lt;/p&gt;

&lt;h3 id=&#34;high-definition-television&#34;&gt;High definition television&lt;/h3&gt;

&lt;p&gt;High definition television (HDTV) has its roots in research that was started by
Japan’s public broadcaster, NHK, in 1970&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;. For comparison, a 1080i HDTV signal
offers about six times the resolution of a conventional 480i SDTV signal. HDTV
also features a wider 16:9 aspect ratio format that more closely resembles human
peripheral vision than the 4:3 aspect ratio used by conventional TVs in the
past. Furthermore, HDTV is based on a system of 3 primary image signal
components rather than a single composite signal, thus eliminating the need for
signal encoding and decoding processes that can degrade image quality. Perhaps
the biggest advantage over the old analog SDTV system is that HDTV is an
inherently digital system.&lt;/p&gt;

&lt;h3 id=&#34;4k-resolution&#34;&gt;4K resolution&lt;/h3&gt;

&lt;p&gt;In 1984, Hitachi released the CMOS (complementary metal–oxide–semiconductor)
graphics processor ARTC HD63484, which was capable of displaying up to 4K
resolution when in monochrome mode. The first displays capable of displaying 4K
content appeared in 2001, as the IBM T220/T221 LCD monitors&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&#34;8k-resolution&#34;&gt;8K resolution&lt;/h3&gt;

&lt;p&gt;Just like with HDTV, Japan&amp;rsquo;s public broadcaster, NHK, was the first to start
research and development of 8K resolution in 1995. The format was standardized
in October 2007 and the interface was standardized in August 2010 and
recommended as the international standard for television in 2012. The world&amp;rsquo;s
first 8K television was unveiled by Sharp at the Consumer Electronics Show (CES)
in 2012&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;. Screenings of the 2014 Winter Olympics in Sochi and the FIFA World Cup in
Brazil in June 2014 were done in 8K&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;While there was a huge gap in time between HD and 4K televisions, HDTV continued
to get better during those years. It is not as if the industry was
stagnant; more and better HDTVs were made. It might be fun to
bet that pixel density will double every ten years, but that
would just be presumptuous.&lt;/p&gt;

&lt;h2 id=&#34;what-does-this-mean-for-bandwidth&#34;&gt;What does this mean for bandwidth?&lt;/h2&gt;

&lt;p&gt;While it is fun to stick a finger in the air and try to estimate future pixel
density growth, there is another point I want to make. If the next wave of
consumer televisions is 8K, what does that mean for streaming? Surely, this must
have an effect on bandwidth.&lt;/p&gt;

&lt;p&gt;For streaming HD, most providers recommend about 18 Mbps. For streaming 4K,
providers recommend 25 Mbps. For streaming 8K, providers recommend 100 Mbps. This
comes from the fact that 8K televisions have a frame rate of 120fps (frames per
second), in contrast to 4K televisions, which run at either 30fps or 60fps.
It’s also important to note that this doesn’t take into account that
multiple devices on a network typically share the same bandwidth, so if your TV
needs 100 Mbps, it is going to be shared with a computer, an iPad, multiple
phones, IoT devices, and whatever else is on your network. Typically you would
want a multiple of the recommended speed so you can have multiple devices
connected at the same time.&lt;/p&gt;
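
&lt;p&gt;To make the sharing point concrete, here is a back-of-the-envelope sketch. The device figures and the 1.5x headroom factor are my own illustrative assumptions, not provider recommendations:&lt;/p&gt;

```rust
// Rough estimate of the connection a household would need. The "other
// devices" figure and the headroom factor are made-up examples.
fn required_bandwidth_mbps(stream_mbps: u32, other_devices_mbps: u32, headroom_factor: f64) -> f64 {
    (stream_mbps + other_devices_mbps) as f64 * headroom_factor
}

fn main() {
    // One 8K stream (100 Mbps) plus, say, 25 Mbps for laptops/phones/IoT,
    // with 1.5x headroom so no single device saturates the link.
    let needed = required_bandwidth_mbps(100, 25, 1.5);
    println!("{needed} Mbps"); // 187.5 Mbps, well above typical averages
}
```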

&lt;p&gt;Let’s take a look at average network speeds. According to a report&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34;&gt;7&lt;/a&gt;&lt;/sup&gt; by
speedtest.net in 2018&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34;&gt;8&lt;/a&gt;&lt;/sup&gt;, the average network download speed in the United States
was 96.25 Mbps&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:9&#34;&gt;&lt;a href=&#34;#fn:9&#34;&gt;9&lt;/a&gt;&lt;/sup&gt;. In the United Kingdom, the average was 50.16 Mbps&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:10&#34;&gt;&lt;a href=&#34;#fn:10&#34;&gt;10&lt;/a&gt;&lt;/sup&gt;,
and in Spain, 60.12 Mbps&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:11&#34;&gt;&lt;a href=&#34;#fn:11&#34;&gt;11&lt;/a&gt;&lt;/sup&gt;. The United States seems
to be the highest overall, but that is still not cutting it for 8K streaming.&lt;/p&gt;

&lt;p&gt;If we know that the pixel density of televisions is only going to increase over
time, causing streaming services to need more bandwidth, why are we not seeing
a bunch of fiber being laid down or other innovations to get faster internet to
the mass market?&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:12&#34;&gt;&lt;a href=&#34;#fn:12&#34;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;You might remember that Google Fiber was trying to do this exact thing.
The problem the Google Fiber project ran into is what is known as the “last mile”
problem. The “last mile” is the last bit of cable that gets the connection to your
home or business; these are known as drop cables. Most folks have existing cable
lines running to their home, which leaves fiber providers deciding between using
those existing lines, causing a decrease in speed, or laying new fiber lines, which
is very expensive. Google Fiber chose the latter and paid dearly for that decision.&lt;/p&gt;

&lt;p&gt;Fiber is similar to public infrastructure like a freeway: you need to put in the
investment upfront, but it will pay dividends over time. Most companies do not get
that and want an economic return upfront.&lt;/p&gt;

&lt;p&gt;The solution to the “last mile” problem might be wireless, which leads us to the
current innovations with satellites.&lt;/p&gt;

&lt;h2 id=&#34;what-about-satellites&#34;&gt;What about satellites?&lt;/h2&gt;

&lt;p&gt;Startups like Astranis&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:13&#34;&gt;&lt;a href=&#34;#fn:13&#34;&gt;13&lt;/a&gt;&lt;/sup&gt; claim to be able to provide broadband internet to the
masses through satellites. Astranis’ first satellite will offer 7.5 gigabits per
second of capacity for Pacific Dataport to use&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:14&#34;&gt;&lt;a href=&#34;#fn:14&#34;&gt;14&lt;/a&gt;&lt;/sup&gt;. Elon’s Starlink&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:15&#34;&gt;&lt;a href=&#34;#fn:15&#34;&gt;15&lt;/a&gt;&lt;/sup&gt; has the same
ambitious mission. Starlink claims they will offer plans to consumers with
speeds up to a gigabit per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:16&#34;&gt;&lt;a href=&#34;#fn:16&#34;&gt;16&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;While most of the world is spending time at home, video chatting and streaming
services have become household essentials. As the world continues to move from
physical to digital at a rapid pace, we should see high bandwidth internet take
a front and center role. As someone who has long dreamed of fiber and super
fast internet for everyone, I can’t wait to see what comes of this. Whether by
fiber or satellite, I hope we can reach massive bandwidth speeds across the
world.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;https://bebusinessed.com/history/history-of-the-television/&#34;&gt;https://bebusinessed.com/history/history-of-the-television/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://www.britannica.com/technology/standard-definition-television&#34;&gt;https://www.britannica.com/technology/standard-definition-television&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://ecee.colorado.edu/~ecen4242/marko/TV_History/related%20standards/HDTV_Past.htm&#34;&gt;https://ecee.colorado.edu/~ecen4242/marko/TV_History/related%20standards/HDTV_Past.htm&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;&lt;a href=&#34;https://en.wikipedia.org/wiki/4K_resolution&#34;&gt;https://en.wikipedia.org/wiki/4K_resolution&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;https://www.businesstoday.in/technology/launch/ces-2013-sharp-showcases-worlds-first-8k-tv/story/191438.html&#34;&gt;https://www.businesstoday.in/technology/launch/ces-2013-sharp-showcases-worlds-first-8k-tv/story/191438.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://en.wikipedia.org/wiki/8K_resolution&#34;&gt;https://en.wikipedia.org/wiki/8K_resolution&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;It’s definitely worth understanding &lt;em&gt;why&lt;/em&gt; these numbers are so low. It is hard to know from just the data. It might be overall network capacity or possibly the network backbone is at capacity while the “last mile” is not. The former could be solved by more capacity as noted throughout this article, but the latter could be solved by more and better CDNs. Thanks to Alex Rasmussen for pointing this out!
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:7&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;I tried to find more recent numbers that were not from a sketchy source and couldn’t, would love if anyone knows of any.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:8&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:9&#34;&gt;&lt;a href=&#34;https://www.speedtest.net/reports/united-states/2018/#fixed&#34;&gt;https://www.speedtest.net/reports/united-states/2018/#fixed&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:9&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:10&#34;&gt;&lt;a href=&#34;https://www.speedtest.net/reports/united-kingdom/#fixed&#34;&gt;https://www.speedtest.net/reports/united-kingdom/#fixed&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:10&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:11&#34;&gt;&lt;a href=&#34;https://www.speedtest.net/reports/spain/#fixed&#34;&gt;https://www.speedtest.net/reports/spain/#fixed&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:11&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:12&#34;&gt;Another great question is &lt;em&gt;if&lt;/em&gt; consumers had 100mbps, would their (typically shitty) wifi setups even let them reap the benefits? Thanks to Scott Andreas for that great question!
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:12&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:13&#34;&gt;&lt;a href=&#34;https://www.astranis.com&#34;&gt;https://www.astranis.com&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:13&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:14&#34;&gt;&lt;a href=&#34;https://spacenews.com/astranis-will-share-a-falcon-9-for-2020-small-geo-launch/&#34;&gt;https://spacenews.com/astranis-will-share-a-falcon-9-for-2020-small-geo-launch/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:14&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:15&#34;&gt;&lt;a href=&#34;https://www.space.com/spacex-starlink-satellites.html&#34;&gt;https://www.space.com/spacex-starlink-satellites.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:15&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:16&#34;&gt;&lt;a href=&#34;https://www.fastcompany.com/90458407/spacex-satellite-broadband&#34;&gt;https://www.fastcompany.com/90458407/spacex-satellite-broadband&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:16&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>The Art of Automation</title>
                <link>https://blog.jessfraz.com/post/the-art-of-automation/</link>
                <pubDate>Sat, 18 Apr 2020 06:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-art-of-automation/</guid>
                    <description>

&lt;p&gt;I am unsure if my love of automation comes from a dislike of doing the same thing twice or an overall desire to be more productive and make everything more efficient. Like a lot of programmers, I often ask myself “can this be scripted?” when I find myself doing a manual task.&lt;/p&gt;

&lt;p&gt;I was inspired recently by reading Wolfram’s writing on his personal infrastructure for productivity&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. I, too, have written about my personal infrastructure&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;, but not at the level of depth or with the same focus on productivity as Wolfram. There is no time like the present to take another swing at it!&lt;/p&gt;

&lt;p&gt;Not only do I want to touch on some of the ways I have automated tasks in my life, I also want to spend some time unpacking how common automation patterns are starting to appear in a lot of things people use day to day. Apple’s Shortcuts, home automation, and IFTTT make automation patterns available to the masses in a way that is unprecedented. Before diving into the details, let’s first answer the question of why.&lt;/p&gt;

&lt;h2 id=&#34;why-automate&#34;&gt;Why automate?&lt;/h2&gt;

&lt;p&gt;Time is one of the most valuable resources in the world. If there was something you could do to free more time for yourself, why wouldn’t you? When I automate myself out of a task I transfer the burden of doing said task to some other script, service, API, or a combination of all of these.&lt;/p&gt;

&lt;p&gt;I, personally, feel at my best and most productive when I am building something, solving a problem, or learning something new. None of those includes “doing something manual that could otherwise be scripted/automated away.” Or when it does… I automate it away. Whenever I find myself in a position to automate something, I will automate it, especially if I consider the time spent automating the task to be less than the time I would otherwise spend in the future doing the task manually. This is the ultimate payoff: time.&lt;/p&gt;

&lt;p&gt;I like to think of automation as the following equation.&lt;/p&gt;

&lt;p&gt;$$
\begin{equation}
\text{time gained} = (\text{time doing task manually}) - (\text{time to automate task})
\end{equation}
$$&lt;/p&gt;

&lt;p&gt;It’s important to note that in the equation above, the time to automate the task also includes any future bugs you might have to fix in the automation itself.&lt;/p&gt;
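
&lt;p&gt;As a toy example of plugging numbers into the equation (all numbers here are made up): a 10-minute weekly task automated in 4 hours starts paying off after about half a year.&lt;/p&gt;

```rust
// Toy break-even calculation for the equation above. The manual time is
// cumulative over every future run of the task; the numbers are examples.
fn time_gained_minutes(manual_minutes_per_run: u32, runs: u32, automation_minutes: u32) -> i64 {
    (manual_minutes_per_run as i64 * runs as i64) - automation_minutes as i64
}

fn main() {
    // A 10-minute weekly task automated in 4 hours (240 minutes):
    // break-even at 24 weeks, 20 minutes ahead after 26 weeks.
    println!("{}", time_gained_minutes(10, 26, 240)); // 20
}
```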

&lt;p&gt;By automating tasks, I can focus my time on doing the things where I feel at my best and most productive while at the same time being more efficient and getting more done.&lt;/p&gt;

&lt;h2 id=&#34;some-recent-automations&#34;&gt;Some recent automations&lt;/h2&gt;

&lt;p&gt;For myself, when I automate things, I tend to start by making it work and then making it pretty. In our equation above, the goal is to keep the time spent automating the task to a minimum. By getting the thing to work first without wasting time on making it pretty, I find I gain the most time and am most productive. Cleaning up whatever mess I made with scripts or APIs is a lot easier and faster once it works. You can imagine that most of my automations start out looking like a Rube Goldberg machine. Let’s dive into some of the most recent things I have automated.&lt;/p&gt;

&lt;h3 id=&#34;on-boarding-new-hires&#34;&gt;On-boarding new hires&lt;/h3&gt;

&lt;p&gt;For our startup, we have been hiring quite quickly. I wanted to make sure our on-boarding process was streamlined and consistent. Adding new folks to GSuite, Zoom, and GitHub teams manually is a huge waste of time and tends to lead to human error. So I automated on-boarding new folks into all our tools with a Rust script. This was also a nice excuse for me to tinker with the Rust programming language. I basically automated the role of CIO in Rust.&lt;/p&gt;

&lt;p&gt;Now when folks join the company, they get added to a config file which then automatically sets up an email account in GSuite, creates them a Zoom account, adds them to all the right GitHub teams, and then sends an email to them outlining all the tools and their accounts. It’s been improved with every new hire’s feedback as well, which makes it even better.&lt;/p&gt;
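&lt;p&gt;The config-driven flow described above could be sketched as follows. To be clear, the real implementation is in Rust at oxidecomputer/cio; every name, field, and helper in this Python sketch is invented for illustration:&lt;/p&gt;

```python
# Hypothetical sketch of config-driven on-boarding. None of these names or
# steps come from the real cio repo; they just mirror the flow in the post.
from dataclasses import dataclass


@dataclass
class NewHire:
    name: str
    username: str
    github_teams: list


def onboard(hire: NewHire) -> list:
    """Return the list of provisioning steps for one new hire."""
    steps = [
        f"create GSuite account {hire.username}@example.com",
        f"create Zoom account for {hire.name}",
    ]
    steps += [f"add {hire.username} to GitHub team {t}" for t in hire.github_teams]
    steps.append(f"email {hire.username} an outline of all their accounts")
    return steps


# Adding someone to the config file triggers every step automatically.
for hire in [NewHire("Ada", "ada", ["eng", "oncall"])]:
    for step in onboard(hire):
        print(step)
```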

&lt;p&gt;We open sourced a lot of the libs I used in Rust for doing this at
&lt;a href=&#34;https://github.com/oxidecomputer/cio&#34;&gt;oxidecomputer/cio&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;newsletter-rss-feeds&#34;&gt;Newsletter RSS feeds&lt;/h3&gt;

&lt;p&gt;Another thing I recently automated was clearing out all the newsletters that land in my email inbox every day. Most of these are subscriptions to people’s blogs, like The Morning Paper&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;. Since most of these are actually RSS feeds, I now pipe the RSS feed updates to Pocket&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; instead. This way I can check Pocket every day for my list of things to read, rather than using my email inbox for that. I found that when these newsletters were going to my inbox, I never actually read them: I tend to use my inbox as a TODO list, and I would archive the newsletters right away since they aren’t a priority. Now, I keep my inbox clear of clutter &lt;em&gt;and&lt;/em&gt; actually have a place for storing things I want to read later.&lt;/p&gt;

&lt;h3 id=&#34;gmail-filters&#34;&gt;Gmail filters&lt;/h3&gt;

&lt;p&gt;Speaking of email inboxes… I am an absolute stickler about Gmail filters. At the time of writing this article, I have 72 different filters. I constantly improve my labeling and automatic archiving of emails through a configuration file. This management system is a little Go tool I made for Gmail filters called gmailfilters&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;For mailing lists, I tend to archive the messages unless they are sent directly to me, whether in &lt;code&gt;cc&lt;/code&gt; or &lt;code&gt;to&lt;/code&gt;. This keeps my inbox clean, while also making sure each mailing list gets sorted into its own Gmail label so I can easily view all the messages if I need to. By maintaining Gmail filters in a configuration file, versus the user interface, I save a bunch of time trying to find the filter I want to edit, editing it, and saving it. Also, if I make a mistake and want to revert it, I now have a git history of past filters, so this is as simple as &lt;code&gt;git revert&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;These are just a few of the things I automate for my day to day life. If you are interested in more of these, please refer to my original posts on my personal infrastructure&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;For developers, automation is not a new concept; we deal with its patterns day to day through continuous integration (CI) and continuous delivery (CD). What is interesting is that, for the rest of the world, those same patterns of automation are starting to play a role in consumer products.&lt;/p&gt;

&lt;h2 id=&#34;automation-for-the-masses&#34;&gt;Automation for the masses&lt;/h2&gt;

&lt;h3 id=&#34;apple-shortcuts&#34;&gt;Apple Shortcuts&lt;/h3&gt;

&lt;p&gt;Recently, I switched back to an iPhone and got an iPad. I was delighted to play with the new “Shortcuts” feature. A Shortcut allows users to chain multiple tasks together into one streamlined action. For example, you could create a shortcut for your commute home from the office that gets the latest traffic report, plays your favorite new podcast on the drive, then turns on your lights when you get home (assuming you have smart lights). You can build anything you like depending on the apps you have installed and your preferences. It’s really quite extensible, while also being approachable by the mass market of iPhone adopters. In the age of COVID-19 and working from home, I&amp;rsquo;m sure you can think of a different example ;)&lt;/p&gt;

&lt;h3 id=&#34;home-automation&#34;&gt;Home automation&lt;/h3&gt;

&lt;p&gt;Speaking of lights that can automatically turn on, home automation is another way that wider audiences can create automation patterns for themselves. Between Google Home, Apple’s Homekit, and Amazon Alexa adoption, more and more folks are seeing the power that technology can unleash and time saved by automating everyday tasks. Most of these devices have a concept of creating and using “routines” to chain multiple tasks together.&lt;/p&gt;

&lt;p&gt;For example, when I leave the house, turn off all the lights, set the temperature so the AC is no longer running, and turn on the security system.  Or, when I say it&amp;rsquo;s time for bed, turn off all the lights and set the security system to &amp;ldquo;on and home.&amp;rdquo; This user experience and ease of use enables consumers to boost their productivity and save time in the same way a developer would through programming and scripting.&lt;/p&gt;

&lt;p&gt;There is, of course, a darker side to IoT devices if consumers are uneducated. Whether it’s your lightbulbs, thermostat, home security system, or refrigerator, it is important to research the security of the IoT devices you buy.&lt;/p&gt;

&lt;h3 id=&#34;ifttt&#34;&gt;IFTTT&lt;/h3&gt;

&lt;p&gt;If-this-then-that (&lt;a href=&#34;https://ifttt.com/&#34;&gt;IFTTT&lt;/a&gt;) has been around for quite some time, but I wanted to take the time to call it out as an early way that automation was brought to a larger audience without people having to program. IFTTT clones are a dime a dozen now. There is &lt;a href=&#34;https://zapier.com/home&#34;&gt;Zapier&lt;/a&gt;, &lt;a href=&#34;https://github.com/huginn/huginn&#34;&gt;Huginn&lt;/a&gt;, and &lt;a href=&#34;https://automate.io/&#34;&gt;automate.io&lt;/a&gt;, just to name a few. All these products have one thing in common: promoting personal productivity through combining and chaining various tasks together into a single, automated workflow.&lt;/p&gt;

&lt;h2 id=&#34;productivity-progress&#34;&gt;Productivity progress&lt;/h2&gt;

&lt;p&gt;I am glad that the patterns of automation have started to make their way into the mainstream, so that mass-market consumers can, without programming, see the same productivity gains developers achieve through scripting. The user interface might be different, but the goal is the same: saving time and eliminating the need to do manual tasks repeatedly. The feeling developers get after writing a script to make their life easier should not be exclusive. I hope in the future we continue to see easier and more creative ways to automate while granting the same automation superpowers to everyone, not just programmers.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;https://writings.stephenwolfram.com/2019/02/seeking-the-productive-life-some-details-of-my-personal-infrastructure/&#34;&gt;https://writings.stephenwolfram.com/2019/02/seeking-the-productive-life-some-details-of-my-personal-infrastructure/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/&#34;&gt;https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://blog.acolyer.org/&#34;&gt;https://blog.acolyer.org/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;&lt;a href=&#34;https://getpocket.com/&#34;&gt;https://getpocket.com/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;https://github.com/jessfraz/gmailfilters&#34;&gt;https://github.com/jessfraz/gmailfilters&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://blog.jessfraz.com/post/personal-infrastructure/&#34;&gt;https://blog.jessfraz.com/post/personal-infrastructure/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>The Life of a Data Byte</title>
                <link>https://blog.jessfraz.com/post/the-life-of-a-data-byte/</link>
                <pubDate>Sun, 08 Mar 2020 08:09:26 +0000</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-life-of-a-data-byte/</guid>
                    <description>

&lt;p&gt;A byte of data has been stored in a number of different ways as newer, better, and faster mediums of
storage are introduced. A byte is a unit of digital information that most commonly refers to eight bits.
A bit is a unit of information that can be expressed as 0 or 1, representing logical state.&lt;/p&gt;

&lt;p&gt;In the case of paper cards, a bit was stored as the presence or absence of a hole in the card at a specific place.
If we go even further back in time to Babbage&amp;rsquo;s Analytical Engine, a bit was stored as the position of a mechanical gear
or lever. For magnetic storage devices, such as tapes and disks, a bit is represented by the polarity of a certain area
of the magnetic film. In modern dynamic random-access memory (DRAM), a bit is often represented as two levels of
electrical charge stored in a capacitor, a device that stores electrical energy in an electric field.&lt;/p&gt;

&lt;p&gt;In June 1956, Werner Buchholz&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt; coined the word byte&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt; to refer to a group of bits used to encode a single character of
text&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;. Let’s go over a bit about character encoding. We will start with the American Standard Code for Information
Interchange, or ASCII. ASCII was based on the English alphabet, so every letter, digit, and printable symbol
(a-z, A-Z, 0–9, +, -, /, “, !, etc.) was represented as a 7-bit integer between 32 and 126. This wasn’t very
friendly to other languages. In order to support other languages, Unicode extended ASCII. With Unicode,
each character is represented as a code point: for example, a lowercase j is U+006A, where
the U+ signals Unicode and the rest is a hexadecimal number.&lt;/p&gt;

&lt;p&gt;UTF-8 encodes characters in units of eight bits, allowing every code point between 0 and 127
to be stored in a single byte. If we think back to ASCII this is fine for English characters, but other
languages’ characters often take two or more bytes. Similarly, UTF-16 encodes characters in 16-bit units
and UTF-32 in 32-bit units. In ASCII every character is one byte; in Unicode that’s often not true: a
character can be 1, 2, 3, or more bytes. Throughout this article you will see different sized groupings
of bits; historically, the number of bits in a byte varied with the design of the storage medium.&lt;/p&gt;
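&lt;p&gt;These size differences are easy to see in practice. For instance, Python can show how many bytes a character takes in each encoding:&lt;/p&gt;

```python
# Encoding the letter 'j' (code point U+006A) in the three Unicode encodings,
# plus a character outside the ASCII range for comparison.
assert ord("j") == 0x6A

print(len("j".encode("utf-8")))      # 1 byte: code points 0-127 fit in one byte
print(len("é".encode("utf-8")))      # 2 bytes: U+00E9 is outside ASCII
print(len("j".encode("utf-16-le")))  # 2 bytes: one 16-bit unit
print(len("j".encode("utf-32-le")))  # 4 bytes: always one 32-bit unit
```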

&lt;p&gt;This article is going to travel in time through various mediums of storage as an exercise of diving into
how we have stored data through history. By no means will this include every single storage medium ever
manufactured, sold, or distributed. This article is meant to be fun and informative while not being
encyclopedic. Let’s get started. Let’s assume we have a byte of data to be stored: the letter &lt;code&gt;j&lt;/code&gt;, or
as an encoded byte &lt;code&gt;6a&lt;/code&gt; or in binary &lt;code&gt;01101010&lt;/code&gt;. As we travel through time, our data byte will come into
play in some of the storage technologies we cover. Finally, the article will
wrap up with a look at the current and future technologies for storage.&lt;/p&gt;

&lt;h2 id=&#34;1951&#34;&gt;1951&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/UNIVAC.jpg&#34; alt=&#34;UNIVAC&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;http://www.ricomputermuseum.org/Home/interesting_computer_items/univac-magnetic-tape&#34;&gt;http://www.ricomputermuseum.org/Home/interesting_computer_items/univac-magnetic-tape&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Our story starts in 1951 with the UNIVAC UNISERVO tape drive for the UNIVAC 1 computer. This was the first
tape drive made for a commercial computer. The tape was a thin, ½ inch wide, 1,200 foot long strip of
nickel-plated phosphor bronze (called Vicalloy) that weighed three pounds. Our data byte could be stored
at a rate of 7,200 characters per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; on tape moving at 100 inches per second. At this point in history,
you could measure the speed of a storage algorithm by the distance the tape traveled.&lt;/p&gt;

&lt;h2 id=&#34;1952&#34;&gt;1952&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/ibm-726.jpg&#34; alt=&#34;IBM 726&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.ibm.com/ibm/history/exhibits/storage/storage_PH5-24.html&#34;&gt;https://www.ibm.com/ibm/history/exhibits/storage/storage_PH5-24.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s fast forward a year to May 21st, 1952 when IBM announced their first magnetic tape unit, the IBM 726.
Our data byte could now be moved off UNISERVO metal tape onto IBM’s magnetic tape. This new home would be
super cozy for our very small data byte since the tape could store up to 2 million digits. This magnetic
7 track tape moved at 75 inches per second with a transfer rate of 12,500 digits&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt; or 7,500 characters&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;
&amp;nbsp;(called copy groups at the time) per second. For reference, this article has 34,128 characters.&lt;/p&gt;

&lt;p&gt;7 track tapes had six tracks for data and one to maintain parity by ensuring that the total number of
1-bits in the string was even or odd. Data was recorded at 100 bits per linear inch. This system used a
“vacuum channel” method of keeping a loop of tape circulating between two points. This allowed the tape
drive to start and stop the tape in a split second. This was done by placing long vacuum columns between
the tape reels and the read/write heads to absorb sudden increases in tension in the tape, without which
the tape would have typically broken. A removable plastic ring in the back of the tape reel provided write
protection. About 1.1 megabytes could be stored on one reel of tape&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34;&gt;7&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
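&lt;p&gt;The parity scheme on the seventh track can be sketched in a few lines. The sketch below assumes even parity, where the extra bit makes the total number of 1-bits even:&lt;/p&gt;

```python
def parity_bit(bits: str) -> int:
    """Even-parity bit for a string of '0'/'1' characters: the extra bit
    that makes the total count of 1-bits even."""
    return bits.count("1") % 2

# Six data bits plus one parity bit, as in a 7 track tape frame.
data = "110100"                        # three 1-bits, so the parity bit is 1
frame = data + str(parity_bit(data))
print(frame)                           # '1101001': total 1-bits is now even
assert frame.count("1") % 2 == 0       # a failed check means a bit flipped
```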

&lt;p&gt;If you think back to VHS tapes, what was required before returning a movie to Blockbuster? Rewinding the
tape! The same was true of tape used for computers. Programs could not hop around a tape, or
randomly access data; they had to read and write in sequential order.&lt;/p&gt;
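&lt;p&gt;That sequential constraint matters for performance: reaching a record near the end of a tape means winding past everything before it, whereas random access jumps straight there. A toy model makes the cost obvious:&lt;/p&gt;

```python
# Toy model: tape must pass over every prior record; disk seeks directly.
def tape_read(records, index):
    """Sequential access: count how many records go by to reach the target."""
    passed = 0
    for i, record in enumerate(records):
        passed += 1
        if i == index:
            return record, passed  # touched every record up to the target

def disk_read(records, index):
    """Random access: one seek, regardless of where the record lives."""
    return records[index], 1

records = list(range(1000))
print(tape_read(records, 999))  # (999, 1000): the whole tape went by
print(disk_read(records, 999))  # (999, 1)
```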

&lt;h2 id=&#34;1956&#34;&gt;1956&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/ramac.jpg&#34; alt=&#34;RAMAC&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.computerhistory.org/revolution/memory-storage/8/233&#34;&gt;https://www.computerhistory.org/revolution/memory-storage/8/233&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we move ahead a few years to 1956, the era of magnetic disk storage began with IBM’s completion of a
RAMAC 305 computer system to deliver to Zellerbach Paper in San Francisco&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34;&gt;8&lt;/a&gt;&lt;/sup&gt;. This computer was the first to
use a moving-head hard disk drive. The RAMAC disk drive consisted of fifty magnetically coated 24 inch diameter
metal platters capable of storing about five million characters of data, 7 bits per character, and spinning at
1,200 revolutions per minute. The storage capacity was about 3.75 megabytes.&lt;/p&gt;

&lt;p&gt;RAMAC allowed real-time random access to large amounts of data, unlike magnetic tape or punch cards.
IBM advertised the RAMAC as being able to store the equivalent of 64,000 punched cards&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:9&#34;&gt;&lt;a href=&#34;#fn:9&#34;&gt;9&lt;/a&gt;&lt;/sup&gt;. Prior to the
RAMAC, transactions were held until a group of data was accumulated and batch processed. The RAMAC introduced
the concept of continuously processing transactions as they occurred so data could be retrieved immediately
when it was fresh. Our data byte could now be accessed in the RAMAC at 100,000 bits per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:10&#34;&gt;&lt;a href=&#34;#fn:10&#34;&gt;10&lt;/a&gt;&lt;/sup&gt;. Prior to this,
with tapes, we had to write and read sequential data and could not randomly jump to various parts of the tape.
Real-time random access of data was truly revolutionary at this time.&lt;/p&gt;

&lt;h2 id=&#34;1963&#34;&gt;1963&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/dectape.jpg&#34; alt=&#34;dectape&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.computerhistory.org/timeline/1963/&#34;&gt;https://www.computerhistory.org/timeline/1963/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s fast forward to 1963 when DECtape was introduced. Its name came from the Digital Equipment
Corporation, known as DEC for short. DECtape was inexpensive and reliable, so it was used in many generations
of DEC computers. It was a ¾ inch tape, laminated and sandwiched between two layers of mylar, on a
four inch reel.&lt;/p&gt;

&lt;p&gt;DECtape could be carried by hand, as opposed to its weighty and large predecessors, making it great for personal
computers. In contrast to 7 track tape, DECtape had 6 data tracks, 2 mark tracks, and 2 clock tracks. Data was
recorded at 350 bits per inch. Our 8-bit data byte, padded out to a 12-bit word, could be transferred
to DECtape at 8,325 12-bit words per second with a tape speed of 93 ±12 inches per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:11&#34;&gt;&lt;a href=&#34;#fn:11&#34;&gt;11&lt;/a&gt;&lt;/sup&gt;. This is 8% more digits
per second than the UNISERVO metal tape in 1951.&lt;/p&gt;

&lt;h2 id=&#34;1967&#34;&gt;1967&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/minnow.jpg&#34; alt=&#34;minnow&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.computerhistory.org/revolution/memory-storage/8/261/1080&#34;&gt;https://www.computerhistory.org/revolution/memory-storage/8/261/1080&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Four years later in 1967, a small team at IBM started working on the IBM floppy disk drive, codenamed Minnow&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:12&#34;&gt;&lt;a href=&#34;#fn:12&#34;&gt;12&lt;/a&gt;&lt;/sup&gt;.
At the time, the team was tasked with developing a reliable and inexpensive way to load microcode into the
IBM System/370 mainframes&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:13&#34;&gt;&lt;a href=&#34;#fn:13&#34;&gt;13&lt;/a&gt;&lt;/sup&gt;.  The project then got reassigned and repurposed to load microcode into the controller
for the IBM 3330 Direct Access Storage Facility, codenamed Merlin.&lt;/p&gt;

&lt;p&gt;Our data byte could now be stored on read-only 8-inch flexible Mylar disks coated with magnetic material, which
are today known as floppy disks. At the time of release, the result of the project was named the IBM 23FD Floppy
Disk Drive System. The disks could hold 80 kilobytes of data. Unlike hard drives, a user could easily transfer a
floppy in its protective jacket from one drive to another. Later in 1973, IBM released a read/write floppy disk
drive, which then became an industry standard&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:14&#34;&gt;&lt;a href=&#34;#fn:14&#34;&gt;14&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&#34;1969&#34;&gt;1969&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/apollo-rope-memory.jpg&#34; alt=&#34;apollo-rope-memory.jpg&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://spectrum.ieee.org/tech-history/space-age/software-as-hardware-apollos-rope-memory&#34;&gt;https://spectrum.ieee.org/tech-history/space-age/software-as-hardware-apollos-rope-memory&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In 1969, the Apollo Guidance Computer (AGC) read-only rope memory was launched into space aboard the Apollo 11
mission, which carried American astronauts to the Moon and back. This rope memory was made by hand and could
hold 72 kilobytes of data. Manufacturing rope memory was laborious, slow, and required skills analogous to
textile work; it could take months to weave a program into the rope memory&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:15&#34;&gt;&lt;a href=&#34;#fn:15&#34;&gt;15&lt;/a&gt;&lt;/sup&gt;. But it was the right tool for
the job at the time to resist the harsh rigors of space. When a wire went through one of the circular cores
it represented a 1. Wires that went around a core represented a 0. Our data byte would take a human a few
minutes (estimated) to weave into the rope.&lt;/p&gt;

&lt;h2 id=&#34;1977&#34;&gt;1977&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/datasette.jpg&#34; alt=&#34;datasette&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://en.wikipedia.org/wiki/File:Commodore-Datasette-C2N-Mk2-Front.jpg&#34;&gt;https://en.wikipedia.org/wiki/File:Commodore-Datasette-C2N-Mk2-Front.jpg&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s fast forward to 1977 when the Commodore PET, the first (successful) mass-market personal computer, was
released. Built into the PET was a Commodore 1530 Datasette, meaning data plus cassette. The PET converted
data into analog sound signals that were then stored on cassettes&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:16&#34;&gt;&lt;a href=&#34;#fn:16&#34;&gt;16&lt;/a&gt;&lt;/sup&gt;. This made for a cost-effective and reliable
storage solution, albeit very slow. Our small data byte could be transferred at a rate of around 60-70 bytes per
second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:17&#34;&gt;&lt;a href=&#34;#fn:17&#34;&gt;17&lt;/a&gt;&lt;/sup&gt;. The cassettes could hold about 100 kilobytes per 30-minute side, with 2 sides per tape. For example,
you could fit about 2 of &lt;a href=&#34;https://blog.jessfraz.com/img/rick-roll.jpg&#34;&gt;these 55 KB images&lt;/a&gt;&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:18&#34;&gt;&lt;a href=&#34;#fn:18&#34;&gt;18&lt;/a&gt;&lt;/sup&gt; on one side of the cassette. The datasette also appeared in the
Commodore VIC-20 and Commodore 64.&lt;/p&gt;

&lt;h2 id=&#34;1978&#34;&gt;1978&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/laserdisk.jpg&#34; alt=&#34;laserdisc&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.youtube.com/watch?v=PRFQm0eUvzs&#34;&gt;https://www.youtube.com/watch?v=PRFQm0eUvzs&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s jump ahead a year to 1978 when the LaserDisc was introduced as “Discovision” by MCA and Philips.
Jaws was the first film sold on a LaserDisc in North America. The audio and video quality on a LaserDisc
was far better than that of its competitors, but too expensive for most consumers. As opposed to the VHS tape,
which consumers could use to record TV programs, the LaserDisc could not be written to. LaserDiscs
used analog video with analog FM stereo sound and pulse-code modulation&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:19&#34;&gt;&lt;a href=&#34;#fn:19&#34;&gt;19&lt;/a&gt;&lt;/sup&gt;, or PCM, digital audio. The
disks were 12 inches in diameter and composed of two single sided aluminum disks layered in plastic. The
LaserDisc is remembered today as being the foundation CDs and DVDs were built upon.&lt;/p&gt;

&lt;h2 id=&#34;1979&#34;&gt;1979&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/st506.jpg&#34; alt=&#34;st506&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.computerhistory.org/storageengine/seagate-5-25-inch-hdd-becomes-pc-standard/&#34;&gt;https://www.computerhistory.org/storageengine/seagate-5-25-inch-hdd-becomes-pc-standard/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A year later in 1979, Alan Shugart and Finis Conner founded the company Seagate Technology with the
idea of scaling down a hard disk drive to be the same size as a 5 ¼ inch floppy disk, which at the
time was the standard. Their first product, in 1980, was the Seagate ST506 hard disk drive, the
first hard disk drive for microcomputers. The disk held five megabytes of data, which at the time
was five times more than the standard floppy disk. The founders succeeded in their goal of scaling
down the drive to the size of a floppy disk drive at 5 ¼ inches. It was a rigid, metallic platter
coated on both sides with a thin layer of magnetic material to store data. Our data byte could be
transferred at a speed of 625 kilobytes per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:20&#34;&gt;&lt;a href=&#34;#fn:20&#34;&gt;20&lt;/a&gt;&lt;/sup&gt; onto the disk. That’s about &lt;a href=&#34;https://blog.jessfraz.com/img/rick-roll.gif&#34;&gt;a 625KB animated gif&lt;/a&gt;&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:21&#34;&gt;&lt;a href=&#34;#fn:21&#34;&gt;21&lt;/a&gt;&lt;/sup&gt; per second.&lt;/p&gt;

&lt;h2 id=&#34;1981&#34;&gt;1981&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/3.5-floppy.jpg&#34; alt=&#34;3.5 floppy&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://en.wikipedia.org/wiki/History_of_the_floppy_disk#/media/File:Floppy_disk_300_dpi.jpg&#34;&gt;https://en.wikipedia.org/wiki/History_of_the_floppy_disk#/media/File:Floppy_disk_300_dpi.jpg&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s fast forward a couple years to 1981 when Sony introduced the first 3 ½ inch floppy drives.
Hewlett-Packard was the first adopter of the technology in 1982 with their HP-150. This put the
3 ½ inch floppy disk on the map and gave it wide distribution in the industry&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:22&#34;&gt;&lt;a href=&#34;#fn:22&#34;&gt;22&lt;/a&gt;&lt;/sup&gt;. The disks were
single sided with a formatted capacity of 161.2 kilobytes and an unformatted capacity of
218.8 kilobytes. In 1982, the double sided version was made available, and the Microfloppy
Industry Committee (MIC), a consortium of 23 media companies, based a specification for the 3 ½ inch
floppy on Sony’s original designs, cementing the format into history as we know it&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:23&#34;&gt;&lt;a href=&#34;#fn:23&#34;&gt;23&lt;/a&gt;&lt;/sup&gt;. Our data
byte could now be stored on the early version of one of the most widely distributed storage
mediums: the 3 ½ inch floppy disk. Later a couple of 3 ½ inch floppy disks holding the contents
of The Oregon Trail would be paramount to my childhood.&lt;/p&gt;

&lt;h2 id=&#34;1984&#34;&gt;1984&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cd-rom.png&#34; alt=&#34;cd-rom&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://en.wikipedia.org/wiki/CD-ROM#/media/File:CD-ROM.png&#34;&gt;https://en.wikipedia.org/wiki/CD-ROM#/media/File:CD-ROM.png&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Shortly thereafter in 1984, the compact disk read-only memory (CD-ROM), holding 550 megabytes of
pre-recorded data, was announced by Sony and Philips. The format grew out of compact disk
digital audio, or CD-DA, which was used for distributing music. The CD-DA, developed by
Sony and Philips in 1982, had a capacity of 74 minutes. When Sony and Philips were
negotiating the standard for the CD-DA, legend has it that one of the four people involved insisted
it be able to hold all of Beethoven’s Ninth Symphony&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:24&#34;&gt;&lt;a href=&#34;#fn:24&#34;&gt;24&lt;/a&gt;&lt;/sup&gt;. The first product released on a CD-ROM was
Grolier’s Electronic Encyclopedia, which came out in 1985. The encyclopedia contained nine
million words, which took up only 12% of the available disk space of 553 mebibytes&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:25&#34;&gt;&lt;a href=&#34;#fn:25&#34;&gt;25&lt;/a&gt;&lt;/sup&gt;.
We would have more than enough room for the encyclopedia and our data byte. Shortly thereafter
in 1985, computer and electronics companies worked together to create a standard for the disks
so any computer would be able to access the information.&lt;/p&gt;

&lt;h2 id=&#34;1984-1&#34;&gt;1984&lt;/h2&gt;

&lt;p&gt;In 1984, Fujio Masuoka invented a new type of floating-gate memory, called flash memory, that
was capable of being erased and reprogrammed multiple times.&lt;/p&gt;

&lt;p&gt;Let’s go over a bit about floating-gate memory. Transistors are electrical gates that can be
switched on and off individually. Since each transistor can be in two distinct states (on or off),
it can store two different numbers: 0 and 1. The floating gate refers to a second gate added in the
middle of the transistor. This second gate is insulated by a thin oxide layer. These transistors use a
small voltage, applied to the gate of the transistor, to denote whether it is on or off, which in
turn translates to a 0 or 1.&lt;/p&gt;

&lt;p&gt;With a floating gate, when a suitable voltage is applied across the oxide layer, the electrons
tunnel through it and get stuck on the floating gate. Therefore even if the power is disconnected,
the electrons remain present on the floating gate. When no electrons are on the floating gate it
represents a 1, and when electrons are trapped on the floating gate it represents a 0. Reversing
this process and applying a suitable voltage across the oxide layer in the opposite direction
causes the electrons to tunnel off the floating gate and restore the transistor back to its
original state. Therefore, the cells are made programmable and non-volatile&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:26&#34;&gt;&lt;a href=&#34;#fn:26&#34;&gt;26&lt;/a&gt;&lt;/sup&gt;. Our data byte
could be programmed into the transistors as &lt;code&gt;01001010&lt;/code&gt;, with electrons trapped in the floating
gates to represent the zeros.&lt;/p&gt;
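&lt;p&gt;As a toy model (the class and function names here are invented purely for illustration), we can sketch a floating-gate cell where trapped electrons read as a 0 and an erased cell reads as a 1:&lt;/p&gt;

```python
# Toy model of floating-gate flash cells: an erased cell (no trapped
# electrons) reads as 1; programming a cell traps electrons on the
# floating gate and it reads as 0.

class FlashCell:
    def __init__(self):
        self.trapped_electrons = False  # erased state

    def program(self):
        # A voltage across the oxide layer tunnels electrons onto the gate.
        self.trapped_electrons = True

    def erase(self):
        # A reverse voltage tunnels the electrons back off the gate.
        self.trapped_electrons = False

    def read(self):
        return 0 if self.trapped_electrons else 1

def store_byte(bits):
    """Program a row of erased cells to hold the given bit string."""
    cells = [FlashCell() for _ in bits]
    for cell, bit in zip(cells, bits):
        if bit == "0":
            cell.program()  # trap electrons to represent a 0
    return cells

cells = store_byte("01001010")
print("".join(str(c.read()) for c in cells))  # prints 01001010
```

&lt;p&gt;Because the trapped charge stays put without power, the cells in this model, like the real thing, are non-volatile.&lt;/p&gt;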

&lt;p&gt;Masuoka’s design was a bit more affordable but less flexible than electrically erasable PROM (EEPROM)
since it required multiple groups of cells to be erased together, but this also accounted for its speed.
At the time, Masuoka was working for Toshiba. He ended up quitting Toshiba shortly after to become a
professor at Tohoku University because he was displeased with the company not rewarding him for his
work. He sued Toshiba, demanding compensation for his work, which settled in 2006 with a one-time
payment of ¥87m, equivalent to $758,000. This still seems light given how impactful flash memory
has been on the industry.&lt;/p&gt;

&lt;p&gt;While we are on the topic of flash memory, we might as well cover the difference between NOR and
NAND flash. We know by now from Masuoka that flash stores information in memory cells made up of
floating gate transistors. The names of the technologies are tied directly to the way the memory
cells are organized.&lt;/p&gt;

&lt;p&gt;In NOR flash, individual memory cells are connected in parallel, allowing random access. This
architecture enables the short read times required for the random access of microprocessor instructions.
NOR flash is ideal for lower-density applications that are mostly read-only. This is why most CPUs
typically load their firmware from NOR flash. Masuoka and colleagues presented the invention of NOR flash
in 1984 and NAND flash in 1987&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:27&#34;&gt;&lt;a href=&#34;#fn:27&#34;&gt;27&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;In contrast, NAND flash designers gave up the ability for random access in a tradeoff for a smaller
memory cell size. This also has the benefits of a smaller chip size and lower cost-per-bit. NAND flash’s
architecture consists of an array of eight memory transistors connected in series. This leads to
high storage density, smaller memory cell size, and faster write and erase times, since it can program blocks
of data at a time. This comes at the cost of having to erase and rewrite an entire block when data is not
written sequentially and the block already contains data&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:28&#34;&gt;&lt;a href=&#34;#fn:28&#34;&gt;28&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
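&lt;p&gt;A minimal sketch of that constraint (the class and page sizes here are invented for illustration): writing into erased pages is cheap, but changing data in a used block forces a read-modify-erase-write of the whole block.&lt;/p&gt;

```python
# Sketch of the NAND overwrite constraint: pages can only be written when
# erased, so updating a page in a used block means reading the block out,
# erasing the entire block, and writing everything back.

ERASED = None

class NandBlock:
    def __init__(self, pages=8):
        self.pages = [ERASED] * pages
        self.erase_count = 0  # real flash wears out after many erases

    def erase(self):
        self.pages = [ERASED] * len(self.pages)
        self.erase_count += 1

    def write_page(self, index, data):
        if self.pages[index] is not ERASED:
            # No in-place overwrite: read-modify-erase-write the block.
            snapshot = list(self.pages)
            snapshot[index] = data
            self.erase()
            for i, page in enumerate(snapshot):
                if page is not ERASED:
                    self.pages[i] = page
            return
        self.pages[index] = data

block = NandBlock()
block.write_page(0, "j")   # sequential writes into erased pages: cheap
block.write_page(1, "x")
block.write_page(0, "J")   # rewriting page 0 erases the whole block
print(block.erase_count)   # prints 1
```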

&lt;h2 id=&#34;1991&#34;&gt;1991&lt;/h2&gt;

&lt;p&gt;Let’s jump ahead to 1991, when SanDisk, at the time known as SunDisk, made a prototype solid state
disk (SSD) module for evaluation by IBM&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:29&#34;&gt;&lt;a href=&#34;#fn:29&#34;&gt;29&lt;/a&gt;&lt;/sup&gt;. The design combined a flash storage array of
non-volatile memory chips with an intelligent controller to automatically detect and correct
defective cells. The disk was 20 megabytes in a 2 ½ inch form factor and sold for around $1,000&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:30&#34;&gt;&lt;a href=&#34;#fn:30&#34;&gt;30&lt;/a&gt;&lt;/sup&gt;.
It wound up being used by IBM in the ThinkPad pen computer&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:31&#34;&gt;&lt;a href=&#34;#fn:31&#34;&gt;31&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&#34;1994&#34;&gt;1994&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/zipdisk.jpg&#34; alt=&#34;zipdisk&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://www.amazon.com/Iomega-100MB-Zip-Plus-Drive/dp/B003UI8POM&#34;&gt;https://www.amazon.com/Iomega-100MB-Zip-Plus-Drive/dp/B003UI8POM&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of my personal favorite storage mediums from my childhood was the Zip Disk. In 1994, Iomega
released the Zip Disk, a 100 megabyte cartridge in a 3 ½ inch form factor, just a bit thicker
than a standard 3 ½ inch floppy. Later versions of the disks could store up to 2 gigabytes.
These disks had the convenience of being as small as a floppy disk but with the ability to hold
a larger amount of data, which made them compelling. Our data byte could be written onto a Zip
disk at 1.4 megabytes per second. At the time, a 1.44 megabyte 3 ½ inch floppy would write at
about 16 kilobytes per second. In a Zip drive, the read/write heads are non-contact and fly above
the surface, similar to a hard drive but unlike other floppies. Due to reliability
problems and the affordability of CDs, Zip disks eventually became obsolete.&lt;/p&gt;

&lt;h2 id=&#34;1994-1&#34;&gt;1994&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/compactflash.png&#34; alt=&#34;compactflash&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://en.wikipedia.org/wiki/CompactFlash#/media/File:CompactFlash_Memory_Card.svg&#34;&gt;https://en.wikipedia.org/wiki/CompactFlash#/media/File:CompactFlash_Memory_Card.svg&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also in 1994, SanDisk introduced CompactFlash, which was widely adopted into consumer devices like
digital and video cameras. Like CD-ROMs, CompactFlash speed is based on “x”-ratings, such as 8x,
20x, 133x, etc. The maximum transfer rate is calculated based on the original audio CD transfer rate
of 150 kilobytes per second. This winds up looking like R = K ⨉ 150 kB/s, where R is the transfer
rate and K is the speed rating. So for 133x CompactFlash, our data byte would be written at 133 ⨉
150 kB/s or around 19,950 kB/s or 19.95 MB/s. The CompactFlash Association was founded in 1995 to
create an industry standard for flash-based memory cards&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:32&#34;&gt;&lt;a href=&#34;#fn:32&#34;&gt;32&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
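&lt;p&gt;The rating formula above is easy to check in a few lines of code:&lt;/p&gt;

```python
# Maximum transfer rate for "x"-rated media: R = K * 150 kB/s,
# where K is the speed rating.

BASE_RATE_KB_S = 150  # original audio CD transfer rate

def transfer_rate_kb_s(speed_rating):
    return speed_rating * BASE_RATE_KB_S

for k in (8, 20, 133):
    r = transfer_rate_kb_s(k)
    print(f"{k}x -> {r:,} kB/s ({r / 1000:.2f} MB/s)")
# 133x -> 19,950 kB/s (19.95 MB/s)
```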

&lt;h2 id=&#34;1997&#34;&gt;1997&lt;/h2&gt;

&lt;p&gt;A few years later in 1997, the compact disc rewritable (CD-RW) was introduced. This optical disc was
used for data storage, as well as backing up and transferring files to various devices. CD-RWs can
only be rewritten about 1,000 times, which, at the time, was not a limiting factor since users
rarely overwrote data on a single disc that many times.&lt;/p&gt;

&lt;p&gt;CD-RWs are based on phase change technology. During a phase change of a given medium, certain
properties of the medium change. In the case of CD-RWs, phase shifts in a special compound,
composed of silver, indium, antimony, and tellurium, cause &amp;ldquo;reflecting lands&amp;rdquo; and &amp;ldquo;non-reflecting bumps&amp;rdquo;,
each representing a 0 or 1. When the compound is in a crystalline state, it is translucent,
which indicates a 1. When the compound is melted into an amorphous state, it becomes opaque and
non-reflective, which indicates a 0&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:33&#34;&gt;&lt;a href=&#34;#fn:33&#34;&gt;33&lt;/a&gt;&lt;/sup&gt;. We could write our data byte &lt;code&gt;01001010&lt;/code&gt; as &amp;ldquo;non-reflecting bumps&amp;rdquo;
and &amp;ldquo;reflecting lands&amp;rdquo; this way.&lt;/p&gt;

&lt;p&gt;DVDs eventually overtook much of the market share from CD-RWs.&lt;/p&gt;

&lt;h2 id=&#34;1999&#34;&gt;1999&lt;/h2&gt;

&lt;p&gt;Let’s fast forward to 1999, when IBM introduced the smallest hard drives in the world at
the time: the IBM microdrive in 170 MB and 340 MB capacities. These were small hard disks,
1 inch in size, designed to fit into CompactFlash Type II slots. The intent was to create a
device to be used like CompactFlash but with more storage capacity. However, these were soon
replaced by USB flash drives, covered next, and larger CompactFlash cards once they became available.
Like other hard drives, microdrives were mechanical and contained small, spinning disk platters.&lt;/p&gt;

&lt;h2 id=&#34;2000&#34;&gt;2000&lt;/h2&gt;

&lt;p&gt;A year later in 2000, USB flash drives were introduced. These drives consisted of flash memory encased
in a small form factor with a USB interface. Depending on the version of the USB interface used, the speed
varies. USB 1.1 is limited to 1.5 megabytes per second, whereas USB 2.0 can handle 35 megabytes per second,
and USB 3.0 can handle 625 megabytes per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:34&#34;&gt;&lt;a href=&#34;#fn:34&#34;&gt;34&lt;/a&gt;&lt;/sup&gt;. The first USB 3.1 type-C drives were announced in March 2015
and had read/write speeds of 530 megabytes per second&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:35&#34;&gt;&lt;a href=&#34;#fn:35&#34;&gt;35&lt;/a&gt;&lt;/sup&gt;. Unlike floppy and optical disks, USB devices are harder
to scratch but still deliver the same use cases of data storage and transferring and backing up files.
Because of this, drives for floppy and optical disks have since faded out of existence in favor of USB ports.&lt;/p&gt;
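&lt;p&gt;To put those rates in perspective, here is a rough calculation of how long a 700 megabyte file (about one CD’s worth of data) would take to copy at each interface’s maximum rate; real-world throughput would be lower:&lt;/p&gt;

```python
# Approximate copy times at the USB transfer rates mentioned above.
usb_rates_mb_s = {
    "USB 1.1": 1.5,
    "USB 2.0": 35,
    "USB 3.0": 625,
}

file_size_mb = 700
for version, rate in usb_rates_mb_s.items():
    print(f"{version}: {file_size_mb / rate:,.1f} seconds")
```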

&lt;h2 id=&#34;2005&#34;&gt;2005&lt;/h2&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/hard-disk.jpg&#34; alt=&#34;hard-disk&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://en.wikipedia.org/wiki/Hard_disk_drive#/media/File:Laptop-hard-drive-exposed.jpg&#34;&gt;https://en.wikipedia.org/wiki/Hard_disk_drive#/media/File:Laptop-hard-drive-exposed.jpg&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In 2005, hard disk drive (HDD) manufacturers started shipping products using
&lt;a href=&#34;https://youtu.be/xb_PyKuI7II&#34;&gt;perpendicular magnetic recording&lt;/a&gt;,
or PMR. Quite interestingly, this happened at the same time Apple announced the iPod Nano, which used
flash as opposed to the 1 inch hard drives in the iPod Mini, causing a bit of an industry hoohaw&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:36&#34;&gt;&lt;a href=&#34;#fn:36&#34;&gt;36&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;A typical hard drive contains one or more rigid disks coated with a magnetically sensitive film consisting
of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk,
much like a record player and a record except a record needle is in physical contact with the record.
As the platters spin, the air in contact with them creates a slight breeze. Just like air on an airplane
wing generates lift, the air generates lift on the head’s airfoil&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:37&#34;&gt;&lt;a href=&#34;#fn:37&#34;&gt;37&lt;/a&gt;&lt;/sup&gt;. The write-head rapidly flips the
magnetization of one magnetic region of grains so that its magnetic pole points up or down, to denote a 1 or a 0.&lt;/p&gt;

&lt;p&gt;The predecessor to PMR was longitudinal magnetic recording, or LMR. PMR can deliver more than three
times the storage density of LMR. The key difference of PMR versus LMR is that the grain structure
and the magnetic orientation of the stored data of PMR media is columnar instead of longitudinal.
PMR has better thermal stability and improved signal-to-noise ratio (SNR) due to better grain separation
and uniformity. It also benefits from better writability due to stronger head fields and better magnetic
alignment of the media. Like LMR, PMR’s fundamental limitations are based on the thermal stability of
magnetically written bits of data and the need to have sufficient SNR to read back written information.&lt;/p&gt;

&lt;h2 id=&#34;2007&#34;&gt;2007&lt;/h2&gt;

&lt;p&gt;Let’s jump ahead to 2007, when the first 1 TB hard disk drive from Hitachi Global Storage Technologies
was announced. The Hitachi Deskstar 7K1000 used five 3.5 inch 200 gigabyte platters and rotated at
7,200 RPM. This is in stark contrast to the world&amp;rsquo;s first HDD, the IBM RAMAC 350, which had a storage
capacity that was approximately 3.75 megabytes. Oh how far we have come in 51 years! But wait, there&amp;rsquo;s more.&lt;/p&gt;

&lt;h2 id=&#34;2009&#34;&gt;2009&lt;/h2&gt;

&lt;p&gt;In 2009, technical work began on non-volatile memory express, or NVMe&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:38&#34;&gt;&lt;a href=&#34;#fn:38&#34;&gt;38&lt;/a&gt;&lt;/sup&gt;. Non-volatile memory
(NVM) is a type of memory that has persistence, in contrast to volatile memory which needs constant
power to retain data. NVMe filled a need for a scalable host controller interface for peripheral
component interconnect express (PCIe) based solid state drives&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:39&#34;&gt;&lt;a href=&#34;#fn:39&#34;&gt;39&lt;/a&gt;&lt;/sup&gt;, hence the name NVMe. Over 90 companies
were a part of the working group to develop the design. This was all based on prior work to define the
non-volatile memory host controller interface specification (NVMHCIS). Opening up a modern server would
likely result in finding some NVMe drives. The best NVMe drives today can do about 3,500 megabytes per
second read and 3,300 megabytes per second write&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:40&#34;&gt;&lt;a href=&#34;#fn:40&#34;&gt;40&lt;/a&gt;&lt;/sup&gt;. For the data byte we started with, the character &lt;code&gt;j&lt;/code&gt;,
that is extremely fast compared to a couple of minutes to hand weave rope memory for the Apollo Guidance Computer.&lt;/p&gt;

&lt;h2 id=&#34;today-and-the-future&#34;&gt;Today and the future&lt;/h2&gt;

&lt;h3 id=&#34;storage-class-memory-scm&#34;&gt;Storage class memory (SCM)&lt;/h3&gt;

&lt;p&gt;Now that we have traveled through time a bit (ha!), let’s take a look at the state of the art for
storage class memory (SCM) today. SCM, like NVM, is persistent, but SCM goes further by also providing
performance better than or comparable to primary memory as well as byte addressability&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:41&#34;&gt;&lt;a href=&#34;#fn:41&#34;&gt;41&lt;/a&gt;&lt;/sup&gt;. SCM aims to
address some of the problems faced by caches today such as the low density of static random access memory
(SRAM). With dynamic random access memory (DRAM)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:42&#34;&gt;&lt;a href=&#34;#fn:42&#34;&gt;42&lt;/a&gt;&lt;/sup&gt;, we can get better density, but this comes at a cost of
slower access times. DRAM also suffers from requiring constant power to refresh memory. Let’s break this
down a bit. Power is required since the electric charge on the capacitors leaks off little by little,
meaning without intervention, the data on the chip would soon be lost. To prevent this leakage, DRAM
requires an external memory refresh circuit which periodically rewrites the data in the capacitors,
restoring them to their original charge.&lt;/p&gt;

&lt;p&gt;To solve the problems with density and power leakage, there are a few SCM technologies in development:
phase change memory (PCM), spin-transfer torque random access memory (STT-RAM), and resistive
random access memory (ReRAM). One thing that is nice about all these technologies is their ability
to function as multi-level cells, or MLCs. This means they can store more than one bit of information,
compared to single-level cells (SLCs) which can store only one bit per memory cell, or element.
Typically, a memory cell consists of one metal-oxide-semiconductor field-effect transistor (MOSFET).
MLCs reduce the number of MOSFETs required to store the same amount of data as SLCs, making them more dense or
smaller to deliver the same amount of storage as technologies using SLCs. Let’s go over how each of these SCM
technologies work.&lt;/p&gt;

&lt;h4 id=&#34;phase-change-memory-pcm&#34;&gt;Phase change memory (PCM)&lt;/h4&gt;

&lt;p&gt;Earlier we went over how phase change works for CD-RWs. PCM is similar. Its phase change material is
typically Ge-Sb-Te, also known as GST, which can exist in two different states: amorphous and crystalline.
The amorphous state has a higher resistance, denoting a 0, while the crystalline state has a lower
resistance, denoting a 1.
By assigning data values to intermediate resistances, PCM can be used to store multiple states as an MLC&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:43&#34;&gt;&lt;a href=&#34;#fn:43&#34;&gt;43&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
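&lt;p&gt;A small sketch of the MLC idea, mapping a measured cell resistance to two bits. The resistance thresholds below are hypothetical, chosen only to show how intermediate levels encode extra bits per cell:&lt;/p&gt;

```python
# (threshold_ohms, bits) pairs from the lowest (most crystalline) to the
# highest (most amorphous) resistance. Thresholds are made-up values.
LEVELS = [
    (1_000, "11"),          # fully crystalline: lowest resistance
    (10_000, "10"),
    (100_000, "01"),
    (float("inf"), "00"),   # fully amorphous: highest resistance
]

def read_mlc(resistance_ohms):
    """Return the two-bit value for a measured cell resistance."""
    for threshold, bits in LEVELS:
        if resistance_ohms <= threshold:
            return bits

print(read_mlc(500))      # prints 11
print(read_mlc(50_000))   # prints 01
```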

&lt;h4 id=&#34;spin-transfer-torque-random-access-memory-stt-ram&#34;&gt;Spin-transfer torque random access memory (STT-RAM)&lt;/h4&gt;

&lt;p&gt;STT-RAM consists of two ferromagnetic, permanent magnetic, layers separated by a dielectric,
meaning an insulator that can transmit electric force without conduction. It stores bits of data based on
differences in magnetic directions. One magnetic layer, called the reference layer, has a fixed magnetic
direction, while the other magnetic layer, called the free layer, has a magnetic direction that is controlled
by passing current. For a 1, the magnetization directions of the two layers are aligned. For a 0, the two layers
have opposing magnetic directions.&lt;/p&gt;

&lt;h4 id=&#34;resistive-random-access-memory-reram&#34;&gt;Resistive random access memory (ReRAM)&lt;/h4&gt;

&lt;p&gt;A ReRAM cell consists of two metal electrodes separated by a metal oxide layer. We can think of this as slightly
similar to Masuoka’s original flash memory design, where electrons would tunnel through the oxide layer and get
stuck in the floating gate, or vice versa. However, with ReRAM, the state of the cell is determined based on the
concentration of oxygen vacancies in the metal oxide layer.&lt;/p&gt;

&lt;p&gt;While these technologies are promising, they still have downsides. PCM and STT-RAM have high write latencies.
PCM’s latencies are ten times that of DRAM, while STT-RAM’s are ten times that of SRAM. PCM and ReRAM have
a limit on write endurance before a hard error occurs, meaning a memory element gets stuck at a particular value&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:44&#34;&gt;&lt;a href=&#34;#fn:44&#34;&gt;44&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;In August 2015, Intel announced Optane, their product built on 3D XPoint, pronounced 3D cross-point&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:45&#34;&gt;&lt;a href=&#34;#fn:45&#34;&gt;45&lt;/a&gt;&lt;/sup&gt;. Optane claims
performance 1,000 times faster than NAND SSDs, while being four to five times the price
of flash memory. Optane is proof that storage class memory is not just experimental. It will be interesting to
watch how these technologies evolve.&lt;/p&gt;

&lt;h3 id=&#34;hard-disk-drives-hdds&#34;&gt;Hard disk drives (HDDs)&lt;/h3&gt;

&lt;h4 id=&#34;helium-hard-disk-drive-hhdd&#34;&gt;Helium hard disk drive (HHDD)&lt;/h4&gt;

&lt;p&gt;A helium drive is a high capacity hard disk drive (HDD) that is helium-filled and hermetically sealed during
manufacturing. Like other hard disks, as we covered earlier, it looks much like a record player with a
magnetic-coated platter rotating. Typical hard disk drives just have air inside the cavity; however,
that air causes drag on the spin of the platters.&lt;/p&gt;

&lt;p&gt;Helium balloons float, so we know helium is lighter than air. Helium is, in fact, 1/7th the density of air,
which reduces the amount of drag on the spin of the platters and, in turn, the amount of energy
required for the disks to spin. However, this was actually a secondary benefit; the primary benefit of helium
was that it allowed packing seven platters into the same form factor that would typically only hold five. Attempting
this with air-filled drives would cause turbulence. If we remember the airplane wing analogy
from earlier, this ties in perfectly: since helium reduces drag, it eliminates the turbulence.&lt;/p&gt;

&lt;p&gt;What we also know about balloons is that after a few days, helium-filled balloons start to sink because the
helium is escaping them. The same could be said for these drives. It took years before manufacturers had created
a container that prevented the helium from escaping for the life of the drive. Backblaze
experimented and found that helium hard drives had an annualized error rate of 1.03%, while
standard hard drives came in at 1.06%. Of course, that is so small a difference it is hard to conclude much from it&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:46&#34;&gt;&lt;a href=&#34;#fn:46&#34;&gt;46&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;A helium-filled form factor can encapsulate a hard disk drive that uses PMR, which we went over
above, or a microwave-assisted magnetic recording (MAMR) or heat-assisted magnetic
recording (HAMR) drive. You can pair any magnetic storage technology with helium instead of air.
In 2014, HGST combined two cutting-edge technologies in their 10TB helium hard disk, which used host-managed
shingled magnetic recording, or SMR. Let’s go over a bit about SMR, then we can cover MAMR and HAMR.&lt;/p&gt;

&lt;h4 id=&#34;shingled-magnetic-recording-smr&#34;&gt;Shingled magnetic recording (SMR)&lt;/h4&gt;

&lt;p&gt;We went over perpendicular magnetic recording (PMR) earlier which was SMR’s predecessor. In contrast to PMR,
SMR writes new tracks that overlap part of the previously written magnetic track, which in turn makes the
previous track narrower, allowing for higher track density. The technology&amp;rsquo;s namesake stems from the fact
that the overlapping tracks are much like that of roof shingles.&lt;/p&gt;

&lt;p&gt;SMR results in a much more complex writing process since writing to one track winds up overwriting an adjacent
track. This doesn&amp;rsquo;t come into play when a disk platter is empty and data is sequential. But once you are
writing to a series of tracks that already contain data, this process is destructive to existing adjacent
data. If an adjacent track contains valid data it must be rewritten. This is quite similar to NAND flash as
we covered earlier.&lt;/p&gt;

&lt;p&gt;Device-managed SMR devices hide this complexity by having the device firmware manage it resulting in an
interface like any other hard disk you might encounter. On the other hand, host-managed SMR devices rely
on the operating system to know how to handle the complexity of the drive.&lt;/p&gt;
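&lt;p&gt;The rewrite cascade described above can be modeled with a toy band of shingled tracks (all names and sizes here are invented): each write clobbers part of the next track, so sequential writes are cheap, but an in-place update drags every following track along with it.&lt;/p&gt;

```python
# Toy model of shingled magnetic recording: writing track i overlaps and
# destroys track i+1, so rewriting mid-band forces rewriting the tail.

class SmrBand:
    def __init__(self, tracks=4):
        self.tracks = [None] * tracks
        self.track_writes = 0

    def _raw_write(self, index, data):
        self.tracks[index] = data
        self.track_writes += 1
        if index + 1 < len(self.tracks):
            self.tracks[index + 1] = None  # overlap clobbers the next track

    def write(self, index, data):
        # Save the tail first: each rewrite clobbers its neighbor in turn.
        tail = self.tracks[index + 1:]
        self._raw_write(index, data)
        for offset, old in enumerate(tail, start=index + 1):
            if old is not None:
                self._raw_write(offset, old)

band = SmrBand()
for i in range(4):
    band.write(i, f"track-{i}")    # sequential fill: one write per track
before = band.track_writes         # 4 writes so far
band.write(1, "updated")           # updating track 1 rewrites tracks 2 and 3
print(band.track_writes - before)  # prints 3
```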

&lt;p&gt;Seagate started shipping SMR drives in 2013 claiming a 25% greater density than PMR&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:47&#34;&gt;&lt;a href=&#34;#fn:47&#34;&gt;47&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h4 id=&#34;microwave-assisted-magnetic-recording-mamr&#34;&gt;Microwave-assisted magnetic recording (MAMR)&lt;/h4&gt;

&lt;p&gt;MAMR is an energy-assisted magnetic storage technology, like HAMR which we will cover next, that uses 20-40 GHz
frequencies to bombard the disk platter with a circular microwave field, lowering its coercivity, meaning
the resistance of the platter’s magnetic material to changes in magnetization is reduced. We learned above that
changes in magnetization of a region of the platter are used to denote a 0 or a 1, so this lower coercivity allows
the data to be written much more densely on the disk. The core of
this new technology is the spin torque oscillator used to generate the microwave field without sacrificing
reliability.&lt;/p&gt;

&lt;p&gt;Western Digital, also known as WD, unveiled this technology in 2017&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:48&#34;&gt;&lt;a href=&#34;#fn:48&#34;&gt;48&lt;/a&gt;&lt;/sup&gt;. Toshiba followed shortly after in 2018&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:49&#34;&gt;&lt;a href=&#34;#fn:49&#34;&gt;49&lt;/a&gt;&lt;/sup&gt;.
While WD and Toshiba are busy pursuing MAMR, Seagate is betting on HAMR.&lt;/p&gt;

&lt;h4 id=&#34;heat-assisted-magnetic-recording-hamr&#34;&gt;Heat-assisted magnetic recording (HAMR)&lt;/h4&gt;

&lt;p&gt;HAMR is an energy-assisted magnetic storage technology for greatly increasing the amount of data that can be
stored on a magnetic device, such as a hard disk drive, by using heat delivered by a laser to help write data
onto the surface of a hard disk platter. The heat causes the data bits to be much closer together on the disk
platter, which allows greater data density and capacity.&lt;/p&gt;

&lt;p&gt;This technology is quite difficult to achieve. A 200 mW laser quickly heats a teeny region of the platter to 750 °F (400 °C)
before the data is written, while not interfering with or corrupting the rest of the data on the disk&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:50&#34;&gt;&lt;a href=&#34;#fn:50&#34;&gt;50&lt;/a&gt;&lt;/sup&gt;.
The process of heating, writing the data, and cooling must be completed in less than a nanosecond. These
challenges required the development of nano-scale surface plasmons, also known as a surface guided laser,
instead of direct laser-based heating, as well as new types of glass platters and heat-control coatings that
tolerate rapid spot-heating without damaging the recording head or any nearby data, among various other technical
challenges that needed to be overcome&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:51&#34;&gt;&lt;a href=&#34;#fn:51&#34;&gt;51&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Seagate first demonstrated this technology, despite many skeptics, in 2013&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:52&#34;&gt;&lt;a href=&#34;#fn:52&#34;&gt;52&lt;/a&gt;&lt;/sup&gt;. They started shipping the
first drives in 2018&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:53&#34;&gt;&lt;a href=&#34;#fn:53&#34;&gt;53&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&#34;end-of-tape-rewind&#34;&gt;End of tape, rewind&lt;/h2&gt;

&lt;p&gt;We started this article in 1951 and are concluding after looking at the future of storage technology.
Storage has changed a lot over time, from paper tape, to metal tape, magnetic tape, rope memory,
spinning disks, optical disks, flash, and others. Progress has led to faster, smaller, and more
performant devices for storing data.&lt;/p&gt;

&lt;p&gt;If we compare NVMe to the 1951 UNISERVO metal tape, NVMe can read 486,111% more digits per second.
If we compare NVMe to my childhood favorite in 1994, Zip disks, NVMe can read 213,623% more digits per second.&lt;/p&gt;

&lt;p&gt;One thing that remains true is the storing of 0s and 1s. The means by which we do that vary greatly.
I hope the next time you burn a CD-RW with a mix of songs for a friend, or store home videos in an
Optical Disc Archive&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:54&#34;&gt;&lt;a href=&#34;#fn:54&#34;&gt;54&lt;/a&gt;&lt;/sup&gt;, you think about how the non-reflective bumps translate to a 0 and the reflective
lands of the disk translate to a 1. Or if you are creating a mixtape on a cassette, remember that those
are very closely related to the Datasette used in the Commodore PET. Lastly, remember to be kind and rewind&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:55&#34;&gt;&lt;a href=&#34;#fn:55&#34;&gt;55&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Thank you to &lt;a href=&#34;https://twitter.com/rmustacc&#34;&gt;Robert Mustacchi&lt;/a&gt; and &lt;a href=&#34;https://twitter.com/kc8apf&#34;&gt;Rick Altherr&lt;/a&gt; for
tidbits (I can&amp;rsquo;t help myself) throughout this article!&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;http://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/06-07/102632284.pdf&#34;&gt;http://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/06-07/102632284.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://archive.org/stream/byte-magazine-1977-02/1977_02_BYTE_02-02_Usable_Systems#page/n145/mode/2up&#34;&gt;https://archive.org/stream/byte-magazine-1977-02/1977_02_BYTE_02-02_Usable_Systems#page/n145/mode/2up&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://web.archive.org/web/20170403130829/http://www.bobbemer.com/BYTE.HTM&#34;&gt;https://web.archive.org/web/20170403130829/http://www.bobbemer.com/BYTE.HTM&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;&lt;a href=&#34;https://www.computerhistory.org/storageengine/tape-unit-developed-for-data-storage/&#34;&gt;https://www.computerhistory.org/storageengine/tape-unit-developed-for-data-storage/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;https://www.ibm.com/ibm/history/exhibits/701/701_1415bx26.html&#34;&gt;https://www.ibm.com/ibm/history/exhibits/701/701_1415bx26.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://www.ibm.com/ibm/history/exhibits/storage/storage_fifty.html&#34;&gt;https://www.ibm.com/ibm/history/exhibits/storage/storage_fifty.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;&lt;a href=&#34;https://spectrum.ieee.org/computing/hardware/why-the-future-of-data-storage-is-still-magnetic-tape&#34;&gt;https://spectrum.ieee.org/computing/hardware/why-the-future-of-data-storage-is-still-magnetic-tape&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:7&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;&lt;a href=&#34;https://www.ibm.com/ibm/history/exhibits/650/650_pr2.html&#34;&gt;https://www.ibm.com/ibm/history/exhibits/650/650_pr2.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:8&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:9&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=zOD1umMX2s8&#34;&gt;https://www.youtube.com/watch?v=zOD1umMX2s8&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:9&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:10&#34;&gt;&lt;a href=&#34;https://www.ibm.com/ibm/history/ibm100/us/en/icons/ramac/&#34;&gt;https://www.ibm.com/ibm/history/ibm100/us/en/icons/ramac/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:10&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:11&#34;&gt;&lt;a href=&#34;https://www.pdp8.net/tu56/tu56.shtml&#34;&gt;https://www.pdp8.net/tu56/tu56.shtml&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:11&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:12&#34;&gt;&lt;a href=&#34;https://www.ibm.com/ibm/history/ibm100/us/en/icons/floppy/&#34;&gt;https://www.ibm.com/ibm/history/ibm100/us/en/icons/floppy/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:12&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:13&#34;&gt;&lt;a href=&#34;https://archive.org/details/ibms360early370s0000pugh/page/513&#34;&gt;https://archive.org/details/ibms360early370s0000pugh/page/513&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:13&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:14&#34;&gt;&lt;a href=&#34;https://web.archive.org/web/20100707221048/http://archive.computerhistory.org/resources/access/text/Oral_History/102657926.05.01.acc.pdf&#34;&gt;https://web.archive.org/web/20100707221048/http://archive.computerhistory.org/resources/access/text/Oral_History/102657926.05.01.acc.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:14&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:15&#34;&gt;&lt;a href=&#34;https://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/apollo/public/visual3.htm&#34;&gt;https://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/apollo/public/visual3.htm&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:15&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:16&#34;&gt;&lt;a href=&#34;http://wav-prg.sourceforge.net/tape.html&#34;&gt;http://wav-prg.sourceforge.net/tape.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:16&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:17&#34;&gt;&lt;a href=&#34;https://www.c64-wiki.com/wiki/Datassette&#34;&gt;https://www.c64-wiki.com/wiki/Datassette&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:17&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:18&#34;&gt;You will be rick rolled by a still photo.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:18&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:19&#34;&gt;&lt;a href=&#34;https://tools.ietf.org/html/rfc4856#page-17&#34;&gt;https://tools.ietf.org/html/rfc4856#page-17&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:19&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:20&#34;&gt;&lt;a href=&#34;https://www.pcmag.com/encyclopedia/term/st506#:~:text=ST506,using%20the%20MFM%20encoding%20method&#34;&gt;https://www.pcmag.com/encyclopedia/term/st506#:~:text=ST506,using%20the%20MFM%20encoding%20method&lt;/a&gt;.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:20&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:21&#34;&gt;You will be rick rolled.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:21&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:22&#34;&gt;&lt;a href=&#34;https://www.jstor.org/stable/24530873?seq=1&#34;&gt;https://www.jstor.org/stable/24530873?seq=1&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:22&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:23&#34;&gt;&lt;a href=&#34;https://www.americanradiohistory.com/hd2/IDX-Consumer/Archive-Byte-IDX/IDX/80s/82-83/Byte-1983-09-OCR-Page-0169.pdf&#34;&gt;https://www.americanradiohistory.com/hd2/IDX-Consumer/Archive-Byte-IDX/IDX/80s/82-83/Byte-1983-09-OCR-Page-0169.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:23&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:24&#34;&gt;&lt;a href=&#34;https://www.wired.com/2010/12/1216beethoven-birthday-cd-length/&#34;&gt;https://www.wired.com/2010/12/1216beethoven-birthday-cd-length/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:24&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:25&#34;&gt;&lt;a href=&#34;https://books.google.co.uk/books?id=RTwQAQAAMAAJ&#34;&gt;https://books.google.co.uk/books?id=RTwQAQAAMAAJ&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:25&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:26&#34;&gt;&lt;a href=&#34;https://www.economist.com/technology-quarterly/2006/03/11/not-just-a-flash-in-the-pan&#34;&gt;https://www.economist.com/technology-quarterly/2006/03/11/not-just-a-flash-in-the-pan&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:26&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:27&#34;&gt;&lt;a href=&#34;https://ieeexplore.ieee.org/document/1487443&#34;&gt;https://ieeexplore.ieee.org/document/1487443&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:27&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:28&#34;&gt;&lt;a href=&#34;http://aturing.umcs.maine.edu/~meadow/courses/cos335/Toshiba%20NAND_vs_NOR_Flash_Memory_Technology_Overviewt.pdf&#34;&gt;http://aturing.umcs.maine.edu/~meadow/courses/cos335/Toshiba%20NAND_vs_NOR_Flash_Memory_Technology_Overviewt.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:28&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:29&#34;&gt;&lt;a href=&#34;https://www.computerhistory.org/storageengine/solid-state-drive-module-demonstrated/&#34;&gt;https://www.computerhistory.org/storageengine/solid-state-drive-module-demonstrated/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:29&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:30&#34;&gt;&lt;a href=&#34;http://meseec.ce.rit.edu/551-projects/spring2017/2-6.pdf&#34;&gt;http://meseec.ce.rit.edu/551-projects/spring2017/2-6.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:30&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:31&#34;&gt;&lt;a href=&#34;https://www.westerndigital.com/company/innovations/history&#34;&gt;https://www.westerndigital.com/company/innovations/history&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:31&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:32&#34;&gt;&lt;a href=&#34;https://www.compactflash.org/&#34;&gt;https://www.compactflash.org/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:32&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:33&#34;&gt;&lt;a href=&#34;https://computer.howstuffworks.com/cd-burner8.htm&#34;&gt;https://computer.howstuffworks.com/cd-burner8.htm&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:33&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:34&#34;&gt;&lt;a href=&#34;https://www.diffen.com/difference/USB_1.0_vs_USB_2.0&#34;&gt;https://www.diffen.com/difference/USB_1.0_vs_USB_2.0&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:34&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:35&#34;&gt;&lt;a href=&#34;https://web.archive.org/web/20161220102924/http://www.usb.org/developers/presentations/USB_DevDays_Hong_Kong_2016_-_USB_Type-C.pdf&#34;&gt;https://web.archive.org/web/20161220102924/http://www.usb.org/developers/presentations/USB_DevDays_Hong_Kong_2016_-_USB_Type-C.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:35&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:36&#34;&gt;&lt;a href=&#34;https://www.eetimes.com/hard-drives-go-perpendicular/#&#34;&gt;https://www.eetimes.com/hard-drives-go-perpendicular/#&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:36&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:37&#34;&gt;&lt;a href=&#34;https://books.google.com/books?id=S90OaKQ-IzMC&amp;amp;pg=PA590&amp;amp;lpg=PA590&amp;amp;dq=heads+disk+airfoils&amp;amp;source=bl&amp;amp;ots=7VVuhw6mgm&amp;amp;sig=ACfU3U0PXCehcs7dKI5IhDGbRMZvqsgeHg&amp;amp;hl=en&amp;amp;sa=X&amp;amp;ved=2ahUKEwi82fm9_onoAhUIr54KHR6-BtUQ6AEwAHoECAgQAQ#v=onepage&amp;amp;q=heads%20disk%20airfoils&amp;amp;f=false&#34;&gt;https://books.google.com/books?id=S90OaKQ-IzMC&amp;amp;pg=PA590&amp;amp;lpg=PA590&amp;amp;dq=heads+disk+airfoils&amp;amp;source=bl&amp;amp;ots=7VVuhw6mgm&amp;amp;sig=ACfU3U0PXCehcs7dKI5IhDGbRMZvqsgeHg&amp;amp;hl=en&amp;amp;sa=X&amp;amp;ved=2ahUKEwi82fm9_onoAhUIr54KHR6-BtUQ6AEwAHoECAgQAQ#v=onepage&amp;amp;q=heads%20disk%20airfoils&amp;amp;f=false&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:37&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:38&#34;&gt;&lt;a href=&#34;https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2013/20130813_A12_Onufryk.pdf&#34;&gt;https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2013/20130813_A12_Onufryk.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:38&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:39&#34;&gt;&lt;a href=&#34;https://nvmexpress.org/wp-content/uploads/2013/04/NVM_whitepaper.pdf&#34;&gt;https://nvmexpress.org/wp-content/uploads/2013/04/NVM_whitepaper.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:39&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:40&#34;&gt;&lt;a href=&#34;https://www.pcgamer.com/best-nvme-ssd/&#34;&gt;https://www.pcgamer.com/best-nvme-ssd/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:40&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:41&#34;&gt;&lt;a href=&#34;https://ieeexplore.ieee.org/document/5388605&#34;&gt;https://ieeexplore.ieee.org/document/5388605&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:41&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:42&#34;&gt;&lt;a href=&#34;https://arxiv.org/pdf/1909.12221.pdf&#34;&gt;https://arxiv.org/pdf/1909.12221.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:42&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:43&#34;&gt;&lt;a href=&#34;https://ieeexplore.ieee.org/document/5388621&#34;&gt;https://ieeexplore.ieee.org/document/5388621&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:43&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:44&#34;&gt;&lt;a href=&#34;https://arxiv.org/pdf/1909.12221.pdf&#34;&gt;https://arxiv.org/pdf/1909.12221.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:44&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:45&#34;&gt;&lt;a href=&#34;https://www.anandtech.com/show/9541/intel-announces-optane-storage-brand-for-3d-xpoint-products&#34;&gt;https://www.anandtech.com/show/9541/intel-announces-optane-storage-brand-for-3d-xpoint-products&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:45&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:46&#34;&gt;&lt;a href=&#34;https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/&#34;&gt;https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:46&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:47&#34;&gt;&lt;a href=&#34;https://www.anandtech.com/show/7290/seagate-to-ship-5tb-hdd-in-2014-using-shingled-magnetic-recording/&#34;&gt;https://www.anandtech.com/show/7290/seagate-to-ship-5tb-hdd-in-2014-using-shingled-magnetic-recording/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:47&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:48&#34;&gt;&lt;a href=&#34;https://www.storagereview.com/news/wd-unveils-its-microwave-assisted-magnetic-recording-technology&#34;&gt;https://www.storagereview.com/news/wd-unveils-its-microwave-assisted-magnetic-recording-technology&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:48&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:49&#34;&gt;&lt;a href=&#34;https://www.theregister.co.uk/2018/12/07/toshiba_goes_to_mamr/&#34;&gt;https://www.theregister.co.uk/2018/12/07/toshiba_goes_to_mamr/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:49&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:50&#34;&gt;&lt;a href=&#34;https://fstoppers.com/originals/hamr-and-mamr-technologies-will-unlock-hard-drive-capacity-year-326328&#34;&gt;https://fstoppers.com/originals/hamr-and-mamr-technologies-will-unlock-hard-drive-capacity-year-326328&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:50&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:51&#34;&gt;&lt;a href=&#34;https://www.seagate.com/www-content/ti-dm/tech-insights/en-us/docs/TP707-1-1712US_HAMR.pdf&#34;&gt;https://www.seagate.com/www-content/ti-dm/tech-insights/en-us/docs/TP707-1-1712US_HAMR.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:51&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:52&#34;&gt;&lt;a href=&#34;https://www.computerworld.com/article/2485341/seagate--tdk-show-off-hamr-to-jam-more-data-into-hard-drives.html&#34;&gt;https://www.computerworld.com/article/2485341/seagate--tdk-show-off-hamr-to-jam-more-data-into-hard-drives.html&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:52&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:53&#34;&gt;&lt;a href=&#34;https://blog.seagate.com/craftsman-ship/hamr-next-leap-forward-now/&#34;&gt;https://blog.seagate.com/craftsman-ship/hamr-next-leap-forward-now/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:53&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:54&#34;&gt;Yup, you heard that right: &lt;a href=&#34;https://pro.sony/en_GB/products/optical-disc&#34;&gt;https://pro.sony/en_GB/products/optical-disc&lt;/a&gt;.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:54&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:55&#34;&gt;This is a tribute to Blockbuster but there are still open formats for using tape today: &lt;a href=&#34;https://en.wikipedia.org/wiki/Linear_Tape-Open&#34;&gt;https://en.wikipedia.org/wiki/Linear_Tape-Open&lt;/a&gt;.
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:55&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>Power to the People</title>
                <link>https://blog.jessfraz.com/post/power-to-the-people/</link>
                <pubDate>Wed, 26 Feb 2020 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/power-to-the-people/</guid>
                    <description>

&lt;p&gt;When you upload photos to Instagram, back up your phone to “the cloud”, send an email through Gmail, or save a document in a storage application like Dropbox or Google Drive, your data is being saved in a data center. These data centers are airplane hangar-sized warehouses, packed to the brim with racks of servers and cooling mechanisms. Depending on the application you are using, you are likely hitting one of Facebook’s, Google’s, Amazon’s, or Microsoft’s data centers. Aside from those major players, which we will call the “hyperscalers”, many other companies run their own data centers or rent space from a colocation center to house their server racks.&lt;/p&gt;

&lt;p&gt;Most of the hyperscalers have made massive strides toward a “carbon neutral” footprint for their data centers. Google, Amazon, and Microsoft have all pledged to decarbonize completely; however, none has yet succeeded in completely ditching fossil fuels. If a company claims to be “carbon neutral”, it means they are offsetting their use of fossil fuels with renewable energy credits, also known as RECs&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34;&gt;1&lt;/a&gt;&lt;/sup&gt;. A REC represents one megawatt-hour (MWh) of electricity generated and delivered to the electricity grid from a renewable energy resource such as solar or wind power. Essentially, by purchasing RECs, “carbon neutral” companies are giving back clean energy to prevent someone else from emitting carbon. Most companies become “carbon neutral” by investing in offsets that primarily avoid emissions, such as paying folks not to cut down trees, or by buying RECs. These offsets do not actually remove the carbon the company is emitting.&lt;/p&gt;

&lt;p&gt;A “net zero” company actually has to remove as much carbon as it emits. This is referred to as “net zero” since the company is still creating carbon emissions; however, those emissions are matched by the amount of carbon removed. This differs from “carbon neutral”: a “carbon neutral” company looks at its carbon footprint and has to prevent enough other folks from emitting that much carbon, through RECs or otherwise, whereas a “net zero” company has to find a way to remove the amount of carbon it emits.&lt;/p&gt;

&lt;p&gt;Lastly, if a company calls itself “carbon negative”, it means it is removing more carbon than it emits each year. This should be the gold standard for how companies operate. None of the FAANG companies (Facebook, Apple, Amazon, Netflix, and Google)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34;&gt;2&lt;/a&gt;&lt;/sup&gt; claims to be “carbon negative” today, but Microsoft issued a press release stating they are going to be carbon negative by 2030&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Power usage effectiveness, also known as PUE, is defined as the total energy required to power a data center (including lights and cooling) divided by the energy used for servers. A PUE of 1.0 would be perfect, since 100% of electricity consumption would go to computation. Conventional data centers have a PUE of about 2.0, while hyperscalers have gotten theirs down to about 1.2. According to a 2019 study from the Uptime Institute, which surveyed 1,600 data centers, the average PUE was 1.67&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
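
&lt;p&gt;The PUE figures above are just a ratio, which a tiny sketch makes concrete (the facility and IT load numbers below are made-up, purely for illustration):&lt;/p&gt;

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1200 kW total draw, of which 1000 kW reaches the servers.
print(pue(1200, 1000))  # 1.2, roughly hyperscaler territory
print(pue(2000, 1000))  # 2.0, a conventional data center
```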

&lt;p&gt;PUE as a method of measurement is a point of contention. PUE does not account for the location of a data center, which means a data center located in a part of the world that benefits from free cooling from outside air will have a lower PUE than one in a very hot climate. It is best to measure PUE as an annual average, since seasons change and affect the cooling needs of a data center over the course of a year. According to a study from the University of Leeds, &amp;ldquo;comparing a PUE value of data centres is somewhat meaningless unless it is known whether it is operating at full capacity or not.”&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:5&#34;&gt;&lt;a href=&#34;#fn:5&#34;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Google claims a PUE of 1.1 on average, yearly, for all its data centers, while individually, some are as low as 1.08&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:6&#34;&gt;&lt;a href=&#34;#fn:6&#34;&gt;6&lt;/a&gt;&lt;/sup&gt;. One action Google has taken to lower its PUE is using machine learning to cool data centers, with inputs from local weather and other factors&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:7&#34;&gt;&lt;a href=&#34;#fn:7&#34;&gt;7&lt;/a&gt;&lt;/sup&gt;; for example, if the air outside is cool enough, it can be used without modification as free cold air. They can also predict wind farm output up to 36 hours in advance&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:8&#34;&gt;&lt;a href=&#34;#fn:8&#34;&gt;8&lt;/a&gt;&lt;/sup&gt;. Google took all the data they had from sensors in their facilities monitoring temperature, power, pressure, and other resources to create neural networks that predict future PUE, temperature, and pressure in their data centers. From these predictions, they can automate and recommend actions for keeping their data centers operating efficiently&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:9&#34;&gt;&lt;a href=&#34;#fn:9&#34;&gt;9&lt;/a&gt;&lt;/sup&gt;. Google also sets the temperature of its data centers to 80°F, versus the usual 68-70°F, saving a lot of power for cooling. Weather local to the data center is a huge factor. For example, Google’s Singapore data center has the highest PUE and is the least efficient of its sites because Singapore is hot and humid year-round.&lt;/p&gt;

&lt;p&gt;Wired conducted an analysis&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:10&#34;&gt;&lt;a href=&#34;#fn:10&#34;&gt;10&lt;/a&gt;&lt;/sup&gt; of how Google, Microsoft, and Amazon stack up when it comes to the carbon footprint of their data centers. Google claims to be “net zero” for carbon emissions&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:11&#34;&gt;&lt;a href=&#34;#fn:11&#34;&gt;11&lt;/a&gt;&lt;/sup&gt; and also publishes a transparency report of their PUE every year&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:12&#34;&gt;&lt;a href=&#34;#fn:12&#34;&gt;12&lt;/a&gt;&lt;/sup&gt;. While Microsoft has pledged to be “carbon negative” by 2030, they are still “carbon neutral” today&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:13&#34;&gt;&lt;a href=&#34;#fn:13&#34;&gt;13&lt;/a&gt;&lt;/sup&gt;. They also claim to be pursuing 100% renewable energy by 2025.&lt;/p&gt;

&lt;p&gt;On the other hand, Amazon is in the worst position of the large tech companies when it comes to carbon footprint. As we went over above, the location of the data center matters, and some Amazon regions might be greener than others due to the weather conditions in those areas or greater access to solar or wind energy&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:14&#34;&gt;&lt;a href=&#34;#fn:14&#34;&gt;14&lt;/a&gt;&lt;/sup&gt;. Bezos has pledged to get to “net zero” by 2040&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:15&#34;&gt;&lt;a href=&#34;#fn:15&#34;&gt;15&lt;/a&gt;&lt;/sup&gt;. Greenpeace seems to believe otherwise, claiming that Amazon is not dedicated to that pledge since its Virginia data centers were only at 12% renewable energy&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:16&#34;&gt;&lt;a href=&#34;#fn:16&#34;&gt;16&lt;/a&gt;&lt;/sup&gt;. It’s hard to know, of course, until 2040 comes and Amazon either succeeds in its pledge or doesn’t.&lt;/p&gt;

&lt;p&gt;In 2018, Apple claimed 100% of their energy was from renewable sources&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:17&#34;&gt;&lt;a href=&#34;#fn:17&#34;&gt;17&lt;/a&gt;&lt;/sup&gt;. Facebook claims they will be at 100% renewable energy by the end of 2020&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:18&#34;&gt;&lt;a href=&#34;#fn:18&#34;&gt;18&lt;/a&gt;&lt;/sup&gt;. While US companies have followed suit in pledging to lower their carbon footprints, Chinese Internet giants such as Baidu, Tencent, and Alibaba have not.&lt;/p&gt;

&lt;h2 id=&#34;what-is-using-power-in-a-data-center&#34;&gt;What is using power in a data center?&lt;/h2&gt;

&lt;p&gt;According to a study from Procedia Environmental Sciences, 48% of power in a data center goes to equipment like servers and racks, 33% to heating, ventilation, and air conditioning (HVAC), 8% to uninterruptible power supply (UPS) losses, 3% to lighting, and 10% to everything else&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:19&#34;&gt;&lt;a href=&#34;#fn:19&#34;&gt;19&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;HVAC for data centers is a delicate process of making sure hot air from server exhaust doesn’t mix with cool air and raise the temperature of the entire data center. This is why most data centers have hot and cold aisles. The goal is to have cold air flow into one side of the racks while the hot exhaust air comes out the other side. Optimizing air flow throughout your racks and servers is essential for HVAC efficiency.&lt;/p&gt;

&lt;p&gt;Power comes off the grid as AC power. This can be single-phase power, which has two wires (a power wire and a neutral wire), or three-phase power, which has three wires, each 120 electrical degrees out of phase with the others. The key difference is that three-phase can handle higher loads than single-phase. The frequency of the power off the grid can be either 50 or 60Hz, and the voltage can be any of 208, 240, 277, 400, 415, 480, or 600V&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:20&#34;&gt;&lt;a href=&#34;#fn:20&#34;&gt;20&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Since most equipment in a data center uses DC power, the AC power needs to be converted, which results in power losses and wasted energy adding up to around 21-27% of the power supplied. Let’s break this down. There is a 2% loss when utility medium voltage, defined as voltage greater than 1000V and less than 100 kV, is transformed to 480VAC. There is a 6-12% loss within a centralized UPS due to the conversions from AC to DC and DC back to AC. There is a 3% power loss at the power distribution unit (PDU) level due to the transformation from 480VAC to 208VAC. Standard power supplies for servers convert 208VAC to the required DC voltage, resulting in a 10% loss, assuming the power supply is 90% efficient&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:21&#34;&gt;&lt;a href=&#34;#fn:21&#34;&gt;21&lt;/a&gt;&lt;/sup&gt;. This is all to say that power is wasted throughout traditional data centers in transformations and conversions.&lt;/p&gt;
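
&lt;p&gt;The losses in that chain compound multiplicatively, which a short sketch makes concrete. The stage efficiencies come from the percentages above; the 9% UPS figure is my assumed midpoint of the 6-12% range:&lt;/p&gt;

```python
# Fraction of input power that survives each conversion stage.
stages = {
    "medium voltage to 480VAC transformer": 0.98,  # ~2% loss
    "centralized UPS (AC-DC-AC)": 0.91,            # 6-12% loss, assume ~9%
    "PDU, 480VAC to 208VAC": 0.97,                 # ~3% loss
    "server PSU, 208VAC to DC": 0.90,              # ~10% loss
}

delivered = 1.0
for stage, efficiency in stages.items():
    delivered *= efficiency

# Roughly 78% of grid power reaches the components, a ~22% total loss,
# consistent with the 21-27% range cited above.
print(f"fraction delivered: {delivered:.3f}")
```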

&lt;p&gt;To try to lessen the amount of power wasted in conversions, some folks rely on high-voltage DC power distribution. Lawrence Berkeley National Labs conducted a study in 2008&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:22&#34;&gt;&lt;a href=&#34;#fn:22&#34;&gt;22&lt;/a&gt;&lt;/sup&gt; comparing the use of 380VDC power distribution for a facility to a traditional 480VAC power distribution system. The results showed that the facility using DC power eliminated multiple conversion stages, resulting in a 7% decrease in energy consumption compared to a typical facility with AC power distribution. However, this is rarely done at hyperscale. Hyperscalers tend to run three-phase AC to the rack, then convert to DC at the rack or server level.&lt;/p&gt;

&lt;h2 id=&#34;more-power-efficient-compute&#34;&gt;More power efficient compute&lt;/h2&gt;

&lt;p&gt;Other than RECs and using 100% renewable energy, there are other ways hyperscalers have made their data centers more power efficient. In 2011, the Open Compute Project started out of a basement lab in Facebook’s Palo Alto headquarters&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:23&#34;&gt;&lt;a href=&#34;#fn:23&#34;&gt;23&lt;/a&gt;&lt;/sup&gt;. Their mission was to design, from a clean slate, the most efficient and economical way to run compute at scale. This led to using a 480VAC&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:24&#34;&gt;&lt;a href=&#34;#fn:24&#34;&gt;24&lt;/a&gt;&lt;/sup&gt; electrical distribution system to reduce energy loss, removing anything in their servers that didn’t contribute to efficiency, reusing hot aisle air in winter to heat the offices and the outside air flowing into the data center, and removing the need for a central power supply. The Facebook team installed the newly designed servers in their Prineville data center, which used 38% less energy to do the same work as their existing data centers, at 24% lower cost.&lt;/p&gt;

&lt;p&gt;Let’s dive into some of the details of the Open Compute designs that allow for power efficiency. The Open Rack design&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:25&#34;&gt;&lt;a href=&#34;#fn:25&#34;&gt;25&lt;/a&gt;&lt;/sup&gt; includes a power bus bar that distributes either 12VDC or 48VDC to the nodes. The bus bar runs vertically along the back of the rack and transmits power from the rack-level power supply units (PSUs) to the servers in the rack. The bus bar allows the servers to plug directly into the rack for power, so when you are servicing an Open Rack you do not need to unplug power cords; you can just pull the server out from the front of the rack. With the Open Compute designs, network connections to servers are at the front of the rack, so the technician never has to go to the back of the rack, i.e. the hot aisle.&lt;/p&gt;

&lt;h3 id=&#34;redundancy&#34;&gt;Redundancy&lt;/h3&gt;

&lt;p&gt;Conventional servers have PSUs in every server. The Open Rack design has centralized PSUs for the rack, which allow for N+M redundancy for the rack, the most common deployment being N+1 redundancy. This means there is one extra PSU per rack of servers. In a conventional system this would be 1+1, since there is one extra PSU in every individual server. Centralizing the PSUs at the rack level reduces the number of power-converting components, which increases the efficiency of the system.&lt;/p&gt;
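
&lt;p&gt;The difference in redundant components is easy to see with a back-of-the-envelope count; the 30-server rack and the five-PSU shelf below are made-up numbers, purely for illustration:&lt;/p&gt;

```python
servers_per_rack = 30  # hypothetical rack

# Conventional design: every server carries its own 1+1 redundant pair of PSUs.
conventional_psus = servers_per_rack * 2

# Open Rack design: one centralized shelf of N PSUs plus a single spare (N+1).
rack_shelf_psus = 5    # hypothetical N, sized for the rack's total load
open_rack_psus = rack_shelf_psus + 1

print(conventional_psus, open_rack_psus)  # 60 vs 6 power-converting units
```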

&lt;h3 id=&#34;right-sized-psus&#34;&gt;Right-sized PSUs&lt;/h3&gt;

&lt;p&gt;Server designers tend to choose PSUs that have enough headroom to deliver power for the maximum configuration. Server vendors would rather carry a small number of oversized power supply SKUs than a large number of SKUs that are right-sized to purpose, since economies of scale favor the former. This leads to an oversizing factor of at least 2-3 times&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:26&#34;&gt;&lt;a href=&#34;#fn:26&#34;&gt;26&lt;/a&gt;&lt;/sup&gt; the required capacity for conventional power supplies. In comparison, a rack-level PSU will be less oversized since it is right-sized for purpose. The hyperscalers also have the advantage of economies of scale for their hardware. The typical Open Rack compliant power supply is oversized at only 1.2 times the required capacity, if even that.&lt;/p&gt;

&lt;h3 id=&#34;optimal-efficiency&#34;&gt;Optimal efficiency&lt;/h3&gt;

&lt;p&gt;Every power supply has a sweet spot for load versus efficiency. 80 Plus is a certification program that measures PSU efficiency. There are a few different grades: Bronze, Silver, Gold, Platinum, and Titanium, with Titanium being the most power efficient. The most common grade of PSU used in data centers is 80 Plus Silver, which has a maximum efficiency of 88%, meaning it wastes at least 12% of electrical energy as heat across the various load levels. In comparison, the 12VDC and 48VDC rack-level PSUs have data showing maximum efficiencies of 95%&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:27&#34;&gt;&lt;a href=&#34;#fn:27&#34;&gt;27&lt;/a&gt;&lt;/sup&gt; and 98%&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:28&#34;&gt;&lt;a href=&#34;#fn:28&#34;&gt;28&lt;/a&gt;&lt;/sup&gt;, respectively. This means the rack-level PSUs waste only 5% and 2% of energy, respectively.&lt;/p&gt;

&lt;p&gt;While the efficiency of the rack-level PSU is important, we still need to weigh the cost of the number of conversions being made to get the power to each server. For every unnecessary power conversion, you pay an efficiency cost. For example, with a 48VDC rack-level power supply, the server might need to convert the rack-provided 48VDC to 12VDC, then that 12VDC to V&lt;sub&gt;CORE&lt;/sub&gt;. V&lt;sub&gt;CORE&lt;/sub&gt; is the voltage supplied to the CPU, GPU, or other processing core. With Google’s 48VDC power supply, they advocate for using 48V to the point of load (PoL)&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:29&#34;&gt;&lt;a href=&#34;#fn:29&#34;&gt;29&lt;/a&gt;&lt;/sup&gt; to deliver power to the servers. This means placing a DC-to-DC or linear regulator between the rack-level PSU and the load, reducing the number of conversions needed to get power to the processing cores. However, the 48VDC-to-DC regulators required for Google&amp;rsquo;s implementation are not common and come at a premium cost. It is likely their motivation for opening the specs for the 48VDC rack is to drive more volume to those parts and drive down costs. In contrast, 12VDC-to-DC regulators are quite common and low cost.&lt;/p&gt;
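
&lt;p&gt;The efficiency cost of the extra hop can be sketched by multiplying stage efficiencies. The per-stage numbers below are made-up but plausible; the point is only how conversions compound:&lt;/p&gt;

```python
def chain_efficiency(*stage_efficiencies):
    """Overall efficiency of cascaded converters is the product of the stages."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Two hops: rack-level 48VDC to an intermediate 12VDC rail, then 12VDC to Vcore.
two_stage = chain_efficiency(0.97, 0.93)
# One hop: 48VDC straight to the point of load.
one_stage = chain_efficiency(0.94)

print(f"48V -> 12V -> Vcore: {two_stage:.1%}")  # 90.2%
print(f"48V -> Vcore:        {one_stage:.1%}")  # 94.0%
```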

&lt;h4 id=&#34;reading-a-power-efficiency-graph&#34;&gt;Reading a power efficiency graph&lt;/h4&gt;

&lt;p&gt;Below is an example of a power efficiency graph for a power supply.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/efficiency-curve.png&#34; alt=&#34;efficiency-curve&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source for image: &lt;a href=&#34;https://e2e.ti.com/blogs_/b/industrial_strength/archive/2019/04/04/three-considerations-for-achieving-high-efficiency-and-reliability-in-industrial-ac-dc-power-supplies&#34;&gt;https://e2e.ti.com/blogs_/b/industrial_strength/archive/2019/04/04/three-considerations-for-achieving-high-efficiency-and-reliability-in-industrial-ac-dc-power-supplies&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The x-axis of the graph measures the load on the power supply in watts, while the y-axis measures efficiency, calculated by dividing the output power by the input power. The peak of the curve is where the PSU is most efficient.&lt;/p&gt;

&lt;p&gt;Let’s go through an example of choosing the right power supply for a load. For the example graph above, if we know our peak load is 120W and idle is 60W, this power supply is more than we need since it can handle up to 150W. At 230VAC, it would run at a maximum efficiency of around 94% at our 120W peak and a minimum efficiency of around 92% at our 60W idle. Knowing the losses of this specific power supply, we can compare it against other supplies to see if they are more efficient for our load.&lt;/p&gt;
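&lt;p&gt;Since efficiency is output power divided by input power, we can turn the figures read off the curve into watts lost in the PSU itself. A minimal sketch using the numbers from the example above:&lt;/p&gt;

```python
# efficiency = P_out / P_in, so P_in = P_out / efficiency and the
# loss is the difference. The 94% and 92% figures are read off the
# example efficiency curve above.

def input_power(load_w, efficiency):
    """Power drawn from the wall for a given output load."""
    return load_w / efficiency

def loss(load_w, efficiency):
    """Power dissipated inside the PSU."""
    return input_power(load_w, efficiency) - load_w

print(f"peak: {loss(120, 0.94):.1f} W lost")  # 7.7 W at 120 W load
print(f"idle: {loss(60, 0.92):.1f} W lost")   # 5.2 W at 60 W load
```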

&lt;h3 id=&#34;open-compute-servers-without-a-bus-bar&#34;&gt;Open Compute servers without a bus bar&lt;/h3&gt;

&lt;p&gt;Not all Open Compute servers include a power bus bar. Microsoft’s Olympus servers rely on AC power&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:30&#34;&gt;&lt;a href=&#34;#fn:30&#34;&gt;30&lt;/a&gt;&lt;/sup&gt;. The Olympus power supply has three 340W power supply modules, one for each phase, with a total maximum output of 1000W&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:31&#34;&gt;&lt;a href=&#34;#fn:31&#34;&gt;31&lt;/a&gt;&lt;/sup&gt;. These power supplies therefore assume all deployments have three-phase power. The efficiency of the PSU is 89-94% depending on the load&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:32&#34;&gt;&lt;a href=&#34;#fn:32&#34;&gt;32&lt;/a&gt;&lt;/sup&gt;. This places the grade of the Olympus power supply around an 80 Plus Platinum&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:33&#34;&gt;&lt;a href=&#34;#fn:33&#34;&gt;33&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Like all technical decisions, using per-server AC power supplies versus rack-level DC is a trade-off. With separate power supplies, different workloads can balance the power they consume individually rather than at the rack level. In turn, though, Microsoft needs to build and manufacture multiple power supplies to ensure they are right-sized to run at maximum efficiency for each server configuration. Serviceability also suffers: technicians must unplug power cables and work from the back of the rack.&lt;/p&gt;

&lt;p&gt;At the time Microsoft decided to use individual AC power supplies per server, the Open Rack design was at v1, not v2 as it is today; the cost of the copper for the power bus bar was higher, and the efficiency lost to resistance was a factor. The Open Rack v1 design had an efficiency concern: power lost to heating the copper in the bus bar. If a rack holds 24kW of equipment, a 12VDC power bus bar must deliver 2kA of current. This requires a very thick piece of copper, and the power loss due to resistance in the bus bar is not insignificant.&lt;/p&gt;

&lt;p&gt;Let’s break down how to measure the relationship of power to resistance. Ohm’s law states that electric current (I) is proportional to voltage (V) and inversely proportional to resistance (R), so V=IR. To see the relationship of power to resistance, we combine Ohm’s law (V=IR) with P = IV, which says power (P) is the product of current (I) and voltage (V). Substituting I = V/R gives P = (V/R)V = V&lt;sup&gt;2&lt;/sup&gt;/R. Then, substituting V = IR gives P = I(IR) = I&lt;sup&gt;2&lt;/sup&gt;R. So P = I&lt;sup&gt;2&lt;/sup&gt;R is how we calculate the power loss due to resistance in the bus bar.&lt;/p&gt;

&lt;p&gt;For their decision, Microsoft balanced the conversion efficiency against the material cost of the bus bar and the resistive loss. However, Open Rack v2 changes the trade-offs of their original decision. With a 48VDC bus bar, a rack that holds 24kW of equipment only requires 500A, as opposed to the 2kA required by the 12VDC power bus bar from the v1 spec. This translates into a much cheaper bus bar and lower losses due to resistance. The bus bar still has more loss than 208VAC cables, but the improved efficiency of the rack-level power supply unit makes it compelling. However, as stated earlier, you need to be mindful of the number of conversions between the bus bar and the components on the motherboard. If your existing equipment expects 12VDC, pairing it with a 48VDC bus bar adds an extra conversion you would want to avoid. Save the 48VDC bus bar for new equipment that delivers 48V to point of load.&lt;/p&gt;
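&lt;p&gt;The arithmetic behind the v1 versus v2 comparison is straightforward: I = P/V gives the bus bar current, and P = I&lt;sup&gt;2&lt;/sup&gt;R gives the resistive loss. A quick sketch; the bus bar resistance here is an assumed illustrative value, not a spec figure, since a real bus bar&amp;rsquo;s resistance depends on its cross-section and length:&lt;/p&gt;

```python
# Compare resistive loss in a 12VDC vs 48VDC bus bar feeding 24 kW.
# I = P / V, P_loss = I^2 * R.

RACK_POWER_W = 24_000
BUS_BAR_OHMS = 0.0001  # assumed value, for illustration only

for volts in (12, 48):
    current = RACK_POWER_W / volts
    p_loss = current**2 * BUS_BAR_OHMS
    print(f"{volts}VDC: {current:.0f} A, {p_loss:.0f} W lost")
# 12VDC: 2000 A, 400 W lost
# 48VDC: 500 A, 25 W lost
```

Quartering the current cuts the I&lt;sup&gt;2&lt;/sup&gt;R loss by a factor of sixteen for the same conductor, which is why the 48VDC bus bar can be both thinner and more efficient.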

&lt;p&gt;The main difference between Microsoft’s design, with its individual power supplies, and the 24VDC and 48VDC Open Rack designs is how power is initially delivered to the servers. Microsoft distributes three-phase power to the servers individually through per-server power supplies, while the 24VDC and 48VDC Open Rack designs distribute power to the servers over the bus bar. Once power reaches the server, it is typically sent through a DC-to-DC regulator, which in turn powers the components on the motherboard. This step is the same whether the power comes from a single power bus bar or individual power supplies.&lt;/p&gt;

&lt;p&gt;There is another interesting bit that comes into play with uninterruptible power supplies (UPSes). We talked earlier about the efficiency losses due to UPSes. Let’s go over what this means for a DC bus bar versus individual AC PSUs. When AC power goes into each individual server, you have two choices: a UPS on the AC before it gets distributed to the individual servers, or a UPS per server integrated into each server’s PSU. Deploying and servicing individual batteries per server is a nightmare for maintenance. Because of this, most facilities that use AC power to the servers wind up using rack-wide or building-wide UPSes. Since the batteries in a UPS are DC, an AC UPS has an AC-to-DC converter for charging the batteries and a DC-to-AC inverter to provide AC power from the battery. For online UPSes, meaning the battery is always connected, this requires two extra conversions, AC-to-DC and DC back to AC, with power efficiency losses for both.&lt;/p&gt;
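&lt;p&gt;The double-conversion penalty of an online AC UPS can be estimated the same way stage efficiencies multiply. The rectifier and inverter efficiencies below are assumed for illustration, not figures for any particular UPS:&lt;/p&gt;

```python
# An online UPS sits permanently in the power path, so its AC-to-DC
# rectifier/charger and DC-to-AC inverter each take a cut of every
# watt delivered. Both efficiencies are assumed illustrative values.

RECTIFIER_EFF = 0.96  # AC-to-DC stage (assumed)
INVERTER_EFF = 0.95   # DC-to-AC stage (assumed)

online_ups_eff = RECTIFIER_EFF * INVERTER_EFF
print(f"online UPS path efficiency: {online_ups_eff:.1%}")  # 91.2%
# A DC bus bar design avoids both stages: battery packs attach
# directly to the bus, adding no conversions to the normal path.
```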

&lt;p&gt;With a DC rack-level design, battery packs can be attached directly to the bus bar. The rack-level PSUs are the first AC-to-DC conversion stage, so there is no need for another conversion since everything from there runs on DC. The downside is that the rack-level PSU needs to adjust the voltage level to act as a battery charger. This means the servers need to accept a fairly wide tolerance on the 48V target, around +/-10V, so 40-56V isn&amp;rsquo;t unreasonable. Because DC-to-DC converters are fairly tolerant of input voltage ranges, this is straightforward to deal with without any significant loss in power efficiency. It’s important to note that for hyperscalers, UPSes are only present to bridge the few seconds until a generator kicks in, versus the 10-15 minutes of runtime expected in a traditional data center.&lt;/p&gt;

&lt;p&gt;With commodity servers, like Dell&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:34&#34;&gt;&lt;a href=&#34;#fn:34&#34;&gt;34&lt;/a&gt;&lt;/sup&gt; or Supermicro&lt;sup class=&#34;footnote-ref&#34; id=&#34;fnref:35&#34;&gt;&lt;a href=&#34;#fn:35&#34;&gt;35&lt;/a&gt;&lt;/sup&gt;, individual power supplies carry a much higher power-efficiency cost since those PSUs are of a lower grade and much more oversized. They also tend to lack the power supply regulators that minimize power conversion losses in supplying power to the components on the board. Moving from a rack of commodity servers to an OCP design would yield around an 8-12% gain in power efficiency. Not to mention, the serviceability ease of the bus bar would benefit technicians as well.&lt;/p&gt;

&lt;p&gt;By designing rack level architectures, huge improvements can be made for power efficiency over conventional servers since PSUs will be less oversized, more consolidated, and redundant for the rack versus per server. While the hyperscalers have benefitted from these gains in power efficiency, most of the industry is still waiting. The Open Compute project was started as an effort to allow other companies running data centers to benefit from the power efficiencies as well. If more organizations run rack-scale architectures in their data centers, we can lessen the wasted carbon emissions caused by conventional servers.&lt;/p&gt;

&lt;p&gt;Huge thanks to &lt;a href=&#34;https://twitter.com/kc8apf&#34;&gt;Rick Altherr&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/digiamir&#34;&gt;Amir Michael&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/KWF&#34;&gt;Kenneth Finnegan&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/arjenroodselaar&#34;&gt;Arjen Roodselaar&lt;/a&gt;, and &lt;a href=&#34;https://twitter.com/cscotta&#34;&gt;Scott Andreas&lt;/a&gt; for their help with the nuances in this article.&lt;/p&gt;
&lt;div class=&#34;footnotes&#34;&gt;

&lt;hr /&gt;

&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;&lt;a href=&#34;https://www.epa.gov/greenpower/renewable-energy-certificates-recs&#34;&gt;https://www.epa.gov/greenpower/renewable-energy-certificates-recs&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:1&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;&lt;a href=&#34;https://en.wikipedia.org/wiki/Facebook,_Apple,_Amazon,_Netflix_and_Google&#34;&gt;https://en.wikipedia.org/wiki/Facebook,_Apple,_Amazon,_Netflix_and_Google&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:2&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;&lt;a href=&#34;https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/&#34;&gt;https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:3&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;&lt;a href=&#34;https://uptimeinstitute.com/resources/asset/2019-data-center-industry-survey&#34;&gt;https://uptimeinstitute.com/resources/asset/2019-data-center-industry-survey&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:4&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:5&#34;&gt;&lt;a href=&#34;http://eprints.whiterose.ac.uk/79352/1/GBrady%20Case%20Study%20of%20PUE.pdf&#34;&gt;http://eprints.whiterose.ac.uk/79352/1/GBrady%20Case%20Study%20of%20PUE.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:5&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:6&#34;&gt;&lt;a href=&#34;https://www.google.com/about/datacenters/efficiency/&#34;&gt;https://www.google.com/about/datacenters/efficiency/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:6&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:7&#34;&gt;&lt;a href=&#34;https://deepmind.com/blog/article/safety-first-ai-autonomous-data-centre-cooling-and-industrial-control&#34;&gt;https://deepmind.com/blog/article/safety-first-ai-autonomous-data-centre-cooling-and-industrial-control&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:7&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:8&#34;&gt;&lt;a href=&#34;https://deepmind.com/blog/article/machine-learning-can-boost-value-wind-energy&#34;&gt;https://deepmind.com/blog/article/machine-learning-can-boost-value-wind-energy&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:8&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:9&#34;&gt;&lt;a href=&#34;https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40&#34;&gt;https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:9&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:10&#34;&gt;&lt;a href=&#34;https://www.wired.com/story/amazon-google-microsoft-green-clouds-and-hyperscale-data-centers/&#34;&gt;https://www.wired.com/story/amazon-google-microsoft-green-clouds-and-hyperscale-data-centers/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:10&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:11&#34;&gt;&lt;a href=&#34;https://cloud.google.com/blog/topics/google-cloud-next/our-heads-in-the-cloud-but-were-keeping-the-earth-in-mind&#34;&gt;https://cloud.google.com/blog/topics/google-cloud-next/our-heads-in-the-cloud-but-were-keeping-the-earth-in-mind&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:11&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:12&#34;&gt;&lt;a href=&#34;https://www.google.com/about/datacenters/efficiency/&#34;&gt;https://www.google.com/about/datacenters/efficiency/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:12&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:13&#34;&gt;&lt;a href=&#34;https://www.microsoft.com/en-us/corporate-responsibility/sustainability/operations&#34;&gt;https://www.microsoft.com/en-us/corporate-responsibility/sustainability/operations&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:13&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:14&#34;&gt;&lt;a href=&#34;https://aws.amazon.com/about-aws/sustainability/&#34;&gt;https://aws.amazon.com/about-aws/sustainability/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:14&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:15&#34;&gt;&lt;a href=&#34;https://blog.aboutamazon.com/sustainability/the-climate-pledge&#34;&gt;https://blog.aboutamazon.com/sustainability/the-climate-pledge&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:15&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:16&#34;&gt;&lt;a href=&#34;https://www.greenpeace.org/usa/news/greenpeace-finds-amazon-breaking-commitment-to-power-cloud-with-100-renewable-energy/&#34;&gt;https://www.greenpeace.org/usa/news/greenpeace-finds-amazon-breaking-commitment-to-power-cloud-with-100-renewable-energy/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:16&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:17&#34;&gt;&lt;a href=&#34;https://www.apple.com/newsroom/2018/04/apple-now-globally-powered-by-100-percent-renewable-energy/&#34;&gt;https://www.apple.com/newsroom/2018/04/apple-now-globally-powered-by-100-percent-renewable-energy/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:17&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:18&#34;&gt;&lt;a href=&#34;https://about.fb.com/news/2018/08/renewable-energy/&#34;&gt;https://about.fb.com/news/2018/08/renewable-energy/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:18&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:19&#34;&gt;&lt;a href=&#34;https://www.sciencedirect.com/science/article/pii/S1878029617300956&#34;&gt;https://www.sciencedirect.com/science/article/pii/S1878029617300956&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:19&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:20&#34;&gt;&lt;a href=&#34;https://ctlsys.com/support/electrical_service_types_and_voltages/&#34;&gt;https://ctlsys.com/support/electrical_service_types_and_voltages/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:20&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:21&#34;&gt;&lt;a href=&#34;https://www.facebook.com/notes/facebook-engineering/designing-a-very-efficient-data-center/10150148003778920/&#34;&gt;https://www.facebook.com/notes/facebook-engineering/designing-a-very-efficient-data-center/10150148003778920/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:21&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:22&#34;&gt;&lt;a href=&#34;https://eln.lbl.gov/sites/default/files/lbnl-2001006.pdf&#34;&gt;https://eln.lbl.gov/sites/default/files/lbnl-2001006.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:22&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:23&#34;&gt;&lt;a href=&#34;https://www.facebook.com/notes/facebook-engineering/building-efficient-data-centers-with-the-open-compute-project/10150144039563920/&#34;&gt;https://www.facebook.com/notes/facebook-engineering/building-efficient-data-centers-with-the-open-compute-project/10150144039563920/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:23&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:24&#34;&gt;&lt;a href=&#34;https://www.facebook.com/notes/facebook-engineering/designing-a-very-efficient-data-center/10150148003778920/&#34;&gt;https://www.facebook.com/notes/facebook-engineering/designing-a-very-efficient-data-center/10150148003778920/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:24&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:25&#34;&gt;&lt;a href=&#34;https://www.opencompute.org/wiki/Open_Rack/SpecsAndDesigns&#34;&gt;https://www.opencompute.org/wiki/Open_Rack/SpecsAndDesigns&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:25&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:26&#34;&gt;&lt;a href=&#34;https://blog.se.com/datacenter/2018/05/24/12v-vs-48v-the-rack-power-architecture-efficiency-calculator-illustrates-energy-savings-of-ocp-style-psus/&#34;&gt;https://blog.se.com/datacenter/2018/05/24/12v-vs-48v-the-rack-power-architecture-efficiency-calculator-illustrates-energy-savings-of-ocp-style-psus/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:26&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:27&#34;&gt;&lt;a href=&#34;https://www.opencompute.org/files/External-2018-OCP-Summit-Google-48V-Update-Flatbed-and-STC-20180321.pdf&#34;&gt;https://www.opencompute.org/files/External-2018-OCP-Summit-Google-48V-Update-Flatbed-and-STC-20180321.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:27&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:28&#34;&gt;&lt;a href=&#34;http://apec.dev.itswebs.com/Portals/0/APEC%202017%20Files/Plenary/APEC%20Plenary%20Google.pdf?ver=2017-04-24-091315-930&amp;amp;timestamp=1495563027516&#34;&gt;http://apec.dev.itswebs.com/Portals/0/APEC%202017%20Files/Plenary/APEC%20Plenary%20Google.pdf?ver=2017-04-24-091315-930&amp;amp;timestamp=1495563027516&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:28&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:29&#34;&gt;&lt;a href=&#34;http://apec.dev.itswebs.com/Portals/0/APEC%202017%20Files/Plenary/APEC%20Plenary%20Google.pdf?ver=2017-04-24-091315-930&amp;amp;timestamp=1495563027516&#34;&gt;http://apec.dev.itswebs.com/Portals/0/APEC%202017%20Files/Plenary/APEC%20Plenary%20Google.pdf?ver=2017-04-24-091315-930&amp;amp;timestamp=1495563027516&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:29&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:30&#34;&gt;&lt;a href=&#34;https://www.datacenterdynamics.com/en/news/dcd-zettastructure-why-project-olympus-relies-on-ac-power/&#34;&gt;https://www.datacenterdynamics.com/en/news/dcd-zettastructure-why-project-olympus-relies-on-ac-power/&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:30&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:31&#34;&gt;&lt;a href=&#34;http://files.opencompute.org/oc/public.php?service=files&amp;amp;t=2247ac812c026ea8fa15d29622779fa7&amp;amp;download&#34;&gt;http://files.opencompute.org/oc/public.php?service=files&amp;amp;t=2247ac812c026ea8fa15d29622779fa7&amp;amp;download&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:31&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:32&#34;&gt;&lt;a href=&#34;http://files.opencompute.org/oc/public.php?service=files&amp;amp;t=2247ac812c026ea8fa15d29622779fa7&amp;amp;download&#34;&gt;http://files.opencompute.org/oc/public.php?service=files&amp;amp;t=2247ac812c026ea8fa15d29622779fa7&amp;amp;download&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:32&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:33&#34;&gt;&lt;a href=&#34;https://en.wikipedia.org/wiki/80_Plus&#34;&gt;https://en.wikipedia.org/wiki/80_Plus&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:33&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:34&#34;&gt;&lt;a href=&#34;https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/power-and-cooling-innovations_030216.pdf&#34;&gt;https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/power-and-cooling-innovations_030216.pdf&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:34&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li id=&#34;fn:35&#34;&gt;&lt;a href=&#34;https://www.supermicro.com/en/support/resources/pws&#34;&gt;https://www.supermicro.com/en/support/resources/pws&lt;/a&gt;
 &lt;a class=&#34;footnote-return&#34; href=&#34;#fnref:35&#34;&gt;&lt;sup&gt;[return]&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
                </item>
                    
            <item>
                <title>Network booted, home initialized</title>
                <link>https://blog.jessfraz.com/post/network-booted-home-initialized/</link>
                <pubDate>Fri, 17 Jan 2020 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/network-booted-home-initialized/</guid>
                    <description>

&lt;p&gt;I had a lot of fun writing blog posts in the past about my
&lt;a href=&#34;https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/&#34;&gt;home lab&lt;/a&gt;
and some of my
&lt;a href=&#34;https://blog.jessfraz.com/post/personal-infrastructure/&#34;&gt;personal infrastructure&lt;/a&gt;
so I thought I would do the same as we built out our office. Much like moving
into a new place, the first thing I always plan to have set up on move-in day is
internet. We did the same with our office. Before we even had any
real furniture, we made sure that we had a network connection.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/empty-office.jpg&#34; alt=&#34;picture before we had furniture&#34; /&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You may recognize that furniture from the
&lt;a href=&#34;https://blog.jessfraz.com/post/born-in-a-garage/&#34;&gt;garage&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For the office I really wanted our network infrastructure to be off-the-charts
good. Everyone knows shitty internet is a productivity &lt;em&gt;killer&lt;/em&gt;. Since I use
UniFi for my network setup at home, we used the same for the office.&lt;/p&gt;

&lt;p&gt;Here&amp;rsquo;s what we got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://store.ui.com/collections/routing-switching/products/usw-pro-48-poe&#34;&gt;48 Port Switch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://store.ui.com/products/unifi-dream-machine&#34;&gt;Dream Machine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://store.ui.com/collections/wireless/products/unifi-ap-ac-shd&#34;&gt;2 Wifi APs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://store.ui.com/collections/surveillance/products/unifi-protect-g4-pro-camera&#34;&gt;A couple of cameras&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://store.amplifi.com/products/amplifi-alien&#34;&gt;Amplifi Alien&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;the-switch&#34;&gt;The Switch&lt;/h2&gt;

&lt;p&gt;We have a hard line running from the 48-port switch to each section of desks.
I cabled all of this myself. As we grow, we will likely segment this off so each
desk has its own little 4- or 8-port network switch, but for now this works.&lt;/p&gt;

&lt;h3 id=&#34;the-cables&#34;&gt;The Cables&lt;/h3&gt;

&lt;p&gt;All the cables running to the desks are Cat7 from Monoprice. Every type of
cable has distance limits; for Ethernet cables, the rated upload/download speed
depends on the length of the run. Cat7 gets praised for its 100 Gbps speed,
but that only holds for runs up to 15 meters (slightly over 49 feet).
From 15 meters up to 50 meters, a Cat7 cable downgrades to 40 Gbps.
Beyond that, it drops to the same 10 Gbps speed as Cat6 and Cat6a; however, it
still retains its superior 600 MHz bandwidth. We use 100ft cables to the desks
and 50ft cables wherever they can reach, to maximize speed.&lt;/p&gt;
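&lt;p&gt;The distance tiers above can be summarized in a small lookup. This is a sketch based on the figures quoted in this post, not on the Cat7 cabling specification itself:&lt;/p&gt;

```python
# Cat7 rated speed by run length, using the distance tiers quoted
# above (a sketch from this post's figures, not the ISO/TIA spec).

def cat7_speed_gbps(meters):
    """Rated Cat7 speed in Gbps for a run of the given length."""
    if meters <= 15:
        return 100
    if meters <= 50:
        return 40
    return 10  # beyond 50 m, same speed as Cat6/Cat6a

print(cat7_speed_gbps(10))  # 100
print(cat7_speed_gbps(30))  # 40
print(cat7_speed_gbps(80))  # 10
```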

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cabling.jpg&#34; alt=&#34;picture of cabling&#34; /&gt;&lt;/p&gt;

&lt;h2 id=&#34;the-router&#34;&gt;The Router&lt;/h2&gt;

&lt;p&gt;The Dream Machine is acting as our gateway and controller. Before we got the
other 2 APs, it was our only access point and did a great job of that.&lt;/p&gt;

&lt;h3 id=&#34;the-access-points&#34;&gt;The Access Points&lt;/h3&gt;

&lt;p&gt;We have a large warehouse with a lot of square footage. While the Dream Machine
does have coverage to every corner, it&amp;rsquo;s nice to have a strong signal from
anywhere in the office. As we grow, we will have more and more devices on
our network, so having additional APs to handle that load is necessary.&lt;/p&gt;

&lt;h2 id=&#34;the-cameras&#34;&gt;The Cameras&lt;/h2&gt;

&lt;p&gt;The cameras will be installed outside our office so we can see what is going on
when we are not there. This is mainly for security.&lt;/p&gt;

&lt;h2 id=&#34;the-isolated-network-router&#34;&gt;The Isolated Network Router&lt;/h2&gt;

&lt;p&gt;Lastly, there is the AmpliFi Alien. Since AmpliFi is not part of
the rest of the UniFi fleet, it exposes its own network. I foresee
this becoming the network for our lab equipment or anything we don&amp;rsquo;t want on
the main network. It&amp;rsquo;s a very, very nice secondary network that is fully
isolated with Wi-Fi 6 capabilities and a max speed of 4804 Mbps. If only all
devices supported Wi-Fi 6!&lt;/p&gt;

&lt;p&gt;It&amp;rsquo;s been fun to build out the network infrastructure in our office and make
sure it scales while we scale out the team. We have hired some of
the brightest folks that I am happy to call coworkers.  This is just one very
small detail of our startup journey, but I am glad I got to share it. Our
previously empty office is now one with 20 desks, 2 kitchen tables and a
large, cozy couch area with whiteboards for brainstorming. Can&amp;rsquo;t wait to see
what the future brings!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/office-jan.jpg&#34; alt=&#34;picture of office now&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Born in a Garage</title>
                <link>https://blog.jessfraz.com/post/born-in-a-garage/</link>
                <pubDate>Mon, 02 Dec 2019 06:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/born-in-a-garage/</guid>
                    <description>&lt;p&gt;WE STARTED A COMPUTER COMPANY!! You have no idea how long I&amp;rsquo;ve been waiting to
say that! I guess some context would help&amp;hellip;
&lt;a href=&#34;https://www.linkedin.com/in/steve-tuck-02b4313/&#34;&gt;Steve Tuck&lt;/a&gt;,
&lt;a href=&#34;https://twitter.com/bcantrill&#34;&gt;Bryan Cantrill&lt;/a&gt;, and I officially started the
&lt;a href=&#34;https://oxide.computer&#34;&gt;Oxide Computer Company&lt;/a&gt;. Since then, we&amp;rsquo;ve been
working on closing up fundraising, getting an awesome office, and hiring!&lt;/p&gt;

&lt;p&gt;You are probably thinking &amp;ldquo;a computer company? that&amp;rsquo;s outrageous!&amp;rdquo;&amp;hellip; well it
is and it isn&amp;rsquo;t. Over the last year, I had the opportunity to spend a lot of
time talking with folks who are currently running workloads on premises. The
consensus from all my conversations has been that everyone setting up
infrastructure themselves is in a great deal of pain, and they have been
largely neglected by existing vendors. All these folks have very good
reasons for running on premises, including security, strategic concerns like
latency, specialized workloads, or the reality that the unit economics of
running at their scale in the cloud are unsustainable.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ve had the privilege of working on projects in my career that have had a
positive impact on a lot of people &amp;ndash; from Docker, to the Go programming
language, to Kubernetes. Over the course of talking to folks, I soon realized
that working on solving the pain for those running on premises would have a
huge amount of impact. Hyperscalers like Facebook, Google, and Microsoft have
what I like to call &amp;ldquo;infrastructure privilege&amp;rdquo; since they long ago decided they
could build their own hardware and software to fulfill their needs better than
commodity vendors. We are working to bring that same infrastructure privilege
to everyone else! This leads to better integration between the hardware and
software stacks, better power distribution, and better density. It&amp;rsquo;s even
better for the environment due to the energy consumption wins!&lt;/p&gt;

&lt;p&gt;I can&amp;rsquo;t think of a better problem space and group of folks to work with to
build a company. I’ve known both Bryan and Steve from the container conference
circuit and couldn’t pass up an opportunity to work with both of them. I&amp;rsquo;ve
also been in love with computers since I was a child… as proof of this (not
that it’s needed) here is a very early diary entry of mine.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/diary.jpg&#34; alt=&#34;diary-entry&#34; /&gt;&lt;/p&gt;

&lt;p&gt;In typical cliche computer company fashion we have been working out of my
garage. Bryan brought over his collection of computer manuals and even added
to my collection of floppy disks. It&amp;rsquo;s basically been a computer nerd’s
nirvana!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/garage.jpg&#34; alt=&#34;garage&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Since we now have funding, we will be moving to a bigger space. I can&amp;rsquo;t wait;
I fell in love the minute we saw it. It&amp;rsquo;s perfect for a nascent computer
company to grow.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/new-office.jpg&#34; alt=&#34;new-office&#34; /&gt;&lt;/p&gt;

&lt;p&gt;We will still be using the original garage to record episodes of
&lt;a href=&#34;https://oxide.computer/blog/categories/on-the-metal/&#34;&gt;On the Metal&lt;/a&gt;, our
podcast. You will not want to miss episodes of that! We have been lucky to
have some amazing conversations with technologists, the first being
&lt;a href=&#34;https://oxide.computer/blog/on-the-metal-1-jeff-rothschild/&#34;&gt;Jeff Rothschild&lt;/a&gt;,
and I can’t wait to share them with you!&lt;/p&gt;

&lt;p&gt;The past few months have been some of the most fun and rewarding of my entire
career and we are only getting started. If you want to read more about some of
the deep technical problems we will be solving check out my ACM Queue articles:
&lt;a href=&#34;https://queue.acm.org/detail.cfm?id=3349301&#34;&gt;Open Source Firmware&lt;/a&gt; and
&lt;a href=&#34;https://queue.acm.org/detail.cfm?id=3378404&#34;&gt;Opening up the Baseboard Management Controller&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We would love to have you join us in &lt;a href=&#34;https://oxide.computer/principles/&#34;&gt;our mission to “Kick butt, have fun, don&amp;rsquo;t
cheat, love our customers, change computing forever”&lt;/a&gt;
&amp;ndash; head over to &lt;a href=&#34;https://oxide.computer/careers/&#34;&gt;our careers page&lt;/a&gt; or if you
are a designer who codes send us a &lt;a href=&#34;https://design.oxide.computer&#34;&gt;pull request&lt;/a&gt;!
If you are currently running on premises and are interested in what we are
building, &lt;a href=&#34;https://oxide.computer&#34;&gt;join our mailing list&lt;/a&gt; and I will be in
touch!&lt;/p&gt;

&lt;p&gt;Thank you to all of our friends and family for their support of our endeavor.
I, personally, could not have done this without your advice, guidance, and
positivity during our fundraise and after. I&amp;rsquo;ve been waiting for the day I
could share with everyone what we&amp;rsquo;ve been up to and I am beyond excited to
build a company and a product people will love! Stay tuned!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Tales from Firmware Camp</title>
                <link>https://blog.jessfraz.com/post/tales-from-firmware-camp/</link>
                <pubDate>Tue, 10 Sep 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/tales-from-firmware-camp/</guid>
                    <description>&lt;p&gt;Last week I attended the &lt;a href=&#34;https://osfc.io/&#34;&gt;Open Source Firmware Conference&lt;/a&gt;.
It was amazing!
The talks, people, and overall feel of the conference really left me feeling
inspired and lucky to attend.&lt;/p&gt;

&lt;p&gt;Having been pushed to attend vendor conferences and trade shows throughout my
career for various jobs, it was so refreshing to have the chance to hang out
with folks from such a genuine community who really just want to help one
another.&lt;/p&gt;

&lt;p&gt;When the talks hit YouTube you should be sure to check them
out (&lt;a href=&#34;https://twitter.com/jessfraz/status/1169361763680210944&#34;&gt;I also&lt;/a&gt;
&lt;a href=&#34;https://twitter.com/jessfraz/status/1168925785211772929&#34;&gt;tweeted&lt;/a&gt;
&lt;a href=&#34;https://twitter.com/jessfraz/status/1168934537415593987&#34;&gt;about&lt;/a&gt;
&lt;a href=&#34;https://twitter.com/jessfraz/status/1168958435288915970&#34;&gt;a few&lt;/a&gt;
&lt;a href=&#34;https://twitter.com/jessfraz/status/1169030969535488002&#34;&gt;of them&lt;/a&gt;). What
I will focus on in this post is the last two days of the conference, which were
devoted to the hackathon.&lt;/p&gt;

&lt;p&gt;I had bought an &lt;a href=&#34;https://www.supermicro.com/en/products/motherboard/X10SLM-F&#34;&gt;X10SLM-F Supermicro board&lt;/a&gt;
off of eBay a few months ago that I wanted to run coreboot on. If you are
interested in finding a board that will work with coreboot, you should check
its &lt;a href=&#34;https://coreboot.org/status/board-status.html&#34;&gt;status on the status page&lt;/a&gt;.
I had
been talking to &lt;a href=&#34;https://twitter.com/_zaolin_&#34;&gt;Zaolin&lt;/a&gt; about wanting a board
to hack on and he recommended this one.&lt;/p&gt;

&lt;p&gt;At the hackathon, we decided to start with the BMC instead of the CPU BIOS.
This made for some
fun problems and definitely a lot of lessons learned. I had a
&lt;a href=&#34;https://www.dediprog.com/product/SF100&#34;&gt;Dediprog SF100&lt;/a&gt; flash programmer I brought to the
hackathon as well. Some people use Raspberry Pis as their flash programmer, but
the Dediprog was recommended to me and definitely came in handy. However, if
you want a cheaper alternative there are a bunch of ways you can skin that cat.&lt;/p&gt;

&lt;p&gt;To get started, we read the original binary off the SPI flash&amp;hellip; this was fairly
straightforward. We used the open-source &lt;a href=&#34;https://github.com/DediProgSW/SF100Linux&#34;&gt;&lt;code&gt;dpcmd&lt;/code&gt;&lt;/a&gt; tool
from Dediprog to do it, but you could also use &lt;a href=&#34;https://github.com/flashrom/flashrom&#34;&gt;&lt;code&gt;flashrom&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
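&lt;p&gt;For the curious, the read step is basically a one-line invocation: flashrom&amp;rsquo;s &lt;code&gt;-p&lt;/code&gt; flag selects the programmer driver and &lt;code&gt;-r&lt;/code&gt; reads the chip into a file. Here is a small Python sketch that builds the command (the output file name is made up):&lt;/p&gt;

```python
import shlex

def read_flash_cmd(programmer="dediprog", out_file="bmc-original.bin"):
    # flashrom: -p selects the programmer driver, -r reads the chip into a file.
    return ["flashrom", "-p", programmer, "-r", out_file]

cmd = read_flash_cmd()
print(shlex.join(cmd))  # flashrom -p dediprog -r bmc-original.bin
# With the programmer actually attached, you would run:
# subprocess.run(cmd, check=True)
```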

&lt;p&gt;While inspecting the original binary, we found the string &lt;code&gt;linux&lt;/code&gt; a few
times&amp;hellip; as well as a MAC address, boot commands, an IP address, and some other
interesting strings.&lt;/p&gt;
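&lt;p&gt;If you want to poke at a dump yourself, this inspection step is roughly what the Unix &lt;code&gt;strings&lt;/code&gt; tool does. A minimal Python sketch (the tiny fake dump below is made up for illustration):&lt;/p&gt;

```python
import re

def extract_strings(data, min_len=4):
    # Find runs of printable ASCII at least min_len bytes long,
    # roughly what the Unix strings(1) tool reports.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A tiny fake dump standing in for the 16MB image we read off the chip.
dump = b"\x00\xffbootcmd=bootm 0x20080000\x00\x01linux\x00\x9fMAC=00:25:90:aa:bb:cc\x00"
print(extract_strings(dump))
```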

&lt;p&gt;Before flashing on new firmware we also made sure the board actually booted the
BMC. We didn&amp;rsquo;t have access to any console, so we made do with an IPMI LAN port
and dnsmasq to work with DHCP. It worked and we got into the BMC user interface
over the web. If you&amp;rsquo;ve ever used a Supermicro server I probably don&amp;rsquo;t need to
tell you that it&amp;rsquo;s a piece of shit running a 2.6 Linux kernel on the BMC.
Getting to the UI proved the board actually booted with the original BMC firmware
so we began to break it by trying to run &lt;a href=&#34;https://github.com/openbmc/openbmc&#34;&gt;OpenBMC&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our board has an ASPEED 2400 BMC. We chose an OpenBMC configuration that would
give us a kernel supporting that chip. Thanks to &lt;a href=&#34;https://github.com/shenki&#34;&gt;Joel Stanley&lt;/a&gt; for all your work on the kernel patches for all the BMCs. We flashed the SPI flash with our new BMC firmware image and attempted to power on the board.&lt;/p&gt;

&lt;p&gt;I am going to interrupt the story here for a second to explain the pain
involved in this development cycle of writing firmware to SPI flash.
The SPI flash is 16MB &lt;em&gt;but&lt;/em&gt; requires erasing the
previous contents (4KB per sector) before you can even write.
An erase cycle takes up to 120ms per sector in the worst case, so anything you can do to
make this faster is very much worthwhile. Most flash programmers will not rewrite
a sector if its contents have not changed, which helps, but the loop is still super
painful coming from the workflow of a software developer.&lt;/p&gt;
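&lt;p&gt;To put numbers on that pain, a quick back-of-the-envelope calculation for a full-chip erase at those figures:&lt;/p&gt;

```python
# Worst-case time to erase the full 16MB SPI flash before a rewrite,
# at 4KB per sector and 120ms erase time per sector.
flash_bytes = 16 * 1024 * 1024
sector_bytes = 4 * 1024
erase_ms_per_sector = 120

sectors = flash_bytes // sector_bytes
total_seconds = sectors * erase_ms_per_sector / 1000
print(sectors, total_seconds)  # 4096 491.52 -- a bit over 8 minutes
```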

&lt;p&gt;Back to our board&amp;hellip; the OpenBMC image we flashed didn&amp;rsquo;t work. Again, a lot of this would have been easier to debug with
a serial console, but we didn&amp;rsquo;t have one and we didn&amp;rsquo;t have the spec to get a UART.
Our assumption from this failure was that the IPMI LAN port we
were using was not the port enabled in that particular
configuration.&lt;/p&gt;

&lt;p&gt;So we went to build a custom kernel&amp;hellip;&lt;/p&gt;

&lt;p&gt;With the help of Joel, we built a custom kernel completely separate from OpenBMC.
However, we flashed the kernel directly to the SPI
flash without even u-boot, LOL&amp;hellip; obviously this didn&amp;rsquo;t work.&lt;/p&gt;

&lt;p&gt;Then we decided to try something easier and had a hunch a different
configuration in the OpenBMC project would have the right port enabled. We built
the image for that and flashed it onto the SPI flash. This was arguably faster
than making our own OpenBMC configuration with our new kernel.&lt;/p&gt;

&lt;p&gt;It also didn&amp;rsquo;t work, but here we got into a bit more trouble. After this point we
could no longer write to the SPI flash. The problem was the BMC was interfacing
with the SPI flash and we couldn&amp;rsquo;t take over the ability to write to it. The
SPI flash only allows one device to interact with it at a time. We also could
not flash the SPI flash without the board powered on because the entire board
was drawing power, which was too much for our flash programmer to handle. &lt;em&gt;This&lt;/em&gt;
is a huge pain in the ass. It turns out it is &lt;em&gt;such a pain in the ass&lt;/em&gt; that people
have made solutions for it.&lt;/p&gt;

&lt;p&gt;Fortunately for us, &lt;a href=&#34;https://github.com/felixheld&#34;&gt;Felix Held&lt;/a&gt; had just
given a talk on this pain the day before and he was also in the room. He had one
more prototype of his tool, &lt;a href=&#34;https://github.com/felixheld/qspimux&#34;&gt;qspimux&lt;/a&gt;,
and we got to use it on our board.&lt;/p&gt;

&lt;p&gt;Qspimux allows access to a real SPI flash chip to be multiplexed
between the target and a programmer that also controls the multiplexer. This
way we could flash the SPI flash with the board powered off.&lt;/p&gt;

&lt;p&gt;To get his tool installed we had to de-solder the SPI flash and
solder it back on after getting the qspimux parts attached. Props to &lt;a href=&#34;https://github.com/edwin-peer&#34;&gt;Edwin
Peer&lt;/a&gt; for his awesome soldering skills here.
Here is a live action shot&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;It&amp;#39;s been a journey, desoldered the flash for the BMC now using Felix Held&amp;#39;s qspimux&amp;hellip; so the BMC doesn&amp;#39;t interfere with the flash, so we can actually flash it! &lt;a href=&#34;https://t.co/M2mezEMeLa&#34;&gt;https://t.co/M2mezEMeLa&lt;/a&gt; &lt;a href=&#34;https://t.co/iL1xBQzAwh&#34;&gt;pic.twitter.com/iL1xBQzAwh&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1170074325895925760?ref_src=twsrc%5Etfw&#34;&gt;September 6, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;After finishing this, we could write to the SPI flash again. At this point we
were trying to re-flash the original Supermicro firmware onto the board, just to
make sure we didn&amp;rsquo;t mess anything up along the way. This
proved to be more difficult than we thought. We got the firmware to write to
the flash but the board still wasn&amp;rsquo;t working. We verified with the oscilloscope
that data was indeed leaving the MOSI (master-out-slave-in) pin and the clock
was working on the flash.&lt;/p&gt;

&lt;p&gt;Then I tried to read the firmware back from the SPI flash chip to make sure it was
indeed our original image. We suspected that maybe we were writing to the
device too quickly, and that was the case: the two firmware images did not
match. I then wrote the firmware to the SPI flash on the slowest setting just
to be sure. This time the image we wrote and the image we
read back matched our original firmware image. At this point everything was
kosher and we knew the image on the SPI flash chip was the same as the
original we pulled off the board the day before.&lt;/p&gt;

&lt;p&gt;At this point the board was still not booting the original firmware image. This
is when we had to go home and firmware camp was over. Overall, this was a great
learning experience. I would have been sad had everything gone smoothly because
we would not have learned as much about how to debug all the components of the
SPI flash and board. I definitely have not given up on this board and will
continue down this rabbit hole until it has open source firmware on the BMC and
an open source BIOS for the CPU.&lt;/p&gt;

&lt;p&gt;I would like to thank everyone at the Open Source Firmware Conference for
making this a truly amazing week and specifically those who helped with the
crazy hackathon project: &lt;a href=&#34;https://github.com/kc8apf&#34;&gt;Rick Altherr&lt;/a&gt;,
&lt;a href=&#34;https://github.com/edwin-peer&#34;&gt;Edwin Peer&lt;/a&gt;,
&lt;a href=&#34;https://github.com/shenki&#34;&gt;Joel Stanley&lt;/a&gt;,
&lt;a href=&#34;https://github.com/felixheld/&#34;&gt;Felix Held&lt;/a&gt;,
&lt;a href=&#34;https://github.com/bcantrill&#34;&gt;Bryan Cantrill&lt;/a&gt;,
Jacob Yundt (who I can&amp;rsquo;t seem to find online),
&lt;a href=&#34;https://github.com/jclulow&#34;&gt;Joshua M. Clulow&lt;/a&gt;, and everyone else I am
forgetting who gave us wires, clips, cords, and whatever else we
needed to get this thing going! It truly takes a village.&lt;/p&gt;

&lt;p&gt;I cannot wait for the next OSFC, but until then I will work on playing with
a logic analyzer to see if what the BMC is reading from the SPI flash is even
the right data ;)&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Transactional Memory and Tech Hype Waves</title>
                <link>https://blog.jessfraz.com/post/transactional-memory-and-tech-hype-waves/</link>
                <pubDate>Wed, 14 Aug 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/transactional-memory-and-tech-hype-waves/</guid>
                    <description>

&lt;p&gt;At lunch today I learned about Transactional Synchronization Extensions (TSX)
which is an implementation of transactional memory. The conversation started as a rant
about why transactional memory is bad but then it evolved into how this concept
even came to be and how it even got implemented if it&amp;rsquo;s such a terrible idea.&lt;/p&gt;

&lt;h2 id=&#34;what-is-transactional-memory&#34;&gt;What is transactional memory?&lt;/h2&gt;

&lt;p&gt;First let&amp;rsquo;s start by going over what transactional memory is.&lt;/p&gt;

&lt;p&gt;You might be familiar with a deadlock. A deadlock occurs when a process or thread is waiting
for a resource held by another process, which is in turn waiting
for a resource the first one holds. You can think of this as P1 needs R1
and has R2, while in turn P2 needs R2 and has R1. That is a deadlock.&lt;/p&gt;
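&lt;p&gt;If you want to see that P1/R1 dance in code, here is a minimal Python sketch; the timeout stands in for the &amp;ldquo;forever&amp;rdquo; that a real deadlock would block:&lt;/p&gt;

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()
holding_first = threading.Barrier(2)  # both processes hold one resource
gave_up = threading.Barrier(2)        # keep holding until both have given up
results = {}

def process(name, mine, theirs):
    with mine:                         # grab the resource this process "has"
        holding_first.wait()           # wait until the other holds its resource too
        # Try to grab the other resource; a real deadlock blocks forever,
        # so we use a timeout to let the demo finish.
        results[name] = theirs.acquire(timeout=0.2)
        gave_up.wait()

# P1 needs R1 and has R2, while P2 needs R2 and has R1.
p1 = threading.Thread(target=process, args=("P1", r2, r1))
p2 = threading.Thread(target=process, args=("P2", r1, r2))
p1.start(); p2.start()
p1.join(); p2.join()
print(results)  # both False: neither process made progress
```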

&lt;p&gt;Transactional memory removes the possibility of getting a deadlock and replaces
it with what is known as a livelock. A livelock happens when processes are constantly
changing state with regard to one another, but neither of them moves forward or
progresses in any way. Imagine you are walking down the street while another
person is heading towards you. You move to the right to avoid running into them
as they also move in that direction to avoid running into you. You both then
move to the other side so as to not run into each other. This repeats over and
over again with no progress forward since both people are moving in the
same direction. That is a livelock. With transactional memory you no longer
have deadlocks but livelocks.&lt;/p&gt;
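&lt;p&gt;The hallway dance is easy to simulate; here is a tiny Python sketch of two walkers mirroring each other forever:&lt;/p&gt;

```python
# Two walkers each dodge to the opposite lane of where they last saw the
# other: plenty of state changes, zero progress -- that is a livelock.
a_lane, b_lane = 0, 0              # both start in the same lane (0 or 1)
for step in range(10):
    a_next = 1 - b_lane            # A moves away from where B is
    b_next = 1 - a_lane            # B moves away from where A is
    a_lane, b_lane = a_next, b_next
print(a_lane == b_lane)  # True: after 10 dodges they still block each other
```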

&lt;p&gt;Why is this? Well, transactional memory works very similarly to database
transactions. A transaction is a group of operations that can execute and
commit changes as long as there are no conflicts. If there is a conflict, it
will start from state zero and try to run again until there are no conflicts.
Therefore, until there is a successful commit of a run, the outcome of any
operation is speculative.&lt;/p&gt;
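&lt;p&gt;Intel implements this in hardware, but the commit-or-retry loop itself can be sketched in software. Here is a toy version (the names are mine, not Intel&amp;rsquo;s API):&lt;/p&gt;

```python
import threading

class VersionedCell:
    # A toy optimistic "transaction" target: read a snapshot, compute,
    # then commit only if nobody else committed in between.
    def __init__(self, value=0):
        self.value, self.version = value, 0
        self._lock = threading.Lock()

    def snapshot(self):
        with self._lock:
            return self.value, self.version

    def try_commit(self, new_value, seen_version):
        with self._lock:
            if self.version != seen_version:
                return False           # conflict: abort, caller starts over
            self.value, self.version = new_value, self.version + 1
            return True

def transact(cell, fn):
    while True:                        # retry from state zero until a clean commit
        value, version = cell.snapshot()
        if cell.try_commit(fn(value), version):
            return

cell = VersionedCell()
threads = [threading.Thread(target=transact, args=(cell, lambda v: v + 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)  # 8: every increment eventually committed
```

Until a commit succeeds, each transaction's work is thrown away and redone, which is exactly why the outcome of any single run is speculative.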

&lt;p&gt;Intel&amp;rsquo;s implementation of TSX behaves in such a way that when a transaction
aborts due to a hardware exception, it
does not fire typical exceptions. Instead, it invokes a user-specified abort handler
without informing the underlying OS.  This seems like it might lead to some
really bad behavior&amp;hellip; we should probably know wtf is going on
in our system at any given point in time.&lt;/p&gt;

&lt;h2 id=&#34;side-channel-attacks&#34;&gt;Side-Channel Attacks&lt;/h2&gt;

&lt;p&gt;So we know the outcome of any operation in a transaction is speculative.
Hmmm, speculative you say&amp;hellip; I am reminded of Spectre and Meltdown.
The kernel&amp;rsquo;s defense against Spectre and Meltdown
was Kernel Page Table Isolation (KPTI). Instead, let&amp;rsquo;s focus on what you can break with Spectre and Meltdown, which is Kernel Address Space Layout Randomization (KASLR). KASLR randomizes
the address layout on each boot. This raises the bar for an exploit by forcing
an attacker to guess where the code and data are located in the address space.
The probability of a successful attack then becomes the probability of an information
leak multiplied by the probability of a memory corruption vulnerability.&lt;/p&gt;

&lt;p&gt;However, KASLR can be defeated without an information leak by instead using
the translation lookaside buffer (TLB) and a timing attack. A TLB
is a memory cache that reduces the time taken to access a user memory location.
It keeps recent translations of virtual memory to physical memory.&lt;/p&gt;

&lt;p&gt;In the &lt;a href=&#34;https://gts3.org/assets/papers/2016/jang:drk-ccs.pdf&#34;&gt;DrK paper&lt;/a&gt;, the
authors describe an attack that uses the behavior of TSX as a &lt;em&gt;feature&lt;/em&gt; of the
exploit. As described above, TSX aborts a transaction without leaving any trace as
to why it was aborted. So in DrK, the
authors use TSX to trigger a bunch of access violations on the privileged
address space inside transactions and turn that into knowledge of the mapping and executable status
of the address space,
&lt;em&gt;without&lt;/em&gt; even generating a page fault.&lt;/p&gt;

&lt;p&gt;The point I am making with this example is that transactional memory and its
implementation, TSX, are a bad idea.&lt;/p&gt;

&lt;p&gt;But who could have possibly seen this as a bad idea?&lt;/p&gt;

&lt;h2 id=&#34;rewind-to-2008&#34;&gt;Rewind to 2008&lt;/h2&gt;

&lt;p&gt;Concurrency is the biggest hype in town. This comes from a lot of different
things but can be found in an article, &lt;a href=&#34;https://dl.acm.org/citation.cfm?id=1378724&#34;&gt;Technical perspective: Transactions are
tomorrow&amp;rsquo;s loads and stores&lt;/a&gt;,
in Communications of the ACM (CACM). It seems at the time, this craze was
started out of academia. Some practitioners, Bryan Cantrill and Jeff
Bonwick, wrote rebuttals in the name of &amp;ldquo;please dear god do not make
transactional memory A Thing&amp;rdquo;.
That can be seen in Bryan&amp;rsquo;s blog post,
&lt;a href=&#34;http://dtrace.org/blogs/bmc/2008/11/03/concurrencys-shysters/&#34;&gt;Concurrency’s Shysters&lt;/a&gt;,
and the follow-up ACM Queue article, &lt;a href=&#34;https://queue.acm.org/detail.cfm?id=1454462&#34;&gt;Real-world Concurrency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Clearly, in 2008 there was a division between academia and
practitioners.&lt;/p&gt;

&lt;h2 id=&#34;fastforward-to-2012&#34;&gt;Fast-forward to 2012&lt;/h2&gt;

&lt;p&gt;Intel &lt;a href=&#34;https://software.intel.com/en-us/blogs/2012/02/07/transactional-synchronization-in-haswell&#34;&gt;announced TSX in February 2012&lt;/a&gt;, first shipping it with Haswell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EDIT:&lt;/strong&gt; It was pointed out that &lt;a href=&#34;https://hydraconf.com/2019/talks/2jix5mst7iduyp9linqhfj/&#34;&gt;Azul shipped transactional memory in 2006&lt;/a&gt;. Thanks &lt;a href=&#34;https://twitter.com/davidcrawshaw/status/1161827880608735232&#34;&gt;@davidcrawshaw&lt;/a&gt;!&lt;/p&gt;

&lt;h2 id=&#34;why-is-this-interesting&#34;&gt;Why is this interesting?&lt;/h2&gt;

&lt;p&gt;Hype cycles come and go, and if you spend any time in our industry you tend to
become pretty numb to them. Seeing through the hype has always been a joy of
mine, and I find it interesting that the vectors through which hype travels have
changed drastically over time.&lt;/p&gt;

&lt;p&gt;With transactional memory, the hype began in academia through academic
conferences and articles in journals. Before the 2000s even, hype might have
spread through magazines like Byte. Today, we have multiple channels for hype
through social networks: Twitter, Reddit, blogging, YouTube, GitHub,
Hacker News (slashdot before
that), and others.&lt;/p&gt;

&lt;p&gt;Hype seems to travel through the unconscious need of people to connect to
others. Being a part of movements, like open source projects and a shared sense
of need, allows people to be a part of something bigger than just themselves.&lt;/p&gt;

&lt;p&gt;Twitter is fascinating due to the way it hosts so many subcultures. One of my
favorite examples of this is Canadian twitter where everyone is polite and nice
to each other. There are also vehement subcultures around the latest technology
trends. The way technology can spread has turned from a place where very few
people have a voice (through getting papers accepted at conferences and in
journals) to social networks where everyone has a voice. My hope is that the
loudest of the voices are the ones used to build technology for the best
causes.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ll leave you with that, hope you enjoyed and learned something from
my rather weird example of a technology hype wave.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Business Executive&#39;s Guide to Kubernetes</title>
                <link>https://blog.jessfraz.com/post/the-business-executives-guide-to-kubernetes/</link>
                <pubDate>Tue, 23 Jul 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-business-executives-guide-to-kubernetes/</guid>
                    <description>

&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;I thought it would be fun to write a post aimed at business leaders making technology decisions for their
organizations. There is a lot of hype in our field and little truth behind it.&lt;/p&gt;

&lt;p&gt;Like most things I write about, this started from an idea I had on Twitter:&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;has anyone ever done technical breakdowns of these products in Gartner reports that are actually just trash, is this something you&amp;#39;d read..?&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1153866738452221952?ref_src=twsrc%5Etfw&#34;&gt;July 24, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;This post will cover some hard truths of Kubernetes and what it means for your organization and business.
You might have heard the term &amp;ldquo;Kubernetes&amp;rdquo; and you might have been led to believe that this will solve all the
infrastructure pain for your organization. There is some truth to that, which will not be the focus of this post. To get
to the state of enlightenment with Kubernetes, you need to first go through some hard challenges.
Let&amp;rsquo;s dive into some of these hard truths.&lt;/p&gt;

&lt;h2 id=&#34;stateful-data-is-hard&#34;&gt;Stateful Data is Hard&lt;/h2&gt;

&lt;p&gt;Kubernetes is not to be used for stateful data. There has been a lot of work done in this area
but it is still not sufficient. For the more technical members of our audience, I direct you to
&lt;a href=&#34;https://github.com/kubernetes/kubernetes/issues/67250&#34;&gt;exhibit A&lt;/a&gt;. The linked issue goes over
problems when a &amp;ldquo;StatefulSet&amp;rdquo; hits an error during deployment or upgrade. This can lead to data
loss or corruption, since Kubernetes will need manual intervention
to fix the state of the deployment. It could even get to the point where the only recommended fix is to &lt;em&gt;delete the state&lt;/em&gt;.
What does this mean for your business? Well, if you lose or corrupt your data it could mean a lot of different things depending
on what the data was. If the data was your customer database of new account signups, well you might have just lost the data for
your new customers. If you are an e-commerce site, it might have been your latest sale. If you are in banking or investments,
it might have been data accounting for the movement of capital.&lt;/p&gt;

&lt;p&gt;Databases holding valuable information like the examples above should always have mechanisms for replication, which is not
something Kubernetes is going to solve for you. While you might choose to use Kubernetes for stateful data, you should always remember
to handle replicating that data in case there is a failure.&lt;/p&gt;

&lt;h2 id=&#34;exposed-dashboards&#34;&gt;Exposed Dashboards&lt;/h2&gt;

&lt;p&gt;A lot of organizations are dipping their toes into Kubernetes but forgetting to disable or secure the dashboard for the control plane
from the rest of the internet. The control plane dashboard is a website you can navigate to that controls your cluster.
Leaving the dashboard exposed to the public can have huge implications on your business. If your dashboard is exposed, &lt;em&gt;anyone&lt;/em&gt;
could find your dashboard and then control it. Finding an exposed dashboard is not that difficult if you know what you are looking for
and have access to a site like &lt;a href=&#34;https://www.shodan.io/&#34;&gt;Shodan&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What would the finder of the dashboard control? Everything running in Kubernetes. If your website is running in Kubernetes, it
means someone else could make your website go offline, someone else could replicate your website but send all sales and monetary
transactions to their own bank account, someone else could breach your customers&amp;rsquo; data, or someone else could hold your
infrastructure up for ransom and not give you back control of your website
unless you pay what they demand. These are just a few things I thought of off the top of my head, but you could probably think of more.&lt;/p&gt;

&lt;p&gt;There is a whole other aspect of this: if the breach goes public, then you have a huge public relations
problem on your hands, which for a public company might even have implications for your stock price
if shareholders lose trust from the news of your company&amp;rsquo;s technical incompetence and decide to sell their shares.&lt;/p&gt;

&lt;p&gt;If it&amp;rsquo;s not the dashboard being exposed, it might be your API server or another service. There are a few
options for handling this particular failure mode.&lt;/p&gt;

&lt;h2 id=&#34;upgrading-your-kubernetes-version-seems-to-always-break-something&#34;&gt;Upgrading your Kubernetes version seems to always break something&lt;/h2&gt;

&lt;p&gt;I&amp;rsquo;ve heard from a bunch of people that whenever they need to upgrade their production environment of Kubernetes it always leads to something breaking.
It&amp;rsquo;s recommended that you have &lt;a href=&#34;https://twitter.com/kelseyhightower/status/1138586423978672129&#34;&gt;more than one cluster in production&lt;/a&gt; for this very reason.
Then, if one cluster in production is broken from being upgraded, the other cluster that has not been upgraded is still running the technical parts of
your business. This is very good from a reliability point of view.
It means reaching your website has a &amp;ldquo;plan B&amp;rdquo; where if the &amp;ldquo;plan A&amp;rdquo; infrastructure has a problem, everything
will be redirected to &amp;ldquo;plan B&amp;rdquo; and your customers will not even know the difference. As a downside, your operations teams
now have to figure out ways for managing and maintaining two clusters (more work for them) but your business is
in a better place for it.&lt;/p&gt;

&lt;p&gt;The other option is you just don&amp;rsquo;t upgrade. However, if you don&amp;rsquo;t upgrade, your infrastructure might be vulnerable to
security threats and then we are back in the situation above where you might have data breached by hackers, a hostile takeover of your
website, and then a huge public relations scandal leading to investors and shareholders selling their stock.&lt;/p&gt;

&lt;h2 id=&#34;steep-learning-curve-complexity-is-king-and-operational-pain&#34;&gt;Steep learning curve, complexity is king, and operational pain&lt;/h2&gt;

&lt;p&gt;A lot of the criticism I hear about Kubernetes is how complex it is. For your organization, this means
your staff are going to have to surmount this very steep learning curve. As with learning anything, things only
get worse before they get better. So get ready for a lot of production outages and failovers as your team starts to
learn the ins and outs of this overly complex system. What does this mean for your website and customers? Availability will
be spotty for a while, but we hope &lt;em&gt;eventually&lt;/em&gt; it will even out. Lastly, to quote someone very wise (send a pull request if you know who!), &amp;ldquo;Hope is not a strategy.&amp;rdquo;&lt;/p&gt;

&lt;h2 id=&#34;managed-kubernetes&#34;&gt;Managed Kubernetes&lt;/h2&gt;

&lt;p&gt;Now you are probably thinking, &amp;ldquo;my cloud provider said they&amp;rsquo;d take away all the pain you just described by selling
me their managed Kubernetes.&amp;rdquo; That is indeed the dream. However, it is not reality. Having worked for some cloud providers,
I have seen the pain customers still go through trying to learn the patterns Kubernetes implements and applying
those patterns to their existing applications. This means your teams will still have to handle the steep learning curve. Just
because it&amp;rsquo;s managed does not mean that your application&amp;rsquo;s uptime and availability are covered. That is still on &lt;em&gt;your&lt;/em&gt; team.
Customers being able to use your website on the internet is your team&amp;rsquo;s responsibility and understanding
Kubernetes is still required for that. Every line of YAML written and debugged to get your website running is time
taken away from building what your business actually does. Unless of course you are in the business
of selling Kubernetes; if so, carry on.&lt;/p&gt;

&lt;p&gt;You will also want to be sure your cloud provider did not fall prey to the pitfalls I outlined above as well.
You should make sure your cluster is fully isolated from other customers&amp;rsquo; clusters. The way the managed Kubernetes offerings
work is by the cloud provider managing the &amp;ldquo;master&amp;rdquo; for your cluster. This means all the data for your cluster is managed by
your cloud provider. If your data is not properly isolated from all the other customers&amp;rsquo; data, it means that
if the cloud provider gets breached by means of a different customer&amp;rsquo;s cluster, then your data has been breached as well.
Then, we are in the scenario where a hacker owns your website, can hold it for ransom, or cause a very public incident
for your company that you will need to handle.&lt;/p&gt;

&lt;p&gt;This was just a brief overview and I am not trying to throw shade. I merely wanted to phrase some of these prevalent problems
in a way that makes people running a business more aware of the impact adopting this technology might have. That said, if your organization does tackle these difficulties (and others I didn&amp;rsquo;t mention), then you will likely see
a great impact on developer productivity and faster feature releases and deployments (among all the other wins Kubernetes can provide).
Just be aware that with the good comes some bad.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Linux Observability with BPF</title>
                <link>https://blog.jessfraz.com/post/linux-observability-with-bpf/</link>
                <pubDate>Wed, 10 Jul 2019 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/linux-observability-with-bpf/</guid>
                    <description>&lt;p&gt;Below is the foreward for the new book on
&lt;a href=&#34;http://shop.oreilly.com/product/0636920242581.do&#34;&gt;Linux Observability with BPF&lt;/a&gt;
by two of my favorite programmers,
&lt;a href=&#34;https://twitter.com/calavera&#34;&gt;David Calavera&lt;/a&gt; and &lt;a href=&#34;https://twitter.com/fntlnz&#34;&gt;Lorenzo Fontana&lt;/a&gt;!
I was pretty stoked about getting to write the foreward, I asked
O&amp;rsquo;Reilly if I could publish it on my blog as well and they said yes. I hope you all check out this
book and share what you&amp;rsquo;ve built after!&lt;/p&gt;

&lt;p&gt;As a programmer (and a self-confessed dweeb) I like to stay up to date on the latest additions
to various kernels and research in computing. When I first played around with Berkeley Packet
Filters (BPF) and eXpress Data Path (XDP) in Linux I was in love. This is such a NICE THING
and I am glad this book is putting BPF and XDP on the center stage so more people can start
using it in their projects.&lt;/p&gt;

&lt;p&gt;Let me go into detail about my background and why I fell in love with these kernel interfaces&amp;hellip;
I worked as a Docker core maintainer, along with David (one of the brilliant authors of this book).
Docker, if you are not familiar, shells out to iptables for a lot of the filtering and routing logic for containers.
The first patch I ever made to Docker was fixing a problem where a version of iptables on CentOS didn’t have the same
command-line flags so writing to iptables was failing. There were a lot of weird issues like this and anyone
who has ever shelled out to a tool in their software can likely commiserate. Not only that, but having
thousands of rules on a host is not what iptables was built for, and it has performance side effects because of it.&lt;/p&gt;

&lt;p&gt;Then I heard about BPF and XDP. This was like music to my ears.
No longer would my scars from iptables bleed with another bug! The kernel community
is even working on
&lt;a href=&#34;https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables/&#34;&gt;replacing iptables with BPF&lt;/a&gt;!
Hallelujah! &lt;a href=&#34;https://cilium.io/&#34;&gt;Cilium&lt;/a&gt;, a container networking project,
uses BPF and XDP for its internals as well.&lt;/p&gt;

&lt;p&gt;But that’s not all! BPF can do so much more than just fulfilling the iptables use case.
With BPF, you can trace any syscall or kernel function as well as any user-space program.
&lt;a href=&#34;https://github.com/iovisor/bpftrace&#34;&gt;bpftrace&lt;/a&gt; gives users dtrace-like abilities in Linux from their command line.
You can trace all the files that are being opened and the process calling the open,
count the syscalls by the program calling them, trace the OOM killer, and more… the world is your oyster!
XDP and BPF are also used in &lt;a href=&#34;https://blog.cloudflare.com/l4drop-xdp-ebpf-based-ddos-mitigations/&#34;&gt;Cloudflare&lt;/a&gt; and
&lt;a href=&#34;https://cilium.io/blog/2018/11/20/fb-bpf-firewall/&#34;&gt;Facebook’s&lt;/a&gt; load balancer to prevent DDoS attacks. I won’t spoil why
XDP is so great at dropping packets because you will learn about that in the XDP and networking chapters of this book
(&lt;em&gt;cough&lt;/em&gt; you don&amp;rsquo;t even allocate a kernel struct &lt;em&gt;cough&lt;/em&gt;)!&lt;/p&gt;
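
&lt;p&gt;To make that concrete, here are a couple of standard bpftrace one-liners (assuming you have bpftrace installed and root privileges) that do exactly the tracing described above:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# Trace files being opened along with the process opening them.
bpftrace -e &#39;tracepoint:syscalls:sys_enter_openat { printf(&amp;quot;%s %s\n&amp;quot;, comm, str(args-&amp;gt;filename)); }&#39;

# Count syscalls by the program calling them.
bpftrace -e &#39;tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }&#39;
&lt;/code&gt;&lt;/pre&gt;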

&lt;p&gt;Lorenzo, another of the authors, I have had the privilege of knowing through the
Kubernetes community. His tool, &lt;a href=&#34;https://github.com/iovisor/kubectl-trace&#34;&gt;kubectl-trace&lt;/a&gt;, allows users to easily run their custom tracing programs
inside their Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Personally, my favorite use case for BPF has been writing custom tracers to prove to other
folks that the performance of their software was not up to par or that it was making a really
expensive number of syscalls. Never underestimate the power of proving someone wrong with hard data.
Don’t fret, this book will walk you through writing your first tracing program so you can do the same ;).
The beauty of BPF lies in the fact that before now, other tools used lossy queues to send sample sets to user
space for aggregation, whereas BPF is great for production since it allows for constructing histograms and filtering
right at the source of events.&lt;/p&gt;
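
&lt;p&gt;As a sketch of that idea (again assuming bpftrace), this one-liner builds a histogram of read() sizes entirely in the kernel, so only the aggregated histogram is copied to user space:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# Summarize requested read() sizes as a power-of-two histogram, in-kernel.
bpftrace -e &#39;tracepoint:syscalls:sys_enter_read { @bytes = hist(args-&amp;gt;count); }&#39;
&lt;/code&gt;&lt;/pre&gt;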

&lt;p&gt;I have spent half of my career working on tools for developers. The best tools allow autonomy in their interfaces
for developers like you to use them for things even the authors never imagined. To quote Richard Feynman,
“I learned very early the difference between knowing the name of something and knowing something.”
Until now you might have only known the name BPF and that it might be useful to you. What I love about this book is
that it gives you the knowledge you need to be able to create all new tools using BPF.&lt;/p&gt;

&lt;p&gt;The best books don’t confine readers into a box and that is why I love this one in particular.
After reading and following the exercises, you will be empowered to use BPF like a superpower.
You can keep it in your toolkit for when it’s most needed and most useful.
You won’t just learn BPF; you will understand it. This book is a path to open your mind
to the possibilities of what you can build with BPF.&lt;/p&gt;

&lt;p&gt;This developing ecosystem is very exciting! I hope it will grow even larger
as more people start wielding BPF&amp;rsquo;s power. I am excited to learn about what the readers of
this book end up building, whether it&amp;rsquo;s a script to track down a crazy software bug or a
custom firewall or even &lt;a href=&#34;https://lwn.net/Articles/759188/&#34;&gt;infrared decoding&lt;/a&gt;! Be sure to let us all know what you built!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Corollary to the Hard Thing about Hard Things</title>
                <link>https://blog.jessfraz.com/post/corollary-to-the-hard-thing-about-hard-things/</link>
                <pubDate>Wed, 15 May 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/corollary-to-the-hard-thing-about-hard-things/</guid>
                    <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Can I get an encore, do you want more&amp;rdquo; - Jay-Z&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I recently read Ben Horowitz’s book, &lt;a href=&#34;https://www.amazon.com/Hard-Thing-About-Things-Building-ebook/dp/B00DQ845EA/ref=sr_1_1&#34;&gt;The Hard Thing about Hard Things&lt;/a&gt;. It’s really eye-opening and creates
a level of empathy in the reader for leaders who make hard decisions every day. It covers everything from how to know
your company is toxic to how to do layoffs. Ben starts each chapter with a rap quote, so I did the same above ;) obviously I chose Jay-Z but I also love &lt;a href=&#34;https://blog.jessfraz.com/post/what-would-2pac-do/&#34;&gt;Tupac, as is shown by my first blog post ever&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have a corollary to this: power dynamics. I, personally, have seen and experienced what it is like being a
leader when no one really has a full view of who you are as a person. I try to always be authentic and personable,
but the fact of the matter is: we are all humans and we all have off days.&lt;/p&gt;

&lt;p&gt;Most people only get a view of who I am through Twitter, but that is not fully who I am. I think that is the case
for most people on that website. For executives of companies or leaders of large teams, the same holds true: you only see a small subset,
through very limited communication, of who they really are.&lt;/p&gt;

&lt;p&gt;At work, I like to move fast and get things done. This may result in abrupt communications, which is not typical
of how I am on the internet. Even more so, if I were to give feedback or an opinion on something, someone might
feel it with the heat of a thousand suns and think it is aggressive, even if that is not how I intended it.
The best we can do is apologize and grow when we fuck up.&lt;/p&gt;

&lt;p&gt;Another example would be if someone in a position of power asks someone to do something.
The person without the power might think they have to do it a certain way and can&amp;rsquo;t push back.
We can try to solve this by always making an effort to ask for others&amp;rsquo; opinions and feedback.&lt;/p&gt;

&lt;p&gt;I really do not enjoy when people hero worship me and I do not think people should hero worship anyone.
We are all humans and we are all flawed in our own ways. Anyone who believes someone to be perfect will
soon find that they are not. This holds true for anyone: executives of companies, senior engineers,
tennis champions, and Hollywood stars.&lt;/p&gt;

&lt;p&gt;Leave room for people to make mistakes, because they will. What
truly matters is how a person grows after making a mistake. It helps to make it very clear that you will make mistakes
and welcome feedback. When someone discovers a mistake you&amp;rsquo;ve made, try to treat it as a gift. Allow for
failure and growth from failure in others and they will do the same for you as well.&lt;/p&gt;

&lt;p&gt;If you are a leader and you empathize with this, I think this problem can also be solved with time.
You need time for people to understand how you work and time to grow trust. As long as you continue
to be transparent about mistakes over time and grow from them, trust will follow.&lt;/p&gt;

&lt;p&gt;It’s hard to see a power dynamic at
play if you are in it and hold the power. Power dynamics are in the eye of the beholder. We can all
try to be conscious of this and patient as the vines of trust grow around us.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Why open source firmware is important for security</title>
                <link>https://blog.jessfraz.com/post/why-open-source-firmware-is-important-for-security/</link>
                <pubDate>Wed, 08 May 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/why-open-source-firmware-is-important-for-security/</guid>
                    <description>

&lt;p&gt;I gave a talk recently at GoTo Chicago on &lt;a href=&#34;https://docs.google.com/presentation/d/1Qees556dT9LNoooEdf6En8V82L3V-_N8LbPuyGihZeI/edit?usp=sharing&#34;&gt;Why open source firmware is important&lt;/a&gt; and I thought it would be nice to also write a blog post with my findings. This post will focus on why open source firmware is important for security.&lt;/p&gt;

&lt;h2 id=&#34;privilege-levels&#34;&gt;Privilege Levels&lt;/h2&gt;

&lt;p&gt;In your typical “stack” today you have the various levels of privileges.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Ring 3 - Userspace:&lt;/strong&gt; has the least amount of privileges, short of there being a sandbox in userspace that is restricted further.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ring 0 - Kernel:&lt;/strong&gt; The operating system kernel, for open source operating systems you get visibility into the code behind this.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ring -1 - Hypervisor:&lt;/strong&gt; The virtual machine monitor (VMM) that creates and runs virtual machines. For open source hypervisors like Xen, KVM, bhyve, etc you have visibility into the code behind this.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ring -2 - System Management Mode (SMM), UEFI kernel:&lt;/strong&gt; Proprietary code, more on this &lt;a href=&#34;#ring-2-smm-uefi-kernel&#34;&gt;below&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ring -3 - Management Engine:&lt;/strong&gt; Proprietary code, more on this &lt;a href=&#34;#ring-3-management-engine&#34;&gt;below&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The negative rings were made up because there was no other way to express something with more privileges than Ring 0.&lt;/p&gt;

&lt;p&gt;From the above, it’s pretty clear that for Rings -1 to 3, we have the option to use open source software and have a large amount of visibility and control over the software we run. For the privilege levels under Ring -1, we have less control but it is getting better with the open source firmware community and projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s counter-intuitive that the code that we have the least visibility into has the most privileges. This is what open source firmware is aiming to fix.&lt;/strong&gt;&lt;/p&gt;

&lt;h3 id=&#34;ring-2-smm-uefi-kernel&#34;&gt;Ring -2: SMM, UEFI kernel&lt;/h3&gt;

&lt;p&gt;This ring controls all CPU resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System management mode (SMM)&lt;/strong&gt; is invisible to the rest of the stack on top of it. It has half a kernel. It was originally used for power management and system hardware control. It holds a lot of the proprietary designed code and is a place for vendors to add new proprietary features. It handles system events like memory or chipset errors as well as a bunch of other logic.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;UEFI Kernel&lt;/strong&gt; is extremely complex. It has millions of lines of code. UEFI applications are active after boot. It was built with security through obscurity. The &lt;a href=&#34;https://uefi.org/specifications&#34;&gt;specification&lt;/a&gt; is absolutely insane if you want to dig in.&lt;/p&gt;

&lt;h3 id=&#34;ring-3-management-engine&#34;&gt;Ring -3: Management Engine&lt;/h3&gt;

&lt;p&gt;This is the most privileged ring. In the case of Intel (x86) this is the Intel Management Engine. It can turn on nodes and re-image disks invisibly. It has a kernel that runs &lt;a href=&#34;https://itsfoss.com/fact-intel-minix-case/&#34;&gt;Minix 3&lt;/a&gt; as well as a web server and entire networking stack. It turns out Minix is the most widely used operating system because of this. There is a lot of functionality in the Management Engine, it would probably take me all day to list it off but there are &lt;a href=&#34;https://www.intel.com/content/www/us/en/support/articles/000008927/software/chipset-software.html&#34;&gt;many&lt;/a&gt; &lt;a href=&#34;https://files.bitkeks.eu/docs/intelme-report.pdf&#34;&gt;resources&lt;/a&gt; for digging into more detail, should you want to.&lt;/p&gt;

&lt;p&gt;Between Ring -2 and Ring -3 we have at least 2 and a half other kernels in our stack as well as a bunch of proprietary and unnecessary complexity. Each of these kernels has its own networking stack and web server. The code can also modify itself and persist across power cycles and re-installs. &lt;strong&gt;We have very little visibility into what the code in these rings is actually doing, which is horrifying considering these rings have the most privileges.&lt;/strong&gt;&lt;/p&gt;

&lt;h3 id=&#34;they-all-have-exploits&#34;&gt;They all have exploits&lt;/h3&gt;

&lt;p&gt;It should be no surprise to anyone that Rings -2 and -3 have their fair share of vulnerabilities. They are horrifying when they happen though. Just to use one as an example (I will let you find others on your own), &lt;a href=&#34;https://www.wired.com/2017/05/hack-brief-intel-fixes-critical-bug-lingered-7-dang-years/&#34;&gt;there was a bug in the web server of the Intel Management Engine that was there for seven years&lt;/a&gt; without them realizing.&lt;/p&gt;

&lt;h2 id=&#34;how-can-we-make-it-better&#34;&gt;How can we make it better?&lt;/h2&gt;

&lt;h3 id=&#34;nerf-non-extensible-reduced-firmware&#34;&gt;NERF: Non-Extensible Reduced Firmware&lt;/h3&gt;

&lt;p&gt;NERF is what the open source firmware community is working towards. The goals are to make firmware less capable of doing harm and make its actions more visible. They aim to remove all runtime components, but currently with the Intel Management Engine they cannot remove everything; they can, however, take away the web server and IP stack. They also remove the UEFI IP stack and other drivers, as well as the Intel Management Engine/UEFI self-reflash capability.&lt;/p&gt;

&lt;h3 id=&#34;me-cleaner&#34;&gt;me_cleaner&lt;/h3&gt;

&lt;p&gt;This is the project used to clean the Intel Management Engine to the smallest necessary capabilities. You can check it out on GitHub: &lt;a href=&#34;https://github.com/corna/me_cleaner&#34;&gt;github.com/corna/me_cleaner&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;u-boot-and-coreboot&#34;&gt;u-boot and coreboot&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://www.chromium.org/developers/u-boot&#34;&gt;u-boot&lt;/a&gt; and &lt;a href=&#34;https://www.coreboot.org/&#34;&gt;coreboot&lt;/a&gt; are open source firmware. They handle silicon and DRAM initialization. Chromebooks use both, coreboot on x86, and u-boot for the rest. This is one part of how they &lt;a href=&#34;https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42038.pdf&#34;&gt;verify boot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Coreboot’s design philosophy is to &lt;a href=&#34;https://doc.coreboot.org/&#34;&gt;“do the bare minimum necessary to ensure that hardware is usable and then pass control to a different program called the&lt;/a&gt; &lt;a href=&#34;https://doc.coreboot.org/&#34;&gt;&lt;em&gt;payload&lt;/em&gt;&lt;/a&gt;&lt;a href=&#34;https://doc.coreboot.org/&#34;&gt;.”&lt;/a&gt; The payload in this case is linuxboot.&lt;/p&gt;

&lt;h3 id=&#34;linuxboot&#34;&gt;linuxboot&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://www.linuxboot.org/&#34;&gt;Linuxboot&lt;/a&gt; handles device drivers and the network stack, and gives the user a multi-user, multi-tasking environment. It is built with Linux so that a single kernel can work for several boards. Linux is already quite vetted and has a lot of eyes on it since it is used quite extensively. Better to use an open kernel with a lot of eyes on it than the 2½ other kernels that were all different and closed off. This means that we are lessening the attack surface by using fewer variations of code and we are making an effort to rely on code that is open source. Linux improves boot reliability by replacing lightly-tested firmware drivers with hardened Linux drivers.&lt;/p&gt;

&lt;p&gt;By using a kernel we already have tooling around, firmware devs can build with tools they already know. When they need to write logic for signature verification, disk decryption, etc., it’s in a language that is modern, easily auditable, maintainable, and readable.&lt;/p&gt;

&lt;h3 id=&#34;u-root&#34;&gt;u-root&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/u-root/u-root&#34;&gt;u-root&lt;/a&gt; is a set of Golang userspace tools and a bootloader. It is then used as the initramfs for the Linux kernel from linuxboot.&lt;/p&gt;

&lt;p&gt;Through using the NERF stack, they saw boot times that were 20x faster. But this blog post is on security, so let’s get back to that…&lt;/p&gt;

&lt;p&gt;The NERF stack helps improve the visibility into a lot of the components that were previously very proprietary. There is still a lot of other firmware on devices.&lt;/p&gt;

&lt;h2 id=&#34;what-about-all-the-other-firmware&#34;&gt;What about all the other firmware?&lt;/h2&gt;

&lt;p&gt;We need open source firmware for the network interface controller (NIC), solid state drives (SSD), and baseboard management controller (BMC).&lt;/p&gt;

&lt;p&gt;For the NIC, there is some work being done in the open compute project on &lt;a href=&#34;https://www.opencompute.org/documents/ocp-nic-3-0-draft-0v85b-20181213b-tn-temp-no-cb-pdf&#34;&gt;NIC 3.0&lt;/a&gt;. It should be interesting to see where that goes.&lt;/p&gt;

&lt;p&gt;For the BMC, there is both &lt;a href=&#34;https://github.com/openbmc/openbmc&#34;&gt;OpenBMC&lt;/a&gt; and &lt;a href=&#34;https://github.com/u-root/u-bmc&#34;&gt;u-bmc&lt;/a&gt;. I had written a little about them in &lt;a href=&#34;https://blog.jessfraz.com/post/the-firmware-rabbit-hole/&#34;&gt;a previous blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We need all open source firmware not only to have full visibility into the stack, but also to actually verify the state of the software on a machine.&lt;/p&gt;

&lt;h2 id=&#34;roots-of-trust&#34;&gt;Roots of Trust&lt;/h2&gt;

&lt;p&gt;The goal of the root of trust should be to verify that the software installed in every component of the hardware is the software that was intended. This way you can know without a doubt and verify if hardware has been hacked. Since we have very little to no visibility into the code running in a lot of places in our hardware, it is hard to do this. How do we really know that the firmware in a component is not vulnerable or that it doesn’t have any backdoors? Well, we can’t. Not unless it was all open source.&lt;/p&gt;

&lt;p&gt;Every cloud and vendor seems to have their own way of doing a root of trust. Microsoft has &lt;a href=&#34;https://github.com/opencomputeproject/Project_Olympus/tree/master/Project_Cerberus&#34;&gt;Cerberus&lt;/a&gt;, Google has &lt;a href=&#34;https://cloud.google.com/blog/products/gcp/titan-in-depth-security-in-plaintext&#34;&gt;Titan&lt;/a&gt;, and Amazon has &lt;a href=&#34;https://perspectives.mvdirona.com/2019/02/aws-nitro-system/&#34;&gt;Nitro&lt;/a&gt;. These seem to assume an explicit amount of trust in the proprietary code (the code we cannot see). This leaves me with not a great feeling. &lt;strong&gt;Wouldn’t it be better to be able to use all open source code? Then we could verify without a doubt that the code you can read and build yourself is the same code running on hardware for all the various places we have firmware. We could then verify that a machine was in a correct state without a doubt of it being vulnerable or with a backdoor.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It makes me wonder what the smaller cloud providers like DigitalOcean or Packet have for a root of trust. Oftentimes we only hear of these projects from the big three or five. I asked this on twitter and didn&amp;rsquo;t get any good answers&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I’m surprised how many people are responding that they love DigitalOcean but seem entirely unconcerned there’s no answer here. You should be concerned.&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1126131424095100929?ref_src=twsrc%5Etfw&#34;&gt;May 8, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;There is a great talk by &lt;a href=&#34;https://twitter.com/PaulM&#34;&gt;Paul McMillan&lt;/a&gt; and Matt
King on &lt;a href=&#34;https://www.youtube.com/watch?v=PEVVRkd-wPM&#34;&gt;Securing Hardware at Scale&lt;/a&gt;. It covers in great detail
how to secure bare metal while also giving customers access to the bare
metal. When they get back the hardware from customers they need to ensure with
consistency and reliability that there is nothing from the customer hiding in
any component of the hardware.&lt;/p&gt;

&lt;p&gt;All clouds need to ensure that the
hardware they are running has not been compromised after a customer has run
compute on it.&lt;/p&gt;

&lt;h2 id=&#34;platform-firmware-resiliency&#34;&gt;Platform Firmware Resiliency&lt;/h2&gt;

&lt;p&gt;As far as chip vendors go, they seem to have a different offering. Intel has &lt;a href=&#34;https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/firmware-resilience-blocks-solution-brief.pdf&#34;&gt;Platform Firmware Resilience&lt;/a&gt; and Lattice has &lt;a href=&#34;http://www.latticesemi.com/en/Solutions/Solutions/SolutionsDetails02/PFR&#34;&gt;Platform Firmware Resiliency&lt;/a&gt;. These seem to be more focused on the NIST guidelines for &lt;a href=&#34;https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-193.pdf&#34;&gt;Platform Firmware Resiliency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I tried to ask the internet who was using this and heard very little back, so if you are using Platform Firmware Resiliency can you let me know!&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;It seems that Intel has some effort called Platform Firmware Resiliency (anyone using this one?!) &lt;a href=&#34;https://t.co/fQq2gdLNOm&#34;&gt;https://t.co/fQq2gdLNOm&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1126121264819712000?ref_src=twsrc%5Etfw&#34;&gt;May 8, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;From the &lt;a href=&#34;https://www.opencompute.org/files/Intel-System-Firmware-InnovationsMohanKumar-OCP18.pdf&#34;&gt;OCP talk on Intel&amp;rsquo;s firmware innovations&lt;/a&gt;, it seems Intel&amp;rsquo;s Platform Firmware Resilience (PFR) and Cerberus
go hand in hand. Intel is using PFR to deliver Cerberus&amp;rsquo; attestation principles.
Thanks &lt;a href=&#34;https://twitter.com/_msw_&#34;&gt;@msw&lt;/a&gt; for the clarification.&lt;/p&gt;

&lt;p&gt;It would be
nice if there were not so many tools to do this job. I also wish the code was
open source so we could verify for ourselves.&lt;/p&gt;

&lt;h2 id=&#34;how-to-help&#34;&gt;How to help&lt;/h2&gt;

&lt;p&gt;I hope this gave you some insight into what’s being built with open source firmware and how making firmware open source is important! If you would like to help with this effort, please help spread the word. Please try and use platforms that value open source firmware components. Chromebooks are a great example of this, as well as &lt;a href=&#34;https://puri.sm/&#34;&gt;Purism&lt;/a&gt; computers. You can ask your providers what they are doing for open source firmware or ensuring hardware security with roots of trust. Happy nerding! :)&lt;/p&gt;

&lt;p&gt;Huge thanks to the open source firmware community for helping me along this
journey! Shout out to Ron Minnich, &lt;a href=&#34;https://twitter.com/qrs&#34;&gt;Trammell Hudson&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/hugelgupf&#34;&gt;Chris Koch&lt;/a&gt;,
&lt;a href=&#34;https://twitter.com/kc8apf&#34;&gt;Rick Altherr&lt;/a&gt;, and
&lt;a href=&#34;https://twitter.com/_zaolin_&#34;&gt;Zaolin&lt;/a&gt;. And shout out to &lt;a href=&#34;https://twitter.com/bridgetkromhout&#34;&gt;Bridget Kromhout&lt;/a&gt; for always
finding time to review my posts!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Challenge Accepted: Transposit</title>
                <link>https://blog.jessfraz.com/post/challenge-accepted-transposit/</link>
                <pubDate>Tue, 23 Apr 2019 00:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/challenge-accepted-transposit/</guid>
                    <description>

&lt;p&gt;Last week, I had the pleasure of meeting with the &lt;a href=&#34;https://www.transposit.com/&#34;&gt;Transposit&lt;/a&gt;
team in San Francisco. Tech is a super small world and it turns out the two
founders and I are separated by one-degree through several different people
we know. In meeting them I closed many loops without even realizing it, but
I digress&amp;hellip;&lt;/p&gt;

&lt;p&gt;Their product is really cool: it exposes a SQL interface for interacting with
numerous APIs at once. For someone like myself who deploys a lot of bots, this
is great. Usually when I have a complex bot I end up writing a lot of
&amp;ldquo;glue code&amp;rdquo; to combine a few different APIs and get the information I want.
Most of my bots have some sort of pagination logic and all have the &lt;code&gt;N+1&lt;/code&gt; problem where
I don&amp;rsquo;t really optimize my queries or use anything fancy like GraphQL. Many
APIs don&amp;rsquo;t even have GraphQL interfaces, but I am also old school and I don&amp;rsquo;t
really want to learn something new. This is why I was super intrigued by
Transposit&amp;rsquo;s SQL interface, because hey, I know SQL!&lt;/p&gt;

&lt;p&gt;Adam, the CEO, challenged me to try it out, give them feedback, and see if
I could break it with something complex. I am not one to back down from
a challenge and I have some super weird ass bots, so I decided to start with
the weirdest.&lt;/p&gt;

&lt;h2 id=&#34;gitable&#34;&gt;Gitable&lt;/h2&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/gitable&#34;&gt;Gitable&lt;/a&gt; is a bot I made for sending all
my open issues and PRs on GitHub to a table in &lt;a href=&#34;https://airtable.com/&#34;&gt;Airtable&lt;/a&gt;.
I fucking love Airtable. Its design just feels right and works the way my
brain works.&lt;/p&gt;

&lt;p&gt;I set out to make this bot work in Transposit because I know it has some
super weird loops and has the &lt;code&gt;N+1&lt;/code&gt; problem where I loop over all my repos,
then make another API call after.&lt;/p&gt;

&lt;p&gt;To reiterate, the goal of the bot is to iterate through all my repos on GitHub
and sync the list of issues and PRs with a table in Airtable.&lt;/p&gt;
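
&lt;p&gt;For context, the glue code version of this bot is shaped roughly like the sketch below. This is hypothetical; &lt;code&gt;fetchRepos&lt;/code&gt; and &lt;code&gt;fetchIssues&lt;/code&gt; stand in for the real paginated GitHub API calls (here they just return canned data so the sketch is self-contained):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;// Hypothetical stand-ins for the real paginated GitHub API calls.
function fetchRepos(owner) {
    return [{name: &amp;quot;.vim&amp;quot;, full_name: owner + &amp;quot;/.vim&amp;quot;, fork: false}];
}
function fetchIssues(owner, repo) {
    return [{number: 1, title: &amp;quot;example issue&amp;quot;}];
}

function listIssuesForUser(owner) {
    var rows = [];
    var repos = fetchRepos(owner);
    for (var i = 0; i &amp;lt; repos.length; i++) {
        if (repos[i].fork) {
            continue;
        }
        // The N+1 problem: one extra API call for every single repo.
        var issues = fetchIssues(owner, repos[i].name);
        for (var j = 0; j &amp;lt; issues.length; j++) {
            rows.push(repos[i].full_name + &amp;quot;#&amp;quot; + issues[j].number);
        }
    }
    return rows;
}
&lt;/code&gt;&lt;/pre&gt;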

&lt;h3 id=&#34;query-all-the-user-s-repos&#34;&gt;Query all the user&amp;rsquo;s repos&lt;/h3&gt;

&lt;p&gt;First, I need to get all my repos that are not forks. So I need
a SQL query for this; in Transposit it looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT name, full_name FROM github.list_repos_for_user
WHERE username=@owner
AND type=&#39;owner&#39;
AND fork=false
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;github.list_repos_for_user&lt;/code&gt; table is built in to Transposit and they
handle all your API keys and authorizations when you choose &amp;ldquo;GitHub&amp;rdquo; as a data
connection in the UI. It also caches the response which is a huge win because
I am the queen of being rate limited.&lt;/p&gt;

&lt;p&gt;I named that query: &lt;code&gt;list_repos_for_user&lt;/code&gt; so when I want to use it elsewhere in
another query, I can call it by &lt;code&gt;this.list_repos_for_user&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&#34;query-all-the-issues-in-all-the-user-s-repos&#34;&gt;Query all the issues in all the user&amp;rsquo;s repos&lt;/h3&gt;

&lt;p&gt;To get all the issues in all my repos I can use a join on that table I just
created. It ends up looking like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT 
    A.created_at AS created, 
    A.updated_at AS updated, 
    B.full_name, A.number, 
    A.html_url AS url, 
    A.state, A.title, 
    A.user.login AS author, 
    A.labels, B.name,
    A.closed_at AS completed, 
    A.comments
FROM github.list_issues_for_repo
AS A 
JOIN this.list_repos_for_user 
AS B 
ON A.repo = B.name
WHERE A.owner=@owner
AND B.owner=@owner
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Okay so I didn&amp;rsquo;t break anything yet and I just joined my table with all my
repos, &lt;code&gt;this.list_repos_for_user&lt;/code&gt;, with the built-in table in Transposit
&lt;code&gt;github.list_issues_for_repo&lt;/code&gt;. This has now replaced my &lt;code&gt;N+1&lt;/code&gt; code with just this
one SQL query and Transposit does all the optimizations on their end.&lt;/p&gt;

&lt;p&gt;I called this table &lt;code&gt;list_issues_for_user&lt;/code&gt; and &lt;code&gt;@owner&lt;/code&gt; is a parameter, so
anyone else can fork this app and change it to their own username.&lt;/p&gt;

&lt;h3 id=&#34;query-all-the-records-in-an-airtable-table&#34;&gt;Query all the records in an Airtable table&lt;/h3&gt;

&lt;p&gt;Now I need to get all the existing airtable records in my table so I can know
later on down the road if I need to create a row or update a row with the new
information from the GitHub API.&lt;/p&gt;

&lt;p&gt;In my Airtable table I have a column called &amp;ldquo;reference&amp;rdquo; which stores information
about the issue or PR as &lt;code&gt;owner/repo#num&lt;/code&gt; so for example it looks like
&lt;code&gt;jessfraz/.vim#1&lt;/code&gt;. This is a column defined by me, but I also know it to be
unique. So I want to get the reference of every record and its Airtable record
ID so I can use that to update the record.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT id, fields.Reference as reference FROM airtable.get_records
WHERE baseId=@baseID
AND table=@table
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That winds up looking like the query above. &lt;code&gt;@baseID&lt;/code&gt; and &lt;code&gt;@table&lt;/code&gt; are
parameters so anyone can replace those with their own for their table in
Airtable.&lt;/p&gt;

&lt;p&gt;I named this query &lt;code&gt;get_airtable_records&lt;/code&gt; so when I call it later I can do so
with &lt;code&gt;this.get_airtable_records&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&#34;update-and-create-rows-in-airtable-for-each-of-the-issues-in-user-s-repos&#34;&gt;Update and create rows in Airtable for each of the issues in user&amp;rsquo;s repos&lt;/h3&gt;

&lt;p&gt;Okay so now&amp;rsquo;s the part where I am thinking&amp;hellip; I&amp;rsquo;m going to break this thing.
(Narrator: I didn&amp;rsquo;t.)&lt;/p&gt;

&lt;p&gt;Transposit has both SQL and JavaScript operations, and since the next part is
where most of the logic lives, I used JavaScript. I haven&amp;rsquo;t written JavaScript in
a long time so mind my shitty code. Honestly, SQL is Turing complete so
I considered using it, but I wanted to get this done in an hour. (I will leave
it as an exercise for the reader to fork my app and make it all in SQL.)&lt;/p&gt;

&lt;p&gt;What I needed to do was take the results of our earlier &lt;code&gt;list_issues_for_user&lt;/code&gt;
table, iterate over them, and update or create an Airtable record for each.
This ends up looking like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-js&#34;&gt;function run(params) {
    var results = api.run(&amp;quot;this.list_issues_for_user&amp;quot;, {owner: params.owner});

    for (var i = 0; i &amp;lt; results.length; i++) {   
        // Build the reference for the issue with the full name and number.
        // Winds up looking like &amp;quot;jessfraz/.vim#1&amp;quot;
        var reference = results[i].full_name + &amp;quot;#&amp;quot; + results[i].number;

        // Get the Airtable recordID for the reference if it exists.
        var id = api.query(&amp;quot;select id from this.get_airtable_records where reference=&#39;&amp;quot;+reference+&amp;quot;&#39;&amp;quot;, {baseID: params.baseID, table: params.table});

        // Define the object params for create and update.
        var obj = {
            baseID: params.baseID, 
            table: params.table,
            reference: reference,
            title: results[i].title,
            state: results[i].state,
            author: results[i].author,
            type: &#39;issue&#39;,
            comments: results[i].comments,
            url: results[i].url,
            updated: results[i].updated,
            created: results[i].created,
            completed: results[i].completed,
            repo: results[i].name,
        };

        if (id.length &amp;gt; 0) {
            results[i].airtable_id = id[0].id;
            obj.recordID = id[0].id;

            // Update the result in the table.
            var r = api.run(&amp;quot;this.update_record&amp;quot;, obj);
            api.log(r);
        } else {
            // Create record in the table.
            results[i].airtable_id = 0;
            var r = api.run(&amp;quot;this.create_record&amp;quot;, obj);
            api.log(r);
        }
        
        results[i].reference = reference;
    }
    return {
        results
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You might be wondering what &lt;code&gt;this.create_record&lt;/code&gt; and &lt;code&gt;this.update_record&lt;/code&gt; look
like. These are just helper operations so I can use all the fields for the
records as parameters.&lt;/p&gt;

&lt;h3 id=&#34;create-an-airtable-record&#34;&gt;Create an Airtable record&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;create_record&lt;/code&gt; calls the built-in &lt;code&gt;airtable.create_record&lt;/code&gt; which looks like
the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT * FROM airtable.create_record
WHERE baseId=@baseID
AND table=@table
AND $body=(SELECT {
    &#39;fields&#39; : { 
        &#39;Reference&#39;:  @reference,
        &#39;Title&#39;:      @title,
        &#39;State&#39;:      @state,
        &#39;Author&#39;:     @author,
        &#39;Type&#39;:       @type,
        &#39;Comments&#39;:   @comments,
        &#39;URL&#39;:        @url,
        &#39;Updated&#39;:    @updated,
        &#39;Created&#39;:    @created,
        &#39;Completed&#39;:  @completed,
        &#39;Repository&#39;: @repo, 
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Everything starting with an &lt;code&gt;@&lt;/code&gt; is a parameter we can change on the fly from our
JavaScript function, as you saw above.&lt;/p&gt;

&lt;h3 id=&#34;update-an-airtable-record&#34;&gt;Update an Airtable record&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;update_record&lt;/code&gt; is very similar; it calls the Transposit built-in
&lt;code&gt;airtable.update_record&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sql&#34;&gt;SELECT * FROM airtable.update_record
WHERE recordId=@recordID
AND baseId=@baseID
AND table=@table
AND $body=(SELECT {
    &#39;fields&#39; : { 
        &#39;Reference&#39;:  @reference,
        &#39;Title&#39;:      @title,
        &#39;State&#39;:      @state,
        &#39;Author&#39;:     @author,
        &#39;Type&#39;:       @type,
        &#39;Comments&#39;:   @comments,
        &#39;URL&#39;:        @url,
        &#39;Updated&#39;:    @updated,
        &#39;Created&#39;:    @created,
        &#39;Completed&#39;:  @completed,
        &#39;Repository&#39;: @repo, 
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Doing the above with pull requests rather than issues is exactly the same code;
you just swap the issues query for a pull requests one.
You can schedule your operations to run at set times, like cron, or trigger them by calling an API endpoint.&lt;/p&gt;

&lt;p&gt;Sadly, I failed at breaking the thing with one of my most complex bots. But
maybe you will have better luck trying ;) You can fork my app or look at the
queries here:
&lt;a href=&#34;https://console.transposit.com/t/jessfraz/gitable&#34;&gt;console.transposit.com/t/jessfraz/gitable&lt;/a&gt;.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Questions I&#39;d Ask My Cloud Provider</title>
                <link>https://blog.jessfraz.com/post/questions-id-ask-my-cloud-provider/</link>
                <pubDate>Mon, 15 Apr 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/questions-id-ask-my-cloud-provider/</guid>
                    <description>

&lt;p&gt;I came up with a list of questions I would ask my cloud provider if I was
buying a product. They are as follows:&lt;/p&gt;

&lt;h3 id=&#34;1-what-problem-is-this-solving&#34;&gt;1. What problem is this solving?&lt;/h3&gt;

&lt;p&gt;I would ask this to make sure I even need the product. So many people buy
into the hype for &amp;ldquo;shiny&amp;rdquo; that they miss whether they even needed the thing in the
first place.&lt;/p&gt;

&lt;h3 id=&#34;2-how-did-you-implement-this-what-is-your-threat-model&#34;&gt;2. How did &lt;em&gt;you&lt;/em&gt; implement this? What is &lt;em&gt;your&lt;/em&gt; threat model?&lt;/h3&gt;

&lt;p&gt;So much of the cloud is built on popsicle sticks and glue. Does it make you
feel safe at night knowing your customer data is being stored in a proof of
concept that was shipped before it should have been? Best to get your security
team to assess whether the product is actually built up to standard on the
&lt;em&gt;provider&amp;rsquo;s side&lt;/em&gt;. This does not mean what you see as a customer; it means the
proprietary bits you cannot see.&lt;/p&gt;

&lt;p&gt;What does the service-level agreement say about what happens if the provider
themselves is hacked? Do they have to tell you, or can they just sweep it under
the rug? What if a vulnerability comes out in an open source project they are
using; do they have to give you a risk assessment as to whether you were hacked?&lt;/p&gt;

&lt;p&gt;What if they don&amp;rsquo;t know if they were hacked after a vulnerability is public?
Red flag&amp;hellip;&lt;/p&gt;

&lt;p&gt;If they themselves do not know their own threat model, that should be a huge
warning sign.&lt;/p&gt;

&lt;p&gt;Bonus points if their implementation is open source; but I will let you in on
a secret, most aren&amp;rsquo;t. The exception is Joyent :)&lt;/p&gt;

&lt;h3 id=&#34;3-what-customers-did-you-speak-to-before-building-this-feature&#34;&gt;3. What customers did you speak to before building this feature?&lt;/h3&gt;

&lt;p&gt;This ties back to number one: what problem is this solving? So often these features
seem to be built &lt;em&gt;for fun&lt;/em&gt; or based on a &lt;em&gt;feeling&lt;/em&gt; a product manager had.&lt;/p&gt;

&lt;p&gt;Hope this helps! I will probably update over time. :)&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Leadership CI</title>
                <link>https://blog.jessfraz.com/post/leadership-ci/</link>
                <pubDate>Tue, 09 Apr 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/leadership-ci/</guid>
                    <description>&lt;p&gt;This post is co-authored by &lt;a href=&#34;https://github.com/simpsoka&#34;&gt;Kathy Simpson&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“understanding the true nature of instinctive decision making requires us to be forgiving of those people trapped in circumstances where good judgment is imperiled.”
― Malcolm Gladwell, Blink: The Power of Thinking Without Thinking&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As leaders, setting up a structure that helps us navigate decisions under
pressure is of the utmost importance.
When writing and delivering software we rely on our
continuous integration (CI) infrastructure and test suites to tell us when a test is
failing and code should not be merged.&lt;/p&gt;

&lt;p&gt;As leaders, before acting or making decisions it would be nice to have a set of
tests and checks, established ahead of time, to make sure we are in the
right headspace to think, behave and make decisions that are in the best
interest of everyone and our company. There are devastating consequences to
taking actions based on fear and pride;
we hope this set of questions enables taking action based on growth, humility, inclusion,
and soulful reflection.&lt;/p&gt;

&lt;p&gt;The following are the sets of questions we brainstormed, but expect them to
change over time as we experience and deal with new problems. These were
started in &lt;a href=&#34;https://gist.github.com/simpsoka/14da775a63e22e5083141da5c48e6410&#34;&gt;a gist&lt;/a&gt;
and are copied below. The diff of this post and the gist will serve as the
evolution of this thought process.&lt;/p&gt;

&lt;p&gt;It&amp;rsquo;s important to note that answering all the questions can take too much
time; at times it may be a luxury. So we suggest breaking them down based on
the situation you find yourself in: prioritize the ones most important to your
role, keep a few &amp;lsquo;go to&amp;rsquo; questions, or categorize them by the situations you
find yourself in most often. The important part of this list is to help us
navigate a difficult situation while still maintaining the integrity we intend
for ourselves as leaders.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Do I want to die on this hill?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; This is morally good and if not handled has long term consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; This is self serving.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I including everyone?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; My ego is not driving this conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; The people in this conversation will only tell me I&amp;rsquo;m right and not push back.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I hiding something?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; The information, though painful, is known to all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; Yes.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Is there transparency here?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; The team agrees on context and can repeat it back to me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; Hidden misalignment (test: what do we align on).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I being curious?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; I&amp;rsquo;m asking questions that make me uncomfortable, and I&amp;rsquo;m comfortable being wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I want my way.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Is my team afraid to tell me things?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; They freely and continually come to me with answers and information that they know I will not like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; They go to each other or people outside the team with the information, and tell me what they think I want to hear.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I only communicating with the same people over and over?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; My sphere of influence is diverse. I feel comfortable talking with anyone on the team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I continually consult the same individuals (test: do I have an entourage?).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Do I feel insecure?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; I feel empowered and am willing to take feedback and risks regardless of the outcome as it&amp;rsquo;s good for the company and the customer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I retreat, I am not comfortable, I am not giving up the information because I am scared of what people will think.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Can my team do the job I hired them to do? Is the job they are hired to do
the job that needs to be done?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; The team ships outcomes efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; The team is not empowered and often stalls (test: do I often have to intervene?).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Are you scratching an itch?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; This is a problem that&amp;rsquo;s bigger than myself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; It may feel good to solve this problem but only for myself and temporarily.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I being judgmental?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; Do I trust my team and their decisions?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; Is someone speaking up and telling me that I’m being judgmental?&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I taking risks?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; I feel comfortable and confident that this decision will lead to positive and fruitful outcomes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I am being a pushover, and I am compromising in the wrong ways.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I being manipulative?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; I’m being honest, real, straightforward and I’m OK with the outcome and hearing ‘no’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I’m intentionally using words that aren’t representative of what I’m trying to communicate.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Am I speaking for people or letting them speak for themselves?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass:&lt;/strong&gt; I am doing the minority of the speaking and people are disagreeing
with my opinions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail:&lt;/strong&gt; I am being quoted back to myself. I am talking the majority of the
time.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Be sure to keep up with the &lt;a href=&#34;https://gist.github.com/simpsoka/14da775a63e22e5083141da5c48e6410&#34;&gt;original gist&lt;/a&gt;
as well to see how this list evolves!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Truth Seekers</title>
                <link>https://blog.jessfraz.com/post/the-truth-seekers/</link>
                <pubDate>Mon, 08 Apr 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-truth-seekers/</guid>
                    <description>&lt;p&gt;Last week I got to see what it was like to be an investigative journalist for
a day. It was thrilling. I will get into what I learned, but first I wanted to
give some background on why I was doing this.&lt;/p&gt;

&lt;p&gt;I have a general curiosity for people. It&amp;rsquo;s interesting to me to uncover what
people are motivated by. Humans are individual snowflakes and no one is exactly
like the next. It is our unique experiences that form the way we think and
behave, as well as what drives us.&lt;/p&gt;

&lt;p&gt;It is in my nature to learn and absorb information. I also recently learned,
although I should have realized this long ago, that I am well attuned to
absorbing others&amp;rsquo; emotions. I think my deep drive to understand others and my
value of the truth make me somewhat perfect for the role of &amp;ldquo;investigative
journalism&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;Research for investigative journalism is very similar to research for
academia. Investigative journalism seems to be driven by intuition, while
academia might be driven more by novel research.&lt;/p&gt;

&lt;p&gt;I got to see what &lt;a href=&#34;https://twitter.com/jeffykao&#34;&gt;Jeff Kao&lt;/a&gt;&amp;rsquo;s job was like for
a few hours and I learned a lot.&lt;/p&gt;

&lt;p&gt;One of the more interesting things we discussed was diffs. I brought up whether
diffs (as in those used by a source control tool) could work as a line of
truth. With a diff, the history of a document is fully transparent; anyone
can see any and all changes to it (taking into account, of course, tracking
force pushes as well).&lt;/p&gt;

&lt;p&gt;Jeff pointed out that journalism has a history of using &amp;ldquo;diffs&amp;rdquo;. One
example was from &lt;a href=&#34;https://www.usatoday.com/in-depth/news/investigations/2019/04/03/abortion-gun-laws-stand-your-ground-model-bills-conservatives-liberal-corporate-influence-lobbyists/3162173002/&#34;&gt;an article&lt;/a&gt;
that uncovered bills and laws being copied and influenced by corporations. They
compared the text of the bills to others and showed the changes,
similarities, and motivations
behind them.&lt;/p&gt;

&lt;p&gt;I then realized that Jeff was the author of the &lt;a href=&#34;https://hackernoon.com/more-than-a-million-pro-repeal-net-neutrality-comments-were-likely-faked-e9f0e3ed36a6&#34;&gt;amazing article&lt;/a&gt; from a couple
years ago on how net neutrality comments were likely faked. He used natural
language processing to find the similarities in the comments.&lt;/p&gt;

&lt;p&gt;Both these articles use comparisons of text to uncover falsifications or
motivations. This is super similar to a diff, which is also a comparison of text! I also started thinking about how
in &lt;a href=&#34;https://blog.jessfraz.com/post/government-medicine-capitalism/&#34;&gt;my previous article&lt;/a&gt;
I mentioned it would be cool if laws were versioned with git. By doing
that, we would get the diff and history of changes to the laws. Changes to laws
or language used over time could be visualized quite easily with the tools for
source control.&lt;/p&gt;
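&lt;p&gt;As a sketch of the idea, even a naive line-by-line comparison surfaces what changed between two revisions of a text. This is purely illustrative; real tools like git use smarter algorithms (e.g. Myers diff) that also handle inserted and removed lines:&lt;/p&gt;

```javascript
// A naive, positional line-by-line comparison of two revisions of a
// document. It only flags lines that differ at the same position; real
// diff algorithms also track insertions and deletions.
function naiveLineDiff(oldText, newText) {
  var oldLines = oldText.split("\n");
  var newLines = newText.split("\n");
  var changes = [];
  var max = Math.max(oldLines.length, newLines.length);
  for (var i = 0; i < max; i++) {
    if (oldLines[i] !== newLines[i]) {
      changes.push({ line: i + 1, before: oldLines[i], after: newLines[i] });
    }
  }
  return changes;
}

var v1 = "Section 1. Applies to all persons.\nSection 2. Effective immediately.";
var v2 = "Section 1. Applies to corporations.\nSection 2. Effective immediately.";
console.log(naiveLineDiff(v1, v2));
// One change: line 1 was reworded, line 2 is untouched.
```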

&lt;p&gt;Overall, the day was fascinating. Investigative journalism was really aligned
with my joy of learning new things from a variety of different perspectives and
using intuition and research to try to find truth.&lt;/p&gt;

&lt;p&gt;Another thought I have been thinking on is: how can we separate emotion from
the truth? So much of the news today is trying to trigger an emotional response
for clicks. Or in the worst case, it is trying to trigger an emotional response
for influencing an election. How can we promote the news sources that focus on
the truth versus triggering a reaction? The truth itself should be enough of
a trigger.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Thoughts on Conway&#39;s Law and the software stack</title>
                <link>https://blog.jessfraz.com/post/thoughts-on-conways-law-and-the-software-stack/</link>
                <pubDate>Mon, 25 Mar 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/thoughts-on-conways-law-and-the-software-stack/</guid>
                    <description>&lt;p&gt;I’ve been talking to a lot of people in different layers of the stack during my
funemployment. I wanted to share one of the problems I’ve been thinking about
and maybe you can think of some clever solutions to solve it.&lt;/p&gt;

&lt;p&gt;Conway&amp;rsquo;s Law states &amp;ldquo;organizations which design systems &amp;hellip; are constrained
to produce designs which are copies of the communication structures of these
organizations.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;If you were to apply Conway&amp;rsquo;s Law to all the layers of the software stack and
open source software you’d see a problem: &lt;strong&gt;There is not sufficient
communication between the various layers of software.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive in a bit to make the problem super clear.&lt;/p&gt;

&lt;p&gt;I’ve met a bunch of hardware engineers and I’ve made a point of asking each
of them how they feel about using a single chip for multiple users. This
is, of course, the use case of the cloud. All of the hardware engineers either
laugh or are horrified, and the resounding reaction is “you’d be crazy to think
hardware was ever intended to be used for isolating multiple users safely.”
Spectre and Meltdown proved this was true. Speculative execution was
a feature intended to make processors faster, but it was never thought about as
a vector for attacking something running multi-tenant compute,
like a cloud provider. Seems like the software and hardware layers should
communicate better&amp;hellip;&lt;/p&gt;

&lt;p&gt;That’s just one example; let’s reverse the interaction. I’ve talked to a bunch
of firmware and kernel engineers and they’d all love it if the firmware from chip
vendors were less complex. For instance, it seems like a unanimous vote among
firmware and kernel engineers that CPU vendors should not include runtime
services or SMM with their firmware. Open source firmware and kernel developers
would rather handle those problems at their layer of the stack. All the complexity
in the firmware leads to overlooked bugs and odd behavior that can’t be
controlled or debugged from the kernel layer and/or user space. Not to mention,
a lot of CPU vendors’ firmware is proprietary, so it’s really hard to know if
a bug is truly a firmware bug.&lt;/p&gt;

&lt;p&gt;Another example would be the &lt;a href=&#34;https://arstechnica.com/information-technology/2019/02/supermicro-hardware-weaknesses-let-researchers-backdoor-an-ibm-cloud-server/&#34;&gt;hack of SoftLayer&lt;/a&gt;. Hackers modified the
firmware on the BMC of a bare metal host the cloud provider was offering.
This shows another mistake of having blinders on and not being conscious
of the other layers of the stack and the entire system.&lt;/p&gt;

&lt;p&gt;Let’s move up the stack a bit to something I personally have experienced.
I worked a lot on container runtimes. I have also worked on kubernetes.
I was horrified to find people are running multi-tenant kubernetes clusters
with multiple customers’ processes, i.e., relying on it for isolating untrusted processes. The architecture of kubernetes is
just &lt;a href=&#34;https://blog.jessfraz.com/post/secret-design-docs-multi-tenant-orchestrator/#why-not-kubernetes&#34;&gt;not designed for this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A common miscommunication is the &amp;ldquo;window dressing.&amp;rdquo; For example, there is a
feature in kubernetes that prevents exec-ing into
containers. This is implemented by merely preventing the
API call in kubernetes. If a person has access to a cluster there are about 4 dozen different
ways I can think of to exec into a container and bypass this &amp;ldquo;feature&amp;rdquo; and
kubernetes entirely. Using
said &amp;ldquo;security feature&amp;rdquo; in kubernetes alone is not sufficient for security in any respect.
This is a common pattern.&lt;/p&gt;

&lt;p&gt;All these problems are not small by any means. They are miscommunications
at various layers of the stack. They are people thinking an interface or
feature is secure when it is merely a window dressing that can be bypassed with
just a bit more knowledge about the stack. I really like the advice
&lt;a href=&#34;https://twitter.com/LeaKissner/status/1109259338265165824&#34;&gt;Lea Kissner&lt;/a&gt; gave:
&amp;ldquo;take the long view, not just the broad view.&amp;rdquo; We should do this more often
when building systems.&lt;/p&gt;

&lt;p&gt;The thought I’ve been noodling on is: how do we solve this? Is this something
a code hosting provider like GitHub should fix? But, that excludes all the
projects that are not on that platform. How do we promote better communication
between layers of the stack? How can we automate some of this away? Or is
the answer simply, own all the layers of the stack yourself?&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Digging into RISC-V and how I learn new things</title>
                <link>https://blog.jessfraz.com/post/digging-into-risc-v-and-how-i-learn-new-things/</link>
                <pubDate>Sun, 24 Mar 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/digging-into-risc-v-and-how-i-learn-new-things/</guid>
                    <description>

&lt;p&gt;I recently have started researching and playing around with RISC-V for fun. I thought it might be nice to combine some of what I’ve learned into a blog post. However, I don’t just want to highlight &lt;em&gt;what&lt;/em&gt; I learned. I want to use this as an example of how to go about learning something new.&lt;/p&gt;

&lt;p&gt;Recently, &lt;a href=&#34;https://twitter.com/erikstmartin&#34;&gt;Erik St. Martin&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/ScribblingOn&#34;&gt;Shubheksha Jalan&lt;/a&gt;, and I were discussing how we learn new things and we all thought it might be beneficial to have a way to document this process for others. What better way to document this then by example with my recent research into RISC-V?&lt;/p&gt;

&lt;p&gt;I’ve &lt;a href=&#34;https://blog.jessfraz.com/post/defining-a-distinguished-engineer/&#34;&gt;said it before&lt;/a&gt; and I will say it again, I think anyone is capable of doing or learning anything, they just need the right motivation and to believe in themselves. I also made a point of including the book &lt;a href=&#34;https://www.amazon.com/Super-Brain-Unleashing-Explosive-Well-Being/dp/0307956830&#34;&gt;Super Brain&lt;/a&gt; on &lt;a href=&#34;https://blog.jessfraz.com/post/books/&#34;&gt;my list of recommended books&lt;/a&gt;, because it confirms with science that if you set your sights high you can accomplish great things, but if you set your expectations low it becomes a self-fulfilling prophecy. To put it more bluntly, believe in yourself!&lt;/p&gt;

&lt;p&gt;I became fascinated by what is happening in the RISC-V space just by seeing it pop up every now and then in my Twitter feed. Since I am currently unemployed I have a lot of time and autonomy to dig into whatever I wish.&lt;/p&gt;

&lt;p&gt;RISC-V is a new instruction set architecture. To understand RISC-V, we must first dig into what an instruction set architecture is. This is my learning technique. I bounce from one thing to another, recursively digging deeper as I learn more.&lt;/p&gt;

&lt;h2 id=&#34;what-is-an-instruction-set-architecture-isa&#34;&gt;What is an instruction set architecture (ISA)?&lt;/h2&gt;

&lt;p&gt;An instruction set architecture is the interface between the hardware and the software.&lt;/p&gt;

&lt;p&gt;Models of processors can implement the same instruction set but have different &lt;em&gt;internal&lt;/em&gt; designs for implementing the interface. This leads to various processors having the same instruction set but differing in performance, physical size, and monetary cost. For example, Intel and AMD have processors that both implement the same x86 instruction set but have very different internal designs.&lt;/p&gt;

&lt;p&gt;In order to dig deeper, we should look into what some of the various types of instruction set architectures are.&lt;/p&gt;

&lt;h2 id=&#34;what-are-the-types-of-instruction-set-architectures&#34;&gt;What are the types of instruction set architectures?&lt;/h2&gt;

&lt;p&gt;Most commonly these are described and classified by their complexity.&lt;/p&gt;

&lt;h3 id=&#34;reduced-instruction-set-computer-risc&#34;&gt;Reduced Instruction Set Computer (RISC)&lt;/h3&gt;

&lt;p&gt;This implements only frequently used instructions; less common operations are implemented as subroutines. Using subroutines trades off some performance, but only for the least common operations.&lt;/p&gt;

&lt;p&gt;RISC uses a load/store architecture, meaning it divides instructions into ones that access memory and ones that perform arithmetic logic unit (ALU) operations.&lt;/p&gt;

&lt;p&gt;RISC, the name, came out of Berkeley in the 1980s (from a project led by David Patterson) around the same time MIPS (a project led by John L. Hennessy) was going on at Stanford. RISC became commercialized as SPARC by Sun Microsystems and MIPS became commercialized by MIPS Computer Systems. Both are RISC architectures. You might also be familiar with more modern implementations like ARM or PowerPC which are commercialized as well. There are many RISC implementations other than just these, I implore you all to dig further if you so choose.&lt;/p&gt;

&lt;p&gt;RISC architectures can also be traced back to before the name existed as well. Examples include Alan Turing&amp;rsquo;s Automatic Computing Engine (ACE) from 1946 and the CDC 6600 designed by Seymour Cray in 1964.&lt;/p&gt;

&lt;h3 id=&#34;complex-instruction-set-computer-cisc&#34;&gt;Complex Instruction Set Computer (CISC)&lt;/h3&gt;

&lt;p&gt;This has many very specific, specialized instructions, some of which may never be used in most programs. In CISC, one instruction can denote the execution of several low-level operations, or one instruction is capable of multi-step operations and/or multiple addressing modes.&lt;/p&gt;

&lt;p&gt;The term was coined after RISC, so everything that is not RISC tends to get lumped here. It’s become somewhat of a contentious point since some modern CISC designs are in fact less complex than some RISC designs. The main difference is that in CISC architectures, arithmetic/computation instructions can also perform memory accesses.&lt;/p&gt;
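&lt;p&gt;As a rough illustration (this is pseudo-assembly, not the exact syntax of any real ISA), adding a value from memory into a register highlights the difference:&lt;/p&gt;

```
; CISC style: one instruction both accesses memory and does the arithmetic
ADD  R1, [total]      ; R1 = R1 + (value at address "total")

; RISC (load/store) style: the memory access and the ALU operation are
; separate instructions
LW   R2, total        ; load the word at "total" into register R2
ADD  R1, R1, R2       ; ALU operation, registers only
```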

&lt;p&gt;Most architectures were classified after the fact since the term wasn’t around at the time of their birth. Some examples include IBM’s System/360 and System Z, the PDP-11, the VAX architecture, and Data General’s Nova.&lt;/p&gt;

&lt;h3 id=&#34;very-long-instruction-word-vliw-and-explicitly-parallel-instruction-computing-epic&#34;&gt;Very Long Instruction Word (VLIW) and Explicitly Parallel Instruction Computing (EPIC)&lt;/h3&gt;

&lt;p&gt;These were designed to exploit instruction level parallelism, executing multiple instructions in parallel. This requires less hardware than CISC or RISC and leaves the complexity for the compiler.&lt;/p&gt;

&lt;p&gt;Traditionally, processors use a few different ways to improve performance, let’s dig into these.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Pipelining&lt;/strong&gt; divides instructions into substeps so the instructions can be executed partly at the same time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Superscalar architectures&lt;/strong&gt; dispatch individual instructions to be executed independently in different parts of the processor.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Out-of-order execution&lt;/strong&gt; executes instructions in an order different from the program.&lt;/li&gt;
&lt;/ul&gt;
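
&lt;p&gt;A rough back-of-the-envelope sketch of why pipelining in particular helps, assuming an idealized five-stage pipeline (fetch, decode, execute, memory, write-back) with no hazards or stalls:&lt;/p&gt;

```python
STAGES = 5  # classic five-stage pipeline, idealized

def cycles_unpipelined(n_instructions):
    # Each instruction runs through all stages before the next starts.
    return n_instructions * STAGES

def cycles_pipelined(n_instructions):
    # After the first instruction fills the pipe, one instruction
    # completes per cycle.
    return STAGES + (n_instructions - 1)

print(cycles_unpipelined(100))  # 500 cycles
print(cycles_pipelined(100))    # 104 cycles
```

&lt;p&gt;In the ideal case the pipelined machine approaches one instruction per cycle; real hazards, stalls, and branch mispredictions eat into that, which is exactly what the superscalar and out-of-order techniques above try to claw back.&lt;/p&gt;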

&lt;p&gt;The methods above all complicate the hardware, which must implement all of this logic itself. In contrast, VLIW leaves this complexity to the program. As a trade-off, the compiler becomes a lot more complex while the hardware is simplified and still performs well computationally.&lt;/p&gt;

&lt;p&gt;VLIW is most commonly found in embedded media processors and graphics processing units (GPU). However, Nvidia and AMD have moved to RISC architectures to improve performance for non-graphics workloads. You can also find VLIW in system-on-a-chip (SoC) designs where customizing a processor for an application is popular.&lt;/p&gt;

&lt;p&gt;EPIC architecture was based on VLIW but made a few changes. One of these allows groups of instructions, called bundles, to be executed in parallel if they do not depend on any subsequent group of instructions. You can often distinguish EPIC from VLIW by EPIC’s focus on full instruction predication. Predication is used to decrease the occurrence of branches and to increase the speculative execution of instructions. Speculative execution loads data before we know whether or not it will be used.&lt;/p&gt;
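
&lt;p&gt;Here is a toy Python sketch of the predication idea (hypothetical, just to illustrate): instead of branching to one of two code paths, both candidate results are computed unconditionally and a predicate selects one, so there is no branch for the hardware to mispredict.&lt;/p&gt;

```python
def abs_with_branch(x):
    # Branching version: control flow decides which path runs.
    if max(x, 0) == 0:  # true when x is non-positive
        return -x
    return x

def abs_predicated(x):
    # Predicated version: both candidate results are computed
    # regardless of the predicate, then the predicate selects one.
    p = max(x, 0) == 0   # predicate: x is non-positive
    negated = -x         # executes regardless of p
    unchanged = x        # executes regardless of p
    return negated if p else unchanged
```

&lt;p&gt;The predicated version does strictly more work, but on a wide machine the extra work is free and the avoided branch misprediction is the win.&lt;/p&gt;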

&lt;p&gt;You might be familiar with speculative execution from the Spectre and Meltdown attacks. The Spectre and Meltdown attacks are a whole different rabbit hole I won’t go down in this post, but I hope you can see how your own learning is almost like a choose-your-own-adventure game. You can choose to go further down any path at any time.&lt;/p&gt;

&lt;h3 id=&#34;minimal-instruction-set-computer-misc&#34;&gt;Minimal Instruction Set Computer (MISC)&lt;/h3&gt;

&lt;p&gt;This is more minimal than RISC. It includes a very small number of basic operations and corresponding opcodes. Commonly, architectures are categorized as MISC if they are stack based rather than register based, but MISC can also be defined by the number of instructions (greater than one but fewer than 32).&lt;/p&gt;
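
&lt;p&gt;To get a feel for the stack-based style, here is a hypothetical MISC-flavored sketch in Python: a handful of opcodes operating on an implicit stack, with no named registers at all.&lt;/p&gt;

```python
def run(program):
    # A tiny stack machine: every operation works on an implicit
    # stack instead of named registers.
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Computes (2 + 3) * 4 with five instructions and zero registers.
result = run([("push", 2), ("push", 3), ("add", None),
              ("push", 4), ("mul", None)])
print(result)  # 20
```

&lt;p&gt;Because operands are implicit, the instruction encoding can be tiny, which is a big part of the MISC appeal.&lt;/p&gt;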

&lt;p&gt;Quite a few of the first computers can be classified as MISC. These include (but are not limited to) the ORDVAC (1951) and the ILLIAC (1952) from the University of Illinois and the EDSAC (1949) from the University of Cambridge.&lt;/p&gt;

&lt;h3 id=&#34;one-instruction-set-computer-oisc&#34;&gt;One Instruction Set Computer (OISC)&lt;/h3&gt;

&lt;p&gt;This describes an abstract machine that uses only one instruction. It removes the necessity for a machine language opcode. For example, &lt;a href=&#34;https://www.cl.cam.ac.uk/~sd601/papers/mov.pdf&#34;&gt;“mov” is Turing complete&lt;/a&gt;, which means it’s capable of serving as the single instruction of an OISC, as are certain subtract-based instructions.&lt;/p&gt;
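
&lt;p&gt;The classic teaching example of a subtract-based OISC is subleq (&amp;ldquo;subtract and branch if less than or equal to zero&amp;rdquo;). A minimal Python interpreter for it might look like this (my own sketch, using -1 as the halt address):&lt;/p&gt;

```python
def subleq(mem, pc=0):
    # Each instruction is three cells: (a, b, c).
    # Semantics: mem[b] = mem[b] - mem[a]; branch to c when the
    # result is non-positive, otherwise fall through. A pc of -1 halts.
    while pc != -1:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] = mem[b] - mem[a]
        if max(mem[b], 0) == 0:  # result is non-positive
            pc = c
        else:
            pc = pc + 3
    return mem

# Program: mem[7] -= mem[6] (10 - 3 = 7), then halt by subtracting
# mem[8] from itself (always 0, so the branch to -1 is taken).
final = subleq([6, 7, 3, 8, 8, -1, 3, 10, 0])
print(final[7])  # 7
```

&lt;p&gt;Code and data share one memory, and every computation is built out of that single subtract-and-branch primitive.&lt;/p&gt;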

&lt;p&gt;This has not been commercialized, as far as I know, but it is very popular for teaching computer science.&lt;/p&gt;

&lt;p&gt;This leads down a few paths; you could get into all the nitty-gritty details of each instruction set and their differences. For the sake of learning more about RISC-V, let&amp;rsquo;s dig more into that specific design.&lt;/p&gt;

&lt;h2 id=&#34;risc-v-design&#34;&gt;RISC-V Design&lt;/h2&gt;

&lt;p&gt;There is a great paper on the &lt;a href=&#34;https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.pdf&#34;&gt;RISC-V design from Berkeley&lt;/a&gt;. Chapter 2, “Why Develop a New Instruction Set?”, is my favorite. It goes over the pros and cons of a lot of prior instruction sets, why the authors decided to create a new instruction set, and what lessons they learned and brought over from their knowledge of the past. I will summarize what I thought was interesting but I urge you to dig in for yourself and read the entire paper.&lt;/p&gt;

&lt;p&gt;For one, the authors stress that RISC-V is a completely free and open instruction set architecture. In contrast, all the most widely adopted instruction set architectures are proprietary. They are all also immensely complex. For example, you cannot get a hard copy of the x86 manual anymore, and even in PDF form it’s ~5,000 pages, and that doesn’t include the extensions. Who has time to read all of that? Although there is no exact number, &lt;a href=&#34;https://stefanheule.com/blog/how-many-x86-64-instructions-are-there-anyway/&#34;&gt;it’s estimated there are around 2,500 instructions in x86&lt;/a&gt;, which is just unwieldy.&lt;/p&gt;

&lt;p&gt;Props to Sun Microsystems for making SPARC V8 an open standard, but its design decisions are highly reflective of the other instruction sets from that time, leaving it unsuitable as a modern instruction set. “It was designed to be implemented in a single-issue, in-order, five-stage pipeline, and the ISA reflects this assumption.”&lt;/p&gt;

&lt;p&gt;Alpha came out of Digital Equipment Corporation (DEC) in the 1990s, so it got to be built with some learning from the earlier eras. However, it seems like they over-engineered it. Most interestingly, they also did not leave any room in the opcode space for extensions. The authors also point out that ISAs can die, and Alpha is a great example: it is pretty obsolete unless you own an old DEC computer, the last implementation having shipped from HP in 2004 after the IP changed hands again.&lt;/p&gt;

&lt;p&gt;ARMv7 is widely used, and the authors seriously considered it due to its popularity and ubiquity. However, ARMv7 is a closed standard and cannot be extended, making it unsuitable for the authors. They also found some technical problems, but the biggest deterrent to me was the fact that it has over 600 instructions, making it quite complex.&lt;/p&gt;

&lt;p&gt;The authors go over a few more instruction sets, but I think you get the point that none of them were suitable for their needs. Of course, you are more than welcome to dig in further yourself; I am just not going to take the time to reiterate their work here.&lt;/p&gt;

&lt;h2 id=&#34;recapping-how-i-learn&#34;&gt;Recapping how I learn&lt;/h2&gt;

&lt;p&gt;The paper continues into the details of the design of the RISC-V architecture. Some of this I will cover in my DotGo EU talk. For the sake of showing how I learn things, I urge you to read the paper yourself and, when you hit a term or concept you don’t know, research that concept. Continue this until you get a general understanding, then jump back up into the paper where you left off. This cycle is how I dive into new things.&lt;/p&gt;

&lt;p&gt;At the beginning of this post I said I would take you down the path of how I dug into RISC-V, yet I have not even begun to describe the actual design or features of RISC-V. I did this to make a point (and because I was tired, maybe mostly because I was tired). Look how much I dug into the fundamentals of instruction sets before even digging into the thing I set out to learn. This is commonly what I find happens and I wanted to show an example of my process. Now you can go and continue the rest of the process yourself by continuing to read the &lt;a href=&#34;https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.pdf&#34;&gt;RISC-V design paper&lt;/a&gt;, watching other &lt;a href=&#34;https://www.infoq.com/presentations/risc-v-future&#34;&gt;RISC-V talks&lt;/a&gt;, &lt;a href=&#34;https://riscv.org/risc-v-books/&#34;&gt;getting some RISC-V books&lt;/a&gt;, or finding other RISC-V papers and learning from those.&lt;/p&gt;

&lt;p&gt;Then, buy a board and start playing with it. I got the &lt;a href=&#34;https://www.sifive.com/boards/hifive-unleashed&#34;&gt;HiFive Unleashed&lt;/a&gt; and it&amp;rsquo;s awesome!&lt;/p&gt;

&lt;p&gt;I hope this helps open your mind to learning and digging deeper on any topics that interest you. Happy learning!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Defining a Distinguished Engineer</title>
                <link>https://blog.jessfraz.com/post/defining-a-distinguished-engineer/</link>
                <pubDate>Thu, 21 Mar 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/defining-a-distinguished-engineer/</guid>
                    <description>

&lt;p&gt;I learned a lot about myself and the way big companies are organized over the past year or so. I had mentioned a bit in a &lt;a href=&#34;https://blog.jessfraz.com/post/government-medicine-capitalism/&#34;&gt;previous blog post&lt;/a&gt; and &lt;a href=&#34;https://weirdtrickmafia.fm/post/pilot/&#34;&gt;podcast&lt;/a&gt; about “the N + 1 shithead problem” (from &lt;a href=&#34;https://www.youtube.com/watch?v=1KeYzjILqDo&#34;&gt;Bryan Cantrill’s talk on leadership&lt;/a&gt;). To reiterate, the “N + 1 shithead problem” occurs when you are demotivated by seeing people who are a level above you behave poorly, or more bluntly when they behave like a shithead. I know from experience what a huge demotivator this is, and after talking to several other folks I realized this is quite common.&lt;/p&gt;

&lt;p&gt;When faced with this demotivator, I found myself thinking “why would I want to be at their level, when once I get there I’ll just be one amongst the dipshits.” It’s a horrible feeling to have and I’d love to have a model that resembles what I think of as a distinguished engineer or technical fellow.&lt;/p&gt;

&lt;p&gt;In this post I will define what it means to me to be a distinguished engineer or technical fellow and maybe others that agree will modify their ladders to incentivize people to resemble these qualities.&lt;/p&gt;

&lt;h2 id=&#34;technical-leader&#34;&gt;Technical Leader&lt;/h2&gt;

&lt;p&gt;The first thing people think of when they think of a distinguished engineer is that they are a technical leader. I fully agree. A technical leader can understand all parts of a system. They can also be dropped into a new system and pick up the way it is architected and designed with relative ease. I think this is an important distinction to make. It’s good to be an expert in a field, but only being an expert is limiting. It’s also important to understand the full picture and that takes general knowledge. I think having a general knowledge of things outside your area of expertise is key if you choose to gain expertise in something.&lt;/p&gt;

&lt;h3 id=&#34;value-learning&#34;&gt;Value learning&lt;/h3&gt;

&lt;p&gt;A technical leader should always realize that there is more to learn. One cannot be an expert in everything, and you can have a general knowledge of most things without fully understanding the details within. A technical leader should always strive to continue learning and persuade others to continue learning as well.&lt;/p&gt;

&lt;h3 id=&#34;empower-others&#34;&gt;Empower others&lt;/h3&gt;

&lt;p&gt;A technical leader should build up others and empower their colleagues to do things that are more challenging than what they might think they are capable of. This is key for growing other members of an organization. I personally believe you don’t need a high title to take on a hard task; you just need the support and faith that you are capable of handling it. That support can come from the distinguished engineer and be reflected in their behavior towards others.&lt;/p&gt;

&lt;p&gt;A technical leader should also make time for growing and mentoring others.
They should be approachable and communicate with their peers and colleagues in
a way that invites conversation. They should welcome newcomers to the team
and treat them as peers from day one.&lt;/p&gt;

&lt;h3 id=&#34;give-constructive-technical-criticism&#34;&gt;Give constructive technical criticism&lt;/h3&gt;

&lt;p&gt;A distinguished engineer should never tear others down, but they should be capable of giving constructive criticism on technical work. This does not mean finding something wrong just to prove their brilliance; no, that would make them the brilliant jerk. Constructive criticism means teaching others to make their work better when there are problems, while also encouraging them to iterate and empowering them to succeed.&lt;/p&gt;

&lt;h3 id=&#34;have-opinions-loosely-held&#34;&gt;Have opinions loosely held&lt;/h3&gt;

&lt;p&gt;A technical leader should have opinions loosely held on designs and architecture. They should make an active effort not to say &amp;ldquo;strong opinions, loosely held&amp;rdquo; because, given the power dynamic, strong opinions could overpower the rest of the voices. Technical leaders should make sure all voices are heard, and they should fully articulate the &amp;ldquo;why&amp;rdquo; of their opinions for others.&lt;/p&gt;

&lt;p&gt;They do not need to have opinions on everything; that would be pedantic. Technical leaders should be able to use their experience to help others succeed, while also empowering others to own solutions. Technical leaders should not hand down solutions to problems but should let others learn by coming up with solutions themselves. This is where good constructive criticism (from above) comes into play.&lt;/p&gt;

&lt;h3 id=&#34;great-communicator-and-bridge&#34;&gt;Great communicator and bridge&lt;/h3&gt;

&lt;p&gt;A technical leader should have strong communication skills and be able to articulate the “why” of a problem as well as articulate the technical details of designs. They should never communicate in a derogatory manner. They should always communicate with others as peers and colleagues.&lt;/p&gt;

&lt;p&gt;At times, technical leaders will need to act as a bridge between teams. Clear communication is especially important then, as it is always.&lt;/p&gt;

&lt;h3 id=&#34;humility-and-empathy&#34;&gt;Humility and empathy&lt;/h3&gt;

&lt;p&gt;A technical leader should not be driven by ego but by a constant urge to learn
and grow both themselves and their colleagues. They should have empathy for
others and show kindness towards their peers and colleagues.&lt;/p&gt;

&lt;h3 id=&#34;prioritize-shipping-and-decisiveness&#34;&gt;Prioritize shipping and decisiveness&lt;/h3&gt;

&lt;p&gt;A technical leader should value shipping and decisiveness. They should not be susceptible to analysis paralysis. At the end of the day, most people’s jobs involve getting things out the door, and this should be a priority. Of course, shipping should not come at the cost of burning out a team or setting the company on fire.&lt;/p&gt;

&lt;h3 id=&#34;customer-focused&#34;&gt;Customer focused&lt;/h3&gt;

&lt;p&gt;Technical leaders should always seek feedback from their customers. These might
be the internal customers of their infrastructure or external customers if they
are on a product team. The best technical leaders are capable of empathizing
with customers and iterating quickly on customer feedback.&lt;/p&gt;

&lt;h3 id=&#34;build-resilient-systems&#34;&gt;Build resilient systems&lt;/h3&gt;

&lt;p&gt;A part of being a technical leader is having the experience of building
multiple systems in the past. Distinguished engineers should be able to
anticipate various failures from their past experiences and build systems that
will not repeat the same failures. Of course, no system is perfect, so they
should also be able to learn from the failures they cannot anticipate. This
is a cycle they can then apply when building the next system.&lt;/p&gt;

&lt;h3 id=&#34;value-quality-performance-and-security&#34;&gt;Value quality, performance, and security&lt;/h3&gt;

&lt;p&gt;Great technical leaders value quality, performance, and security in what they build. They
stay up to date on advancements and research in technology so that they can use
new techniques to better their solutions. Technical leaders should also build with respect for users and their privacy.&lt;/p&gt;

&lt;h3 id=&#34;value-maintainability&#34;&gt;Value maintainability&lt;/h3&gt;

&lt;p&gt;Technical leaders should value writing code that is easy to maintain and easy
to understand. They should value unit and integration tests, as well as making
sure that when a bug is fixed, a test is added to guard against a regression.
Technical leaders should use code comments, not as a garnish, but to denote
things a reader would need to know. This could be details of a code section
that fixes a specific bug or the reasoning behind why something is written
a certain way. Documenting context is super valuable and helpful for maintainability.&lt;/p&gt;

&lt;h2 id=&#34;community&#34;&gt;Community&lt;/h2&gt;

&lt;p&gt;Good technical leaders are also leaders in outside communities. This can include giving talks on various things they have built, as well as mentoring others in the community or the workplace.&lt;/p&gt;

&lt;h3 id=&#34;learn-from-external-community&#34;&gt;Learn from external community&lt;/h3&gt;

&lt;p&gt;If you silo yourself into only learning within your company, you are missing out on a world of experiences and expertise different from yours in the external community. Technical leaders realize this and place importance on learning from the larger world of computing, not just their silo.&lt;/p&gt;

&lt;h3 id=&#34;value-listening-and-be-open-to-feedback&#34;&gt;Value listening and be open to feedback&lt;/h3&gt;

&lt;p&gt;By gaining feedback and making themselves visible to an external community, leaders avoid a Dunning-Kruger-like effect of only growing inside an echo chamber. It is always valuable to see where the rest of the industry is focusing and how technical leaders at other companies are solving problems. Technical leaders realize that there is much to learn from people with different experiences than their own. They should always be open to listening to others.&lt;/p&gt;

&lt;h3 id=&#34;humility&#34;&gt;Humility&lt;/h3&gt;

&lt;p&gt;Technical leaders should always remain humble and modest. The best technical leaders know that it’s not possible for them to know &lt;em&gt;everything&lt;/em&gt; and will prioritize keeping an open mind to always be learning.&lt;/p&gt;

&lt;h3 id=&#34;call-upon-other-experts&#34;&gt;Call upon other experts&lt;/h3&gt;

&lt;p&gt;The best technical leaders know when they need to call on experts in specific areas for help or feedback on certain designs or architecture. By participating in the external community, leaders form strong networks and bonds with fellow engineers they can call on when they need them. Technical leaders should always be eager to use these relationships when they need them, or to introduce others to these folks if they could use their expertise.&lt;/p&gt;

&lt;h3 id=&#34;value-research&#34;&gt;Value research&lt;/h3&gt;

&lt;p&gt;Along with being able to call upon other experts, technical leaders should
value well-researched solutions. They should strive to learn from prior art.&lt;/p&gt;

&lt;h2 id=&#34;have-fun&#34;&gt;Have fun&lt;/h2&gt;

&lt;p&gt;Always make sure to have fun and not take yourself too seriously!&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://twitter.com/LeaKissner/status/1109259338265165824&#34;&gt;Take the long view, not just the broad view.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are just a few of the things I think define a strong technical leader and engineer. I am sure I will grow this list as I personally grow myself every day.&lt;/p&gt;

&lt;p&gt;Most importantly you must actually &lt;em&gt;do&lt;/em&gt; these things. Actions speak louder than
words.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>An Enigma, unikernels booting on RISC-V, a rack encased in liquid. OH MY.</title>
                <link>https://blog.jessfraz.com/post/enigma-unikernels-risc-v-oh-my/</link>
                <pubDate>Sun, 17 Mar 2019 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/enigma-unikernels-risc-v-oh-my/</guid>
                    <description>

&lt;p&gt;I have written a bit about how I am spending my time while being unemployed and
I thought I would continue.&lt;/p&gt;

&lt;p&gt;There was one thing I had left out of my &lt;a href=&#34;https://blog.jessfraz.com/post/government-medicine-capitalism/&#34;&gt;previous post on my visit to the Pentagon&lt;/a&gt;.
THEY HAVE A REAL ENIGMA MACHINE THERE. Okay, moving on&amp;hellip;&lt;/p&gt;

&lt;h2 id=&#34;qcon-and-university-of-cambridge&#34;&gt;QCon and University of Cambridge&lt;/h2&gt;

&lt;p&gt;I gave a talk at QCon on SGX and ended up giving the same talk to some really
awesome folks at the University of Cambridge. Each time I gave the talk, it provoked
some really interesting conversations. One of the topics that came up a couple of
times was whether RISC-V was going to be supported by any major cloud provider anytime soon.
My honest opinion, which some might disagree with, is that this is years away, BUT it would certainly help adoption and integration into projects if it were backed by a company with a lot of time to develop integrations. Also, I got a bit nerd-sniped by some ARM folks and researchers into looking more into TrustZone (which is the ARM secure enclave). I haven’t dug in yet but it’s on my list.&lt;/p&gt;

&lt;p&gt;It was awesome spending a day in Cambridge (thanks &lt;a href=&#34;https://twitter.com/avsm&#34;&gt;Anil&lt;/a&gt; for the tour!) and learning about all the awesome things they are doing. The MirageOS team is booting unikernels on baremetal RISC-V!&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;🎉OCaml boots on bare-metal &lt;a href=&#34;https://twitter.com/ShaktiProcessor?ref_src=twsrc%5Etfw&#34;&gt;@ShaktiProcessor&lt;/a&gt; &lt;a href=&#34;https://twitter.com/risc_v?ref_src=twsrc%5Etfw&#34;&gt;@risc_v&lt;/a&gt;! 🎉 An important milestone towards building safer apps using &lt;a href=&#34;https://twitter.com/OpenMirage?ref_src=twsrc%5Etfw&#34;&gt;@OpenMirage&lt;/a&gt; on open source hardware. &lt;a href=&#34;https://t.co/XFosAxPROR&#34;&gt;pic.twitter.com/XFosAxPROR&lt;/a&gt;&lt;/p&gt;&amp;mdash; KC Sivaramakrishnan (@kc_srk) &lt;a href=&#34;https://twitter.com/kc_srk/status/1101479406084583424?ref_src=twsrc%5Etfw&#34;&gt;March 1, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;They use this on boards to power light bulbs (at the University!) super securely since it removes the need for all the shitty firmware most other things ship and has a super minimal environment. I’m sure you can think of a number of different other use cases as well. Honestly, unikernels replacing all the crap firmware in the world would be a huge win.&lt;/p&gt;

&lt;h2 id=&#34;open-compute-summit&#34;&gt;Open Compute Summit&lt;/h2&gt;

&lt;p&gt;Just this past week I spent a day at the Open Compute Summit. What is happening there in the open firmware space is truly awesome. They had demos of hardware they are booting with LinuxBoot and Coreboot. Facebook runs this on their infrastructure as well as with OpenBMC to replace the traditional, proprietary BMC firmware. Trammell Hudson has some &lt;a href=&#34;https://trmm.net/LinuxBoot_34c3&#34;&gt;great posts&lt;/a&gt; on LinuxBoot, which include links to some really great talks by him and Ron Minnich.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;😍 the open systems firmware community is awesome &lt;a href=&#34;https://t.co/DAqudm6M4Z&#34;&gt;pic.twitter.com/DAqudm6M4Z&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1106301027408465920?ref_src=twsrc%5Etfw&#34;&gt;March 14, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Facebook’s server racks are gorgeous. They have a power bus which runs down the center and everything gets power from that, with the main power coming out of the power unit towards the middle of the rack (in the first picture below).&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;The Facebook rack and node designs are seriously gorgeous, simple. The power bar &lt;em&gt;chef kiss&lt;/em&gt; &lt;a href=&#34;https://t.co/pGphy9uLLl&#34;&gt;pic.twitter.com/pGphy9uLLl&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1106336080956018689?ref_src=twsrc%5Etfw&#34;&gt;March 14, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;h3 id=&#34;boot-guard&#34;&gt;Boot Guard&lt;/h3&gt;

&lt;p&gt;One thing I learned that I found fascinating was Boot Guard for Intel processors and its equivalents on ARM and AMD. Boot Guard is supposed to verify the firmware signatures for the processor. The problem with this, in Intel’s case, is that only Intel has the keys for signing firmware packages. This makes it impossible for you to then use Coreboot and LinuxBoot or equivalents as firmware on those processors. If you tried, the firmware would not be signed with Intel’s key and the board would be bricked. Matthew Garrett wrote &lt;a href=&#34;https://mjg59.dreamwidth.org/33981.html&#34;&gt;a great post&lt;/a&gt; about this as well.&lt;/p&gt;

&lt;p&gt;If a person owns the hardware, they have a right to own the firmware as well. Boot Guard prevents this. In &lt;a href=&#34;https://trmm.net/OSFC_2018_Security_keynote#Boot_Guard&#34;&gt;another great talk&lt;/a&gt; by Trammell, he found a vulnerability to &lt;a href=&#34;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12169&#34;&gt;bypass Boot Guard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;CVE-2018-12169 also potentially allows a developer to &amp;quot;jailbreak&amp;quot; their BootGuard protected laptop since the UEFI DXE volume can be replaced with a user provided LinuxBoot ROM image. &lt;a href=&#34;https://t.co/yHwwMOTyx7&#34;&gt;https://t.co/yHwwMOTyx7&lt;/a&gt; &lt;a href=&#34;https://t.co/MeWI0DGUBf&#34;&gt;pic.twitter.com/MeWI0DGUBf&lt;/a&gt;&lt;/p&gt;&amp;mdash; Trammell Hudson ⚙ (@qrs) &lt;a href=&#34;https://twitter.com/qrs/status/1044157473882591233?ref_src=twsrc%5Etfw&#34;&gt;September 24, 2018&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;This &amp;ldquo;feature&amp;rdquo; from hardware vendors is stifling this
community&amp;rsquo;s innovation and preventing technology from being pushed to a
safer place. If you are in a position to push back on these hardware vendors,
please do so. They need all the help they can get.&lt;/p&gt;

&lt;h3 id=&#34;server-rack-encased-in-liquid&#34;&gt;Server rack encased in liquid&lt;/h3&gt;

&lt;p&gt;Lastly, I saw something batshit crazy in the Expo Hall at the Open Compute
Summit. One vendor had encased an entire server rack in liquid for liquid
cooling. I&amp;rsquo;m not sure I could sleep at night using this. The funniest
part, though, was that the demo at their booth still had fans in the rack! I
mean&amp;hellip; why would you need fans if you had liquid cooling&amp;hellip; They
claimed the fans were just &amp;ldquo;left over&amp;rdquo; and you wouldn&amp;rsquo;t need
them. But at a conference where everyone is showing off their custom hardware,
you&amp;rsquo;d think they would have left the fans at home ;).&lt;/p&gt;

&lt;p&gt;That&amp;rsquo;s the end of this update of my adventures. Hope you all enjoyed it. I know
I enjoyed living it!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Trust and Integrity</title>
                <link>https://blog.jessfraz.com/post/trust-and-integrity/</link>
                <pubDate>Fri, 01 Mar 2019 18:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/trust-and-integrity/</guid>
                    <description>&lt;p&gt;I stated in my first post on my &lt;a href=&#34;https://blog.jessfraz.com/post/government-medicine-capitalism/&#34;&gt;reflections of leadership in other
industries&lt;/a&gt;
that I would write a follow up post after having hung out in the world of
finance for a day. This is pretty easy to do when you live in NYC.
In college, I was originally a finance major at NYU Stern School of Business
before transferring out, so I have always had a bit of an affinity for it.&lt;/p&gt;

&lt;p&gt;I consider myself pretty good at reading people. This, of course, was not
always the case. I became better at reading people after a few really
bad experiences where I should have known better than to trust someone. I&amp;rsquo;ve
read a bunch of books on how to tell when people are lying, and I called out my
favorite in my &lt;a href=&#34;https://blog.jessfraz.com/post/books/&#34;&gt;books post&lt;/a&gt;. This is
not a skill I wish I had needed to learn, but it does protect you from people
who might not have the best intentions.&lt;/p&gt;

&lt;p&gt;Most people will tell you to always assume good intentions, and this is true to
an extent. However, having been through some really bad experiences where I did
&amp;ldquo;assume good intentions&amp;rdquo; and should not have, I tend to be less and less willing
to do that.&lt;/p&gt;

&lt;p&gt;I am saying this not because I think people in finance are shady (they
aren&amp;rsquo;t), but because I believe this is important in any field. I personally place a lot of value on trust and
integrity.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;m not really going to focus this post on what an investment banker&amp;rsquo;s job is
like because, honestly, it wasn&amp;rsquo;t really anything to write home about. What I did
find interesting was the lack of trust in the workplace. Trust is a huge thing
for me, like I said, and I think having transparency goes hand-in-hand with that.&lt;/p&gt;

&lt;p&gt;To gain trust, I believe a leader must also have integrity and a track record
of doing the right thing. I liked this response to a tweet of mine about using &amp;ldquo;trust
tokens&amp;rdquo; in cases where leadership needs to keep something private.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;They are. It gets hard with legal things like SEC filings and acquisitions but that’s where an already good leadership team can use existing trust tokens.&lt;/p&gt;&amp;mdash; Silvia Botros (@dbsmasher) &lt;a href=&#34;https://twitter.com/dbsmasher/status/1098602904838197253?ref_src=twsrc%5Etfw&#34;&gt;February 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I think people tend to underestimate how important it is to be transparent
about things that don&amp;rsquo;t need to be private. I&amp;rsquo;ve seen a lot of people in
positions of power use their ability to keep information private &lt;em&gt;against&lt;/em&gt;
those under them. They don&amp;rsquo;t fully disclose the &amp;ldquo;why&amp;rdquo;, which leaves the people
they manage unable to fully understand the problem, let alone help solve it.
It also doesn&amp;rsquo;t build trust.&lt;/p&gt;

&lt;p&gt;Leaders should try to be cognizant of when something needs to be private and
when they can be transparent about information. I also really enjoyed this
insightful tweet:&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Unlike respect, which can start from a positive value and go up or down depending on behavior, trust starts at 0. You have to earn the trust of your colleagues and reports before you can take loans out on it. &lt;a href=&#34;https://t.co/aWRpdjAtBR&#34;&gt;https://t.co/aWRpdjAtBR&lt;/a&gt;&lt;/p&gt;&amp;mdash; julia ferraioli (@juliaferraioli) &lt;a href=&#34;https://twitter.com/juliaferraioli/status/1101572682863296514?ref_src=twsrc%5Etfw&#34;&gt;March 1, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Just thought I would put my thoughts in writing since I said I would. This
experience seeing how other industries work has been super fun for me. I might
try to find some other jobs to check out as well in the future.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Government. Medicine. Capitalism?</title>
                <link>https://blog.jessfraz.com/post/government-medicine-capitalism/</link>
                <pubDate>Wed, 27 Feb 2019 18:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/government-medicine-capitalism/</guid>
                    <description>

&lt;p&gt;I’ve had a bit of a crazy week. Tuesday, I got a tour of the Pentagon from a friend who is in the &lt;a href=&#34;https://www.usds.gov/&#34;&gt;US Digital Service&lt;/a&gt; (USDS) for the Department of Defense (DoD), called the Defense Digital Service (DDS). Wednesday (the day of writing this), I shadowed a friend who is a surgical resident during their shift in a hospital. Friday, I have plans to shadow a friend who is an investment banker at a private equity firm, and I will do a &lt;a href=&#34;https://blog.jessfraz.com/post/trust-and-integrity/&#34;&gt;follow-up post&lt;/a&gt;. You can consider this like “Eat. Pray. Love.” except it’s “Government. Medicine. Capitalism?”&lt;/p&gt;

&lt;p&gt;First, I would like to thank everyone for sharing a bit of their life with me; now I get to share what I learned from these experiences with you. When I went into this, I didn’t think much of it. I wanted to go to DC to see some museums and ended up texting my friend on the way down, so we made a day of it. My other friend, who is a surgical resident, and I had once gotten into a pretty deep discussion about how weird tech’s culture is compared to theirs, so I always had an open offer to see how they work. Then I posted on Twitter what I was doing and I guess there was a sort of pattern, so it became a thing…&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Sweet, it’s on, Friday I’m going to be a douchey investment banker at Lehman Brothers, no just kidding some private equity firm, but I nailed the joke I’ll fit right in ;)&lt;a href=&#34;https://t.co/yz3EgKD2Ib&#34;&gt;https://t.co/yz3EgKD2Ib&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1100566908640874498?ref_src=twsrc%5Etfw&#34;&gt;February 27, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into what I’ve learned and observed, then I will try to put a nice ribbon on it and tie it all together for you.&lt;/p&gt;

&lt;h2 id=&#34;government&#34;&gt;Government.&lt;/h2&gt;

&lt;p&gt;Let me start by saying, if you ever have a chance to do a stint at the &lt;a href=&#34;https://www.usds.gov/&#34;&gt;US Digital Service&lt;/a&gt; it seems absolutely amazing. The program is great for tech people who want to have an impact on modernizing technology for the government. Having just left a job at Microsoft, I was quite familiar with a very large organizational structure and the power dynamics that exist in people with titles. It was interesting to me to see the parallel between that and the setup of the government.&lt;/p&gt;

&lt;p&gt;When you see someone who has served in the military, they usually have a set of badges showing their accomplishments. This is cool because it is accomplishment-based. I love accomplishment-based incentive systems since you have to &lt;em&gt;do&lt;/em&gt; something in order to get rewarded. There are badges for everything; one I really liked was the &lt;a href=&#34;https://en.wikipedia.org/wiki/Ranger_tab&#34;&gt;“ranger tab”&lt;/a&gt;, which effectively means that if that person were ever dropped in the middle of nowhere, they could fend for themselves and survive. WOW, what a meaningful item. I found this system really cool since I like decoding things and I personally hate titles that mean nothing. This badge system holds a lot of meaning and defines what the person has &lt;em&gt;done&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Before going to the Pentagon, I had watched this &lt;a href=&#34;https://www.youtube.com/watch?v=1KeYzjILqDo&#34;&gt;Bryan Cantrill talk on leadership&lt;/a&gt; where he brought up the “N+1 shithead problem”. The “N+1 shithead problem” happens when there is a person who acts like a shithead one title bump above you, and it is a huge demotivator. What helped Bryan get over this was that instead of looking at the shithead a title above him, he focused on the best person at the title above him and used them for motivation. This works to an extent. I know from experience how demotivating it is to see a shithead consistently fail up. I believe that most titles are bullshit and climbing a career ladder is bullshit; what really matters is what you &lt;em&gt;do&lt;/em&gt; and what &lt;em&gt;impact you have&lt;/em&gt;. The talk also covers how Bryan set up his team to have only one title, Software Engineer, and how he motivated his team with a purpose and a mission, not with climbing a ladder.&lt;/p&gt;

&lt;p&gt;My friend and I ended up getting into an interesting conversation about this and how they handle authority and titles at the DDS and within the government. The way the DDS program is set up, the individuals who join are at a colonel-level rank, which is one below a general, meaning they are pretty high up in the pecking order. They also have orders from the Secretary of State to override any authority if need be. They end up not needing to escalate to using those orders, though, since just the threat of using them is enough to get bad actors to listen.&lt;/p&gt;

&lt;p&gt;The Defense Digital Service (DDS) also recruits internally from inside the government. If there is a technically exceptional individual in another role, they will recruit them into the DDS. They have had people from the Army, the Navy, and other parts join. Since the structure and dynamics of the organizations they came from are so different, joining the DDS ends up having an effect on them. Before being a part of the DDS, they typically could not push back against those with authority over them, and the DDS is all about fighting for the truth and fixing what is broken, even if they are the only ones who wear hoodies instead of a uniform.&lt;/p&gt;

&lt;p&gt;When a general or other authority comes into the office of the Defense Digital Service, they aren’t greeted with coffee and kissed feet; they are just asked to sit on the couch side by side with their peers (the DDS) and talk about things as colleagues. This is how leadership should work: not with power over someone else, but working with others as peers and colleagues.&lt;/p&gt;

&lt;p&gt;Another thing I learned was that for non-confidential code, they use GitHub and try to modernize agencies they work with to do the same and use modern languages and tools. It also reminded me of this awesome article about how &lt;a href=&#34;https://arstechnica.com/tech-policy/2018/11/how-i-changed-the-law-with-a-github-pull-request/&#34;&gt;someone had changed the law via a GitHub pull request&lt;/a&gt; because the District of Columbia’s legal code is hosted on GitHub. &lt;strong&gt;Wouldn’t it be cool if that’s how every part of the government worked?&lt;/strong&gt; &lt;strong&gt;Then, if you wanted to change a process or law you would just send a pull request…&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are just a few of the things I learned in my day at the Pentagon. Again, I highly recommend applying to the &lt;a href=&#34;https://www.usds.gov/&#34;&gt;USDS&lt;/a&gt; if this is interesting to you. Let’s move on to medicine…&lt;/p&gt;

&lt;h2 id=&#34;medicine&#34;&gt;Medicine&lt;/h2&gt;

&lt;p&gt;My friend had a shift in a hospital here in New York City and I asked to follow along. I got to wear scrubs and everything. I also had to wake up at the crack of dawn for this (5am). This is the fourth year of my friend’s surgical residency, so he’s considered pretty senior since a residency is usually a five-year thing.&lt;/p&gt;

&lt;p&gt;The “junior residents” report to “senior residents” (my friend) who then report to the “attending physicians”. George Clooney was not their attending physician; I was disappointed since I like the show ER ;). Back to reality, we did rounds and checked on all the patients, and then I got to watch a surgery. That was super cool. It was also not my first surgery since, in high school, I had shadowed a friend’s dad who was an anesthesiologist.&lt;/p&gt;

&lt;p&gt;What I really took away from the day was the respect that the attending physicians had for the senior residents. There was a lot of respect from the attendings, and the senior residents seemed to have a lot of autonomy. In the operating room, the surgery was led by a different senior resident and the attending was mostly passing tools. I thought this was super cute. I even called it “super cute” out loud after… to which my friend rolled their eyes. But really, the surgery was done “as a team” with no one calling out orders to someone like a “code monkey.”&lt;/p&gt;

&lt;p&gt;I thought this was great and in stark contrast to what I see when technical people are promoted to manager. Ill-trained managers pass down technical work without explaining the “why,” having already arrived at a solution. But it is actually &lt;em&gt;the team’s&lt;/em&gt; goal to come to a solution, not the manager’s; that is not actually a part of being a manager. I wish more managers would focus on managing and growing the people on their team versus using the role as a position of power over the technical work. If they still need to do technical work in that role, it should be like “passing the scalpel” when someone needs it.&lt;/p&gt;

&lt;p&gt;There was also a point in the day where my friend helped a junior resident with some assignments. It was super interesting and I wondered if an open source mindset could help here. I remembered a talk I saw at Linux Conf Australia in 2018 on &lt;a href=&#34;https://archive.org/details/lca2018-Housekeeping_and_Keynote_1_Matthew_Todd&#34;&gt;open source pharma&lt;/a&gt;. The talk focused on how the open sharing of research is leading to innovation in biomedical research.&lt;/p&gt;

&lt;p&gt;What I love about open source is the ability to share knowledge and “ping” an expert when you need it. We did this with docker a couple times, when we “pinged” the kernel namespaces maintainer on features to make sure we had implemented it correctly. It would be pretty cool to be able to learn and collaborate with the best easily in any field.&lt;/p&gt;

&lt;h2 id=&#34;tying-it-all-together&#34;&gt;Tying it all together&lt;/h2&gt;

&lt;p&gt;In both Government and Medicine I found hierarchical structures to learn from. The badges in the military as a system of tracking accomplishments, not power, really spoke to me. So did the attending physician passing the tools to the senior residents and working as a team, versus the attending taking charge completely.&lt;/p&gt;

&lt;p&gt;I think both Government and Medicine could be changed in a way that would also change the world if more laws and open knowledge wound up on GitHub.&lt;/p&gt;

&lt;p&gt;We live in a world where Reddit, Twitter, and other social sites are littered with hate and fake news. I wish there was a place for a source of intelligence and knowledge. I think &lt;em&gt;that&lt;/em&gt; would change the world while also allowing the world (or a law) to be changed with just a pull request.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Books I Recommend</title>
                <link>https://blog.jessfraz.com/post/books/</link>
                <pubDate>Mon, 25 Feb 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/books/</guid>
                    <description>

&lt;p&gt;You can find my goodreads account at
&lt;a href=&#34;https://goodreads.com/jessfraz&#34;&gt;goodreads.com/jessfraz&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;romanticized-tech&#34;&gt;Romanticized Tech&lt;/h3&gt;

&lt;p&gt;I call this genre of books &amp;ldquo;romanticized tech&amp;rdquo; because tech is portrayed
in them in a very idealistic and whimsical way. It&amp;rsquo;s nice to pick up
one of these if you are feeling very &amp;ldquo;Black Mirror&amp;rdquo; to remember why you might
have even started in this field.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Soul-New-Machine-Tracy-Kidder/dp/0316491977&#34;&gt;Soul of a New Machine&lt;/a&gt;: &lt;a href=&#34;https://twitter.com/bcantrill&#34;&gt;Bryan Cantrill&lt;/a&gt; recommended this to me and it&amp;rsquo;s amazing. It&amp;rsquo;s about Data General building a new computer and the passion the team building it put into it. I wrote &lt;a href=&#34;https://blog.jessfraz.com/post/new-golden-age-of-building-with-soul/&#34;&gt;a post with some thoughts on it&lt;/a&gt; and &lt;a href=&#34;http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/&#34;&gt;so did he&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Peoples-History-Computing-United-States-ebook/dp/B07DGJ74FV&#34;&gt;A People’s History of Computing in the United States&lt;/a&gt;: This book has a bunch of short stories that chronologically take you through some of the most important moments of computing history. I loved reading this after &amp;ldquo;Soul of a New Machine&amp;rdquo; because it really tied in nicely to the references of Data General and DEC.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/big-machine-Robert-Jungk/dp/B0006BUS1Y&#34;&gt;The Big Machine&lt;/a&gt;: I found this book at the infamous Bell&amp;rsquo;s Books in Palo Alto. It was all worn, like someone had previously loved it a lot, which makes me love it even more. It&amp;rsquo;s about CERN and how it came to be and has the same sort of romanticized view of tech as &amp;ldquo;Soul of a New Machine&amp;rdquo; and &amp;ldquo;A People’s History of Computing in the United States&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/0973864907/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&amp;amp;psc=1&#34;&gt;On the Edge&lt;/a&gt;: The story of the Commodore computer company. Goes through the entire history. Super dense, but overall interesting.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Where-Wizards-Stay-Up-Late/dp/0684832674&#34;&gt;Where Wizards Stay Up Late&lt;/a&gt;: This book details the story of how the Internet came to be!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&#34;non-fiction&#34;&gt;Non-Fiction&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Spy-Lie-Former-Officers-Deception/dp/1250029627&#34;&gt;Spy the Lie: Former CIA Officers Teach You How to Detect Deception&lt;/a&gt;: I have now read this book twice. It is amazing if you want to be able to read when people are lying to you. It&amp;rsquo;s a good read backed by a lot of experience from the CIA. Honestly, after reading it the world will be a much different place.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Super-Brain-Unleashing-Explosive-Well-Being/dp/0307956830&#34;&gt;Super Brain&lt;/a&gt;: I loved this book. It uses science to describe how the brain processes different emotions and what that does to your overall health. It will leave you with all sorts of good feelings after as well as teaching you quite a bit about misconceptions on how the brain works.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/0393355624/&#34;&gt;&amp;ldquo;Surely You&amp;rsquo;re Joking, Mr. Feynman!&amp;rdquo;: Adventures of a Curious Character&lt;/a&gt;: A witty book taken from short stories the notorious professor used to tell. Awesome read, flows quite quickly and is fun. It&amp;rsquo;s filled with fun little physics and life lessons.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/What-Care-Other-People-Think/dp/0393355640&#34;&gt;&amp;ldquo;What Do You Care What Other People Think?&amp;rdquo;: Further Adventures of a Curious Character&lt;/a&gt;: More Feynman stories just like the above.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Quiet-Power-Introverts-World-Talking/dp/0307352153/&#34;&gt;Quiet: The Power of Introverts in a World That Can&amp;rsquo;t Stop Talking&lt;/a&gt;: This is an awesome book and you should watch her &lt;a href=&#34;https://www.ted.com/talks/susan_cain_the_power_of_introverts?language=en&#34;&gt;TED talk&lt;/a&gt; as well. It&amp;rsquo;s about the power of introverts and how being an introvert should not be something to be ashamed of but rather proud of.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Brief-Answers-Questions-Stephen-Hawking/dp/1984819194&#34;&gt;Brief Answers to the Big Questions&lt;/a&gt;: If you have read &amp;ldquo;A Brief History of Time&amp;rdquo;, you will like this follow up answering some of the larger questions of the universe.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/1591847818/&#34;&gt;Ego is the Enemy&lt;/a&gt;: This book
is a great reminder in staying modest and humble. Ego so often gets in the
way of great leadership and success and I greatly enjoyed reading a book
that focused on self-confidence without ego.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/1681734338/&#34;&gt;The Datacenter As a Computer: Designing Warehouse-scale Machines (Synthesis Lectures on Computer Architecture)&lt;/a&gt;: This is an overview of how Google designs their datacenters. Overall, super valuable if you work in the space of high-scale compute. I only wish it disclosed more of the reasoning behind certain technical decisions.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/1727125452&#34;&gt;A Programmer&amp;rsquo;s Introduction to Mathematics&lt;/a&gt;: I was a math major so I have a huge fondness for mathematics. This is a great book about math from the point of view of programming.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/0385528752&#34;&gt;Switch: How to Change Things When Change Is Hard&lt;/a&gt;: I got this book as a recommendation from &lt;a href=&#34;https://twitter.com/lara_hogan&#34;&gt;Lara Hogan&lt;/a&gt;. It is a great read if you are trying to change something in a culture that does not embrace change. It really details a great approach for doing so that feels like it could almost be weaponized :).&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Managers-Path-Leaders-Navigating-Growth/dp/1491973897&#34;&gt;The Manager&amp;rsquo;s Path: A Guide for Tech Leaders Navigating Growth and Change&lt;/a&gt;: Every single book list should include &lt;a href=&#34;https://twitter.com/skamille&#34;&gt;Camille&amp;rsquo;s&lt;/a&gt; book. It is a great read for managers and non-managers and has given me the tools for knowing what is normal and what is not.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339&#34;&gt;Accelerate&lt;/a&gt;: I cannot believe I forgot this book the first time around, great for high-performance teams who want to ship software, based on real data, and written by the badass &lt;a href=&#34;https://twitter.com/nicolefv&#34;&gt;Nicole Forsgren&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Dear-Founder-Letters-Manages-Business/dp/1250195640/ref=sr_1_1?crid=1QHH9O6LT4K4H&amp;amp;keywords=dear+founder&amp;amp;qid=1555541116&amp;amp;s=books&amp;amp;sprefix=dear+founder%2Cstripbooks%2C195&amp;amp;sr=1-1&#34;&gt;Dear Founder&lt;/a&gt;: This book takes you through the evolution and stages of starting a business. It&amp;rsquo;s in the form of letters and is a good, fast read. Also seems like an effective reference to flip back to when you need specific advice in a pinch.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239&#34;&gt;Good Strategy, Bad Strategy&lt;/a&gt;: &lt;a href=&#34;https://twitter.com/nicolefv&#34;&gt;Nicole&lt;/a&gt; recommended this book to me and it takes a very good approach to strategy. It really focuses on keeping things transparent and real.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Last-Days-Night-Novel/dp/0812988922/ref=sr_1_1?crid=UJ77WZEFM6G7&amp;amp;keywords=the+last+days+of+night+by+graham+moore&amp;amp;qid=1555541194&amp;amp;s=books&amp;amp;sprefix=the+last+day%2Cstripbooks%2C200&amp;amp;sr=1-1&#34;&gt;The Last Days of Night&lt;/a&gt;: A story about a guy who gets sued by Thomas Edison. Super fun to read and has a lot of history interwoven in. Tesla makes an appearance and there&amp;rsquo;s lots of drama with false motives. Great book :)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Hard-Thing-About-Things-Building/dp/0062273205&#34;&gt;The Hard Thing about Hard Things&lt;/a&gt;: This book is great when it comes to management and also having empathy for those faced with hard decisions. It puts an emphasis on transparency and saying things &amp;ldquo;like they are&amp;rdquo; and I really appreciate that. Also there are rap quotes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Finite-Infinite-Games-James-Carse/dp/1476731713&#34;&gt;Finite and Infinite Games&lt;/a&gt;: This book is kind of a mind trip in the best ways. I&amp;rsquo;ll leave it at that.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/080187971X/&#34;&gt;Deep Down Things: The Breathtaking Beauty of Particle Physics&lt;/a&gt;: A great introduction to particle physics.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/0262533413/&#34;&gt;The Character of Physical Law&lt;/a&gt;: Richard Feynman&amp;rsquo;s brief overview of the laws of physics.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/Feynman-Lectures-Physics-boxed-set/dp/0465023827/&#34;&gt;The Feynman Lectures on Physics&lt;/a&gt;: The complete set of Feynman lectures on physics.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/0198504861/&#34;&gt;The Particle Odyssey: A Journey to the Heart of Matter&lt;/a&gt;: Great introduction to particle physics with tons of illustrations.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.amazon.com/gp/product/1455587982/&#34;&gt;The Telomere Effect&lt;/a&gt;: Science
behind aging and how you can change your lifestyle to help you live longer.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&#34;bookshelves&#34;&gt;Bookshelves&lt;/h3&gt;

&lt;p&gt;If you are interested in books and/or bookshelves, I started a thread with pictures of bookshelves and there are some great finds in there:&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;These are my favorite two shelves of my bookshelf, and yes that’s a slug Jerry. Show me your bookshelves, doesn’t just have to be tech books :)&lt;br&gt;&lt;br&gt;(most of these are from my grandpa :) &lt;a href=&#34;https://t.co/0qiqytAYuL&#34;&gt;pic.twitter.com/0qiqytAYuL&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1099435856123826176?ref_src=twsrc%5Etfw&#34;&gt;February 23, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>From the File Vault: Pharmy Tales</title>
                <link>https://blog.jessfraz.com/post/pharmy-tales/</link>
                <pubDate>Sat, 23 Feb 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/pharmy-tales/</guid>
                    <description>

&lt;p&gt;A few of you, thank you, have reached out to me saying that you love my writing style. It means a lot to me because I like to think that I write how I speak. This was not always taken well, however. I tend to be a bit of a sarcastic troll.&lt;/p&gt;

&lt;p&gt;The following post is meant to show others who may be like me and hesitant towards their writing style due to feedback they’ve gotten. I’d love to empower them to be comfortable with themselves.&lt;/p&gt;

&lt;p&gt;In high school, I got my first D on an assignment ever in AP English. It was on my senior thesis. Now, if you haven’t been able to tell already, I am a bit of a troll and I also value hearing all perspectives and then finding the truth somewhere in the middle. In this sense, I wrote my senior thesis on viewing the moon landing from the other side and trying to prove it was fake. Today, this is probably equivalent to trying to prove flat-earthers are correct. I had some shady sources as you can imagine and the whole thing was written with a large dose of satire. My English professor, on the other hand, was not one for jokes I soon learned because I landed myself a big fat D on the assignment. Luckily for me, I had already been accepted to NYU early admission so other than a large dose of feeling like shit (I am a perfectionist) I decided to not give any fucks.&lt;/p&gt;

&lt;p&gt;I tried to find that paper and couldn’t, but instead I found my college entrance essay, which is in the same style. I worked at a pharmacy all through high school and on breaks from college and some pretty weird shit happened. My mom thought this was a terrible idea for a college essay and I’d never be accepted anywhere and my dad loved it. Never be ashamed to be yourself and think differently. So here it is&amp;hellip;&lt;/p&gt;

&lt;h2 id=&#34;pharmy-tales&#34;&gt;Pharmy Tales&lt;/h2&gt;

&lt;p&gt;I work in a pharmacy, which sounds like a pretty normal job where nothing of great importance happens. Don’t jump to conclusions, however. Here is a bundle of short stories, using no real names, that I like to call “Pharmy Tales.”&lt;/p&gt;

&lt;p&gt;&amp;hellip;&amp;hellip;&lt;/p&gt;

&lt;p&gt;Every time Mrs. H calls the pharmacy she complains, either about her bill or her latest delivery of medications. Unfortunately, I usually answer the phone.&lt;/p&gt;

&lt;p&gt;“Camelback Village Pharmacy,” I squeaked.&lt;/p&gt;

&lt;p&gt;“Is Dan there?” Mrs. H asked, sending a chill down my back. Dan is the owner. Just the sight of him causes even the angriest customer to give up their battle.&lt;/p&gt;

&lt;p&gt;“Tuesday is his day off. Would you like to speak to Laurie?” I answer.&lt;/p&gt;

&lt;p&gt;“No, I would not. I just received my bill for July and it has two delivery charges on it. How do you explain that?”&lt;/p&gt;

&lt;p&gt;“Did you get two deliveries?”&lt;/p&gt;

&lt;p&gt;“That’s not the issue. Every time I get my bill something is wrong. I shouldn’t have to always second check your billing statements.”&lt;/p&gt;

&lt;p&gt;“I’m sorry.”&lt;/p&gt;

&lt;p&gt;“Well ‘sorry’ doesn’t cut it. Make sure Dan calls me tomorrow.” The receiver clicks.&lt;/p&gt;

&lt;p&gt;I felt like I had just run a marathon.&lt;/p&gt;

&lt;p&gt;“I’ll pick up the phone in case she calls again,” offers Ross, another employee.&lt;/p&gt;

&lt;p&gt;The phone rings about 10 minutes later and Ross answers. As it turns out, it’s a call from a different customer thanking us because he received his mail-out order and it was perfect.&lt;/p&gt;

&lt;p&gt;Just my luck.&lt;/p&gt;

&lt;p&gt;&amp;hellip;&amp;hellip;&lt;/p&gt;

&lt;p&gt;Ross hates Mr. W, an infamous customer known for his inappropriate language. One Saturday, he strolls in drunk with his dog, Bella. I love Bella. She looks like Beethoven. Ross immediately ducks down behind the counter.&lt;/p&gt;

&lt;p&gt;“We just had a scotch on the rocks and we’re really feeling it right now,” says Mr. W, referring to himself and his dog. “Did you order my cane, Dan?”&lt;/p&gt;

&lt;p&gt;“Yes, I have it right here,” Dan holds up the cane.&lt;/p&gt;

&lt;p&gt;“That’s not it, that looks like a turd. Send it back or even better, throw it away,” Mr. W yells.&lt;/p&gt;

&lt;p&gt;By now Ross’s back is killing him, so he is forced to stand up.&lt;/p&gt;

&lt;p&gt;“Is that the sexy boy from Germany?” exclaims Mr. W, pointing at Ross. He then spends the next thirty minutes interrogating Ross about the World Cup and whether or not he lost his virginity. By the end of their conversation, Bella was asleep on the pharmacy floor and drooling.&lt;/p&gt;

&lt;p&gt;&amp;hellip;&amp;hellip;&lt;/p&gt;

&lt;p&gt;In the middle of one routine day at the pharmacy, a woman passed me a prescription, I passed it to the pharmacist, and I continued into “La-La-Land” for the next twenty minutes.&lt;/p&gt;

&lt;p&gt;The pharmacist realized that the prescription was fraudulent and inconspicuously called the police.&lt;/p&gt;

&lt;p&gt;Ross stalled her with a conversation. She decided to leave and come back when it was ready. Five minutes later, Ross met the police outside the pharmacy and told them what the woman had looked like.&lt;/p&gt;

&lt;p&gt;On her way back to pick up the prescription, she spotted Ross with the police, and she rolled under numerous cars in the parking lot to escape. The police soon caught and arrested her.&lt;/p&gt;

&lt;p&gt;One officer came into the pharmacy to talk to the rest of us. As the very attractive police officer glided down the allergy and laxative aisle, I snapped out of “La-La-Land” and questioned what was going on. I had no idea all of this action was taking place just yards away from our neighborhood pharmacy.&lt;/p&gt;

&lt;p&gt;&amp;hellip;&amp;hellip;&lt;/p&gt;

&lt;p&gt;Don’t be fooled by the misconception that working in a pharmacy is dull. Every day, I come home with a new story to tell.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Reflections on SGX</title>
                <link>https://blog.jessfraz.com/post/reflections-on-sgx/</link>
                <pubDate>Tue, 19 Feb 2019 13:16:52 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/reflections-on-sgx/</guid>
                    <description>&lt;p&gt;I like to consider all the variables in a problem space before coming to a conclusion. As humans we have a tendency to jump to conclusions rather quickly. I try not to do this but everyone makes mistakes.&lt;/p&gt;

&lt;p&gt;More information about Intel SGX was brought to my attention after my &lt;a href=&#34;https://blog.jessfraz.com/post/the-firmware-rabbit-hole/&#34;&gt;initial blog post&lt;/a&gt; on it. I’d like to take the time to go through that information and my current thoughts on the technology after having this extended context.&lt;/p&gt;

&lt;p&gt;Trammell Hudson (&lt;a href=&#34;https://twitter.com/qrs&#34;&gt;@qrs&lt;/a&gt;) pointed out to me yesterday that SGX was originally built for the use case of DRM for Netflix, Microsoft, etc. Having this context makes the problems that arise when you try to do code execution inside an enclave seem like a forgivable sin. It was not until the &lt;a href=&#34;https://www.usenix.org/conference/osdi14/technical-sessions/presentation/baumann&#34;&gt;HAVEN paper&lt;/a&gt; that people even considered using enclaves as an execution environment. In that regard, the HAVEN paper was truly novel. I may disagree with shoving an entire operating system in there, but the idea of executing code in an environment with encrypted memory as a way to use the cloud without trusting the cloud is a respectable feat.&lt;/p&gt;

&lt;p&gt;Another person who I truly respect and admire for the thought they put into what they build is Joanna Rutkowska (&lt;a href=&#34;https://twitter.com/rootkovska&#34;&gt;@rootkovska&lt;/a&gt;). She recently started working at &lt;a href=&#34;https://golem.network/&#34;&gt;golem&lt;/a&gt;, a shared-compute provider focused on security and privacy. She wrote &lt;a href=&#34;https://blog.invisiblethings.org/2018/06/11/graphene-ng.html&#34;&gt;an awesome blog post&lt;/a&gt; considering all the tradeoffs of a technology such as SGX. The post links to other posts where she really weighs the pros and cons of the technology, which is why I really respect her thoughts on the matter. The solution is pretty cool in that you can run docker containers inside the enclave. In my opinion, it’s better than the SCONE paper, which also runs containers, because it doesn’t do the crazy syscall toss outside the enclave. It’s more aligned with the HAVEN paper in that it keeps all the code inside the enclave. Her post is great; it really goes into detail on their thought process and what they designed their solution to prioritize.&lt;/p&gt;

&lt;p&gt;Considering SGX was not built as an execution environment, I think it will be interesting to see where Intel takes this technology in the future now that people are using it as such. It will also be interesting to see how they solve the problems with side-channel attacks. Computing is all about tradeoffs. I learned from experience with everything I’ve worked on that people will use it for things it was not built for. This happened a lot with Docker. It’s always fun to see the new ways people use what you build and then to iterate considering the new use cases.&lt;/p&gt;

&lt;p&gt;I value taking all contexts into consideration when thinking about a problem. I hope you all do the same. Hope you enjoyed my additional learnings and thoughts. Always be learning and open to new thoughts.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ll be giving a talk on SGX at QCon London the first week of March :) hope to see some of you there.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>LD_PRELOAD: The Hero We Need and Deserve</title>
                <link>https://blog.jessfraz.com/post/ld_preload/</link>
                <pubDate>Sun, 17 Feb 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/ld_preload/</guid>
                    <description>

&lt;p&gt;I’m a huge, HUGE, fan of &lt;code&gt;LD_PRELOAD&lt;/code&gt;… let me tell you… oh wait, it’s my blog, so I’m going to. Where do I begin…&lt;/p&gt;

&lt;p&gt;About three years ago, I wrote a blog post about the
&lt;a href=&#34;https://blog.jessfraz.com/post/top-10-favorite-ldflags/&#34;&gt;10 &lt;code&gt;LDFLAGS&lt;/code&gt; I love&lt;/a&gt;.
After writing the post, I realized I should have made the number odd because I think that is part
of BuzzFeed’s “click algorithm.” But more seriously, I realized just how many people on the internet you
can upset when you don’t include &lt;code&gt;LD_PRELOAD&lt;/code&gt; in your favorite &lt;code&gt;LDFLAGS&lt;/code&gt; post. I am going to take the time right
now to make one thing very clear, VERY CLEAR, listen closely:  &lt;code&gt;LD_PRELOAD&lt;/code&gt; IS NOT A FLAG.
It is an environment variable. Wake up sheeple! Phew!&lt;/p&gt;

&lt;p&gt;Now that’s out of the way, we can continue… I love &lt;code&gt;LD_PRELOAD&lt;/code&gt;. I love it so much I am devoting this
entire blog post to professing my undying love for it. So here we go…&lt;/p&gt;

&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;

&lt;p&gt;For those who don’t know what &lt;code&gt;LD_PRELOAD&lt;/code&gt; is: &lt;a href=&#34;https://xkcd.com/1053/&#34;&gt;TODAY IS YOUR LUCKY DAY!&lt;/a&gt;
&lt;code&gt;LD_PRELOAD&lt;/code&gt; lets you override symbols from shared libraries by supplying your own implementation of a function in a shared object that the dynamic linker loads first.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;LD_PRELOAD=/path/to/my/free.so /bin/mybinary&lt;/code&gt;, &lt;code&gt;/path/to/my/free.so&lt;/code&gt; is loaded
&lt;em&gt;before&lt;/em&gt; any other library, including libc. When &lt;code&gt;mybinary&lt;/code&gt; is executed, it uses your custom function for &lt;code&gt;free&lt;/code&gt;.
PRETTY FREAKING AWESOME RIGHT!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/kronk.gif&#34; alt=&#34;kronk&#34; /&gt;&lt;/p&gt;

&lt;p&gt;FEEL THE POWER! Okay, so moving on…&lt;/p&gt;

&lt;h2 id=&#34;fun-times-on-the-internet&#34;&gt;Fun Times on the Internet&lt;/h2&gt;

&lt;p&gt;One night, I’m just hanging around in my apartment, laying on my couch, and I think
“oh I’m going to ask the Internet what they’ve done with &lt;code&gt;LD_PRELOAD&lt;/code&gt;.&amp;rdquo; This is how most of my tweets start
for what it’s worth. So I asked…&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;yo internet nerds, tell me all the ways you&amp;#39;ve done dirty things with LD_PRELOAD&amp;hellip;. I need them&amp;hellip;. for&amp;hellip; science&amp;hellip;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1087468414707343362?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;This tweet blew up in THE BEST WAY! I got some really cool responses I will highlight below.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Not mine but my favorite: &lt;a href=&#34;https://t.co/zljcn70pmh&#34;&gt;https://t.co/zljcn70pmh&lt;/a&gt;&lt;/p&gt;&amp;mdash; ダデイさま (@leifwalsh) &lt;a href=&#34;https://twitter.com/leifwalsh/status/1087496833058914304?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;$ FORCE_PID=42 LD_PRELOAD=./getpid.so bash -c &amp;#39;echo $$&amp;#39;&lt;br&gt;42&lt;br&gt;&lt;br&gt;For forcing specific bad ssh key generation when the RNG was busted&amp;hellip;&lt;/p&gt;&amp;mdash; 𝙺𝚎𝚎𝚜 𝙲𝚘𝚘𝚔 (@kees_cook) &lt;a href=&#34;https://twitter.com/kees_cook/status/1094391729422123008?ref_src=twsrc%5Etfw&#34;&gt;February 10, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;i didn&amp;#39;t use this but dropbox recently stopped working on non-ext4 filesystems and there&amp;#39;s this LD_PRELOAD hack to make it work anyway &lt;a href=&#34;https://t.co/DqRL12FNMk&#34;&gt;https://t.co/DqRL12FNMk&lt;/a&gt;&lt;/p&gt;&amp;mdash; 🔎Julia Evans🔍 (@b0rk) &lt;a href=&#34;https://twitter.com/b0rk/status/1087478518534098945?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Intercept readline calls to add undo to any interpreter that uses readline&lt;a href=&#34;https://t.co/M44lDMaeFy&#34;&gt;https://t.co/M44lDMaeFy&lt;/a&gt;&lt;a href=&#34;https://t.co/aoeldkK4X6&#34;&gt;https://t.co/aoeldkK4X6&lt;/a&gt; &lt;a href=&#34;https://t.co/w84O715eQG&#34;&gt;pic.twitter.com/w84O715eQG&lt;/a&gt;&lt;/p&gt;&amp;mdash; Thomas Ballinger (@ballingt) &lt;a href=&#34;https://twitter.com/ballingt/status/1087473790227951616?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;We actually mention this in an academic paper! &lt;a href=&#34;https://t.co/qg5ac6vXx7&#34;&gt;https://t.co/qg5ac6vXx7&lt;/a&gt; We used LD_PRELOAD to interpose on the OnStar software modem audio interface.&lt;/p&gt;&amp;mdash; Karl (@supersat) &lt;a href=&#34;https://twitter.com/supersat/status/1087472112611282945?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I wrote a silly hack that let you mount an app’s objc runtime as a filesystem so you could easily browse the class hierarchy.  It could be inserted via dyld. Here is a screenshot of the Finder browsing the runtime. &lt;a href=&#34;https://t.co/zyYxSsGaoS&#34;&gt;https://t.co/zyYxSsGaoS&lt;/a&gt;&lt;/p&gt;&amp;mdash; Bill Bumgarner (@bbum) &lt;a href=&#34;https://twitter.com/bbum/status/1087556645473796096?ref_src=twsrc%5Etfw&#34;&gt;January 22, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;enabling rapid-fire railguns in quake3 rocket arena by hooking gettimeofday() via LD_PRELOAD, enable/disable by hooking strstr() and using console commands&lt;/p&gt;&amp;mdash; HD Moore (@hdmoore) &lt;a href=&#34;https://twitter.com/hdmoore/status/1087470884896628737?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I made a thing to disable SSL certificate verification in a bunch of popular applications/libraries 😈&lt;a href=&#34;https://t.co/jMWQtbl0Kb&#34;&gt;https://t.co/jMWQtbl0Kb&lt;/a&gt;&lt;/p&gt;&amp;mdash; Dаvіd Вucһаnаn (@David3141593) &lt;a href=&#34;https://twitter.com/David3141593/status/1087469585798959105?ref_src=twsrc%5Etfw&#34;&gt;January 21, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;This isn’t all of them, but isn’t the internet utterly awesome? You can poke through the thread
more and find ones you love as well. But let&amp;rsquo;s move on to some mad science&amp;hellip;&lt;/p&gt;

&lt;h2 id=&#34;science&#34;&gt;SCIENCE&lt;/h2&gt;

&lt;p&gt;No, not the &lt;a href=&#34;https://en.wikipedia.org/wiki/S.C.I.E.N.C.E.&#34;&gt;Incubus album&lt;/a&gt;…
but my science experiment that I did with &lt;code&gt;LD_PRELOAD&lt;/code&gt;. My friends, Greg (&lt;a href=&#34;https://twitter.com/grepory&#34;&gt;@grepory&lt;/a&gt;), Aditya (&lt;a href=&#34;https://twitter.com/chimeracoder&#34;&gt;@chimeracoder&lt;/a&gt;),
and I came up with this absolutely insane idea for &amp;ldquo;kernelless&amp;rdquo;. Yeah, it’s a joke making fun of all the other
“-less”s. But ours was special, m’kay. Greg even made a dope website for it, &lt;a href=&#34;http://kernelless.cloud/&#34;&gt;kernelless.cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So the way we were going to implement this in a mad science way would be as “Cloud Native Syscalls.”
Let me tell you about the “Cloud Native Syscalls”…&lt;/p&gt;

&lt;h2 id=&#34;cloud-native-syscalls&#34;&gt;Cloud Native Syscalls&lt;/h2&gt;

&lt;p&gt;The first part of the “Cloud Native Syscalls” architecture consists of a daemon on a cloud VM
which has a network endpoint accepting incoming syscalls and their arguments.
The daemon then performs these syscalls, almost in a code execution as a service type way.&lt;/p&gt;

&lt;p&gt;To use “Cloud Native Syscalls”, you run your binary with the library preloaded as follows:
&lt;code&gt;LD_PRELOAD=/path/to/my/cloudnativesyscalls.so /bin/ls&lt;/code&gt;. This ensures that all the syscalls made when you run &lt;code&gt;ls&lt;/code&gt;
on &lt;em&gt;your host&lt;/em&gt; are actually sent to the daemon described above and performed in the cloud.&lt;/p&gt;
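&lt;p&gt;A sketch of the client half, to show the shape of the joke. Everything here is invented: the daemon, the wire call, and the canned reply are hypothetical stand-ins, with &lt;code&gt;getpid&lt;/code&gt; standing in as the one intercepted syscall.&lt;/p&gt;

```c
// cloudnativesyscalls.c: sketch of the preloaded override that "performs"
// a syscall in the cloud instead of asking the local kernel.
#include <sys/types.h>
#include <unistd.h>

// Stand-in for the network hop. The real (joke) version would serialize
// the syscall name and arguments, send them to the daemon's endpoint on
// the cloud VM, and return its answer. Here it returns a canned value so
// the sketch is self-contained.
static long remote_syscall(const char *name) {
    (void)name;
    return 4242; // the hypothetical daemon's reply
}

// Every getpid() in the target binary is now Cloud Native (tm).
pid_t getpid(void) {
    return (pid_t)remote_syscall("getpid");
}
```

&lt;p&gt;Compile it into &lt;code&gt;cloudnativesyscalls.so&lt;/code&gt; the same way as any other preload shim, and your PID now lives in the cloud.&lt;/p&gt;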

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/nuts.gif&#34; alt=&#34;nuts&#34; /&gt;&lt;/p&gt;

&lt;p&gt;F’king nuts right… I know. We are working on our A-round don’t worry. It’s truly revolutionary.&lt;/p&gt;

&lt;p&gt;Anyways, that was our little science experiment. Hope you liked it, or at least enjoyed all the other people’s
fun hacks. :) Keep &lt;code&gt;LD_PRELOAD&lt;/code&gt;ing.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/everyday-im-ld-preloading.jpg&#34; alt=&#34;everyday-im-ld-preloading&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The New Golden Age of Building with Soul</title>
                <link>https://blog.jessfraz.com/post/new-golden-age-of-building-with-soul/</link>
                <pubDate>Wed, 13 Feb 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/new-golden-age-of-building-with-soul/</guid>
                    <description>

&lt;p&gt;From the &lt;a href=&#34;https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf&#34;&gt;Intel x86 Manual&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the mid-1960s, Intel cofounder and Chairman Emeritus Gordon Moore had this observation: “&amp;hellip; the number of transistors that would be incorporated on a silicon die would double every 18 months for the next several years.” Over the past three and half decades, this prediction known as “Moore&amp;rsquo;s Law” has continued to hold true.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Moore’s Law is coming up a lot lately in the context of coming to an end. It’s kind of been a running joke for quite some time, though, so I think there is still a bit of skepticism around claiming it’s ending. However, the end of Moore’s Law can mean a lot of different things for the future of computing.&lt;/p&gt;

&lt;h2 id=&#34;golden-age-of-garage-computer-builders&#34;&gt;Golden Age of Garage Computer Builders&lt;/h2&gt;

&lt;p&gt;Personally, I look back on the golden age of computers as the time when people were building the first personal computers in their garage. There is a certain whimsy to that time, fueled by a mix of hard work and passion for building something crazy with a very small team. In today’s age, at large companies, most engineers take jobs where they work on one teeny aspect of a machine or website or app. Sometimes they are not even aware of the larger goal or vision, just their own little world.&lt;/p&gt;

&lt;p&gt;Back in the garage computer building era (or so I will call it), a very small group of people aligned on a mission could create something bigger than themselves and have immense impact. This is more aligned with how startups work, in my opinion, in that small groups of people with the same end goal build something together.&lt;/p&gt;

&lt;h2 id=&#34;soul-and-passion&#34;&gt;Soul and Passion&lt;/h2&gt;

&lt;p&gt;Over the break I read &lt;a href=&#34;https://www.amazon.com/Soul-New-Machine-Tracy-Kidder-ebook/dp/B005HG4W9W&#34;&gt;The Soul of a New Machine&lt;/a&gt;, thanks &lt;a href=&#34;https://twitter.com/bcantrill&#34;&gt;@bcantrill&lt;/a&gt; for the recommendation. (He also wrote an &lt;a href=&#34;http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/&#34;&gt;amazing blog post&lt;/a&gt; on it.) In the book, a small team built an entire machine.&lt;/p&gt;

&lt;p&gt;The book really hit home for me on so many levels. The team wasn’t driven by power or greed, but by accomplishment and self-fulfillment. They put a part of themselves in the machine, thereby producing a machine with a soul.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;The chapter 15 of The Soul of a new Machine is so powerful. For the engineers it wasn’t about recognition it was about accomplishment and self-fulfillment. And putting a part of them in the machine. &lt;a href=&#34;https://t.co/Lr1T5OVamM&#34;&gt;pic.twitter.com/Lr1T5OVamM&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1090369010783408131?ref_src=twsrc%5Etfw&#34;&gt;January 29, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Not only did the team have a very strong bond, but it was built on trust. The team was made up of programmers with the utmost expertise and experience alongside brand-new programmers. I love this detail. One of the stories from the book is about how they considered creating a simulator for the machine so they could iterate more quickly. West, the most senior engineer, wrote it off as impossible in the given time, but one of the new programmers went ahead and built it. It’s amazing what a person can do when they don’t know something is impossible and are empowered to take on a task.&lt;/p&gt;

&lt;p&gt;I loved this book in the same way I love Halt and Catch Fire, a TV show set in the same era about building computers and gaming software. It’s amazing; if you haven’t seen it, I highly recommend it. Thanks &lt;a href=&#34;https://twitter.com/dynamicwebpaige&#34;&gt;@dynamicwebpaige&lt;/a&gt; for introducing me to it. It showcases all the same passion and idealism for building as The Soul of a New Machine.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I think there is a different class of programmer like those in The Soul of a New Machine &amp;amp; Halt and Catch Fire&amp;hellip; the idealists_dreamers? Those who build things w soul, value accomplishments &amp;amp; being a part of something bigger than themselves. I feel like we’ve lost some of that.&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1090389046881386499?ref_src=twsrc%5Etfw&#34;&gt;January 29, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I also love thinking about that era in computing because my grandpa was a computer programmer. After attending college, my mom and dad stayed with him for a bit. My mom likes to tell this story about how he had made his computer talk. It was something he had been working on for a long time at his office, but he was also working on it at home. She came to his house after work one day and as she walked in the door the computer said “Hi Debbie” and his face lit up.&lt;/p&gt;

&lt;p&gt;I love the passion of building and to me that time was the golden age of passionate building. So now you might be wondering where I’m going with this and how this fits in with Moore’s Law and today&amp;hellip;.&lt;/p&gt;

&lt;h2 id=&#34;new-golden-age&#34;&gt;New Golden Age&lt;/h2&gt;

&lt;p&gt;In &lt;a href=&#34;https://m-cacm.acm.org/magazines/2019/2/234352-a-new-golden-age-for-computer-architecture/fulltext&#34;&gt;their Turing lecture&lt;/a&gt;, Hennessy and Patterson call today’s age “A New Golden Age for Computer Architecture”.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The end of Dennard scaling and Moore&amp;rsquo;s Law and the deceleration of performance gains for standard microprocessors are not problems that must be solved but facts that, recognized, offer breathtaking opportunities.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I love this and I believe it. The changing circumstances of computing open up so many opportunities, and it will be awesome to see them unfold.&lt;/p&gt;

&lt;p&gt;I’m not going to play hand-wavy, armchair, “here is the future” with you all. Instead, I will give you a few quotes from &lt;a href=&#34;https://www.sigarch.org/whats-the-future-of-technology-scaling/&#34;&gt;an article on the ACM SIGARCH blog&lt;/a&gt; that I really loved, and let you come to your own conclusions and theories.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Prediction #1: Technology scaling will continue to deliver benefits to certain markets&lt;/p&gt;

&lt;p&gt;#2: Beloved computing abstractions will fail, opening new opportunities for innovation&lt;/p&gt;

&lt;p&gt;#3: Democratization of technology will result in a golden age for computer architecture”&lt;/p&gt;

&lt;p&gt;“By 2030, the rise of open source cores, IP, and CAD flows targeting these advanced nodes will mean that designing and fabricating complex chips will be possible by smaller players.”&lt;/p&gt;

&lt;p&gt;“Hardware startups will flourish for the reasons that the open source software ecosystem paired with commoditized cloud compute has unleashed software startups over the past decade.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thanks for reading my cheese ball post! I truly believe it’s a great time to be alive and a passionate builder!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Firmware and Hardware Rabbit Hole</title>
                <link>https://blog.jessfraz.com/post/the-firmware-rabbit-hole/</link>
                <pubDate>Tue, 12 Feb 2019 18:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-firmware-rabbit-hole/</guid>
                    <description>

&lt;p&gt;I started dipping into some firmware and hardware things during my vacation and unemployment, and I figured I would take you along on my journey as well.&lt;/p&gt;

&lt;h2 id=&#34;baseboard-management-controller&#34;&gt;Baseboard management controller&lt;/h2&gt;

&lt;p&gt;The first thing I dipped into was &lt;a href=&#34;https://github.com/openbmc/openbmc&#34;&gt;openbmc&lt;/a&gt;. This is pretty cool. At face value it has support for a lot of different boards. It uses IPMI (Intelligent Platform Management Interface) to monitor and operate the components of a computer. The IPMI interface has been around for a super long time. &lt;a href=&#34;https://www.dmtf.org/standards/redfish&#34;&gt;Redfish&lt;/a&gt; is kind of its successor: an HTTP API with a more modern, thoughtful approach to managing hardware in a datacenter. The Redfish standard doesn’t include every sensor that IPMI has, but it does allow implementers to add more sensor types.&lt;/p&gt;

&lt;p&gt;So I dug into the openbmc project a bit and tried to lick my wounds over dbus, seeing as that was what it was using. I thought, hmmm, I wonder if there are more projects like this&amp;hellip;&lt;/p&gt;

&lt;p&gt;It turns out there are! &lt;a href=&#34;https://github.com/u-root/u-bmc&#34;&gt;u-bmc&lt;/a&gt;, from the same folks that made &lt;a href=&#34;https://github.com/u-root/u-root&#34;&gt;u-root&lt;/a&gt;, seemed like a simpler, more opinionated solution. However, it currently supports only one board, although others seem planned. I thought it was a kinda neat and interesting detail that u-bmc uses gRPC instead of IPMI; it seems like a cool choice for modernizing, but I had some naive questions, so I headed to the internet for answers.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Anyone know what the memory overhead for using gRPC for this is&amp;hellip; I would think it’s not insignificant, or you’d want to use one of the “tiny grpc” replacements, or maybe something that didn’t reinvent its own HTTP server perhaps&amp;hellip;? &lt;a href=&#34;https://t.co/gIpW97r7Xw&#34;&gt;https://t.co/gIpW97r7Xw&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1092588927318249472?ref_src=twsrc%5Etfw&#34;&gt;February 5, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;That thread is awesome. Thanks to some super awesome and smart friends from the internet I learned a lot more about these two projects. I will let you read the thread and form opinions of your own but there’s a lot of experience and knowledge in there.&lt;/p&gt;

&lt;p&gt;Currently, I’m feeling a bit nerd sniped by the idea of a BMC implemented in Rust to solve some of the problems mentioned in the thread. A girl can dream right? :)&lt;/p&gt;

&lt;p&gt;That was a bit of a rabbit hole so I decided to move on, mostly because of ADHD and my ever growing curiosity about all things computers.&lt;/p&gt;

&lt;h2 id=&#34;intel-management-engine&#34;&gt;Intel Management Engine&lt;/h2&gt;

&lt;p&gt;I started looking into the Intel Management Engine&amp;hellip; boy does it do a lot of stuff.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;[enters weird rabbit hole]&lt;br&gt;“wow there’s a lot of tunnels in here” &lt;a href=&#34;https://t.co/oHslyJ0TuF&#34;&gt;pic.twitter.com/oHslyJ0TuF&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1092627483537551360?ref_src=twsrc%5Etfw&#34;&gt;February 5, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;The craziest part I found was all the security vulnerabilities and theories of &lt;a href=&#34;https://news.softpedia.com/news/intel-x86-cpus-come-with-a-secret-backdoor-that-nobody-can-touch-or-disable-505347.shtml&#34;&gt;backdoors&lt;/a&gt;. I live for researching things like this, so I was intrigued. Intel gave vendors a way to disable the ME, and some have; &lt;a href=&#34;https://www.heise.de/newsticker/meldung/Dell-schaltet-Intel-Management-Engine-in-Spezial-Notebooks-ab-3909860.html&#34;&gt;Dell even sells computers with it disabled for government contracts&lt;/a&gt;. I stumbled across this super dope laptop company, &lt;a href=&#34;https://puri.sm/&#34;&gt;Purism&lt;/a&gt; (thanks &lt;a href=&#34;https://twitter.com/bcantrill&#34;&gt;@bcantrill&lt;/a&gt;), that sells laptops using &lt;a href=&#34;https://www.coreboot.org/&#34;&gt;coreboot&lt;/a&gt; with the ME memory erased. Their approach and blog are super neat and interesting. Also, coreboot looks just lovely; I need to play around with it more.&lt;/p&gt;

&lt;h2 id=&#34;intermission&#34;&gt;Intermission&lt;/h2&gt;

&lt;p&gt;So, in between bouncing around reading about various forms of firmware and how shitty and sketchy closed-source firmware is, I read the book Bad Blood. The book details the absolute cluster-fuck that was the startup Theranos, so everything from here on out is with “paranoid as fuck” goggles on because I was shook.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Reading Bad Blood &lt;a href=&#34;https://t.co/C1SN7CF91B&#34;&gt;pic.twitter.com/C1SN7CF91B&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1090912858597150720?ref_src=twsrc%5Etfw&#34;&gt;January 31, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Keep that in mind as we head into the next section.&lt;/p&gt;

&lt;h2 id=&#34;sgx&#34;&gt;SGX&lt;/h2&gt;

&lt;p&gt;Intel’s SGX (Software Guard Extension) is just utterly bananas. I went down this tunnel next. Oh it’s a doozy of a tunnel let me tell you.&lt;/p&gt;

&lt;p&gt;In short, SGX provides what is known as a Secure Enclave. You can put keys in here for safe keeping because the memory is isolated and encrypted from everything else in the computer. (Or so they say, but we will get to that.) This creates a way to store data that you don’t want the host computer user to know about. Some cloud providers are using SGX as a way for customers to use the cloud without trusting the cloud provider, only trusting the hardware provider, in this case Intel.&lt;/p&gt;

&lt;h3 id=&#34;existing-knowledge&#34;&gt;Existing Knowledge&lt;/h3&gt;

&lt;p&gt;I had done a &lt;a href=&#34;https://paperswelove.org/2017/video/jessie-frazelle-scone-secure-linux-containers-with-intel-sgx/&#34;&gt;Papers We Love talk&lt;/a&gt; on the &lt;a href=&#34;https://www.usenix.org/system/files/conference/osdi16/osdi16-arnautov.pdf&#34;&gt;SCONE paper&lt;/a&gt; over a year ago. This paper was an experiment in running docker containers in an enclave. You can watch the talk, but the short version is I wasn’t really sold. While it was a technological feat, it was slow and required a bunch of code. Basically, you either need to reinvent all of computing inside the enclave (the HAVEN paper approach, put bluntly), or, as they did in the SCONE paper, run syscalls outside the enclave. If you toss syscalls outside the enclave, you need to deal with encrypting all of the I/O and a bunch of other surface area, since you are now running both inside and outside the enclave. In that case, your boundary is more like a blurred line.&lt;/p&gt;

&lt;p&gt;In my opinion, which I’m sure the readers on Hacker News will call me all sorts of names for, I question what the point is if you need to trust so much base code just to run a damn thing in the enclave, and when you do run your process, it’s slow. AND it won’t even protect you from side-channel or timing attacks.&lt;/p&gt;

&lt;p&gt;Anyways, that was my background knowledge going into this rabbit hole once again. But there I was going back for round two thinking I wonder wtf is up in the SGX world&amp;hellip;. TURNS OUT A LOT.&lt;/p&gt;

&lt;h3 id=&#34;round-two&#34;&gt;Round Two&lt;/h3&gt;

&lt;p&gt;Thanks to the awesome internet, I stumbled upon a 118-page rundown of the technology.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Here for this shade, thanks &lt;a href=&#34;https://twitter.com/_msw_?ref_src=twsrc%5Etfw&#34;&gt;@&lt;em&gt;msw&lt;/em&gt;&lt;/a&gt; for the link &lt;a href=&#34;https://t.co/WJtgf9vZBc&#34;&gt;https://t.co/WJtgf9vZBc&lt;/a&gt; &lt;a href=&#34;https://t.co/DaoZQunloJ&#34;&gt;pic.twitter.com/DaoZQunloJ&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle 👩🏼‍🚀 (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/1093735827719434240?ref_src=twsrc%5Etfw&#34;&gt;February 8, 2019&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;This is a great paper; if you really want to learn about the internals of not only SGX but computer architecture in general, I strongly suggest reading it. It’s wonderfully written and very detail-oriented.&lt;/p&gt;

&lt;p&gt;The paper covers the second generation of the technology and outlines the side-channel attacks that make the hardware insecure. The interesting thing I took away from the paper, other than a fuck ton of nuance, was the licensing of SGX.&lt;/p&gt;

&lt;h3 id=&#34;launch-control&#34;&gt;Launch Control&lt;/h3&gt;

&lt;p&gt;SGX has this feature called “launch control”. Launch control is the gatekeeper for launching enclaves: it requires an Intel license and provides launch tokens for launching other enclaves. You use what’s called a “launch enclave” to create a “launch token”. It wasn’t really documented at the time, and the paper makes interesting insights about it. While from the outside SGX is a feature to secure computing, perhaps it also has this hidden feature of securing the market for Intel?&lt;/p&gt;

&lt;p&gt;Well Intel responded and made “&lt;a href=&#34;https://github.com/intel/linux-sgx/blob/master/psw/ae/ref_le/ref_le.md&#34;&gt;Flexible Launch Control&lt;/a&gt;.” This allows a different party, other than Intel, to handle the launch control process. That’s nice, though it seems like a shit ton of work, and it got me thinking about the UX around it. Cloud providers couldn’t do launch control for people, since that defeats the purpose of trusting only the hardware vendor and not the cloud. So this is up to the customer, and in my opinion that seems like a lot to land on them. It also seems like the cloud provider would somehow have to enable this feature&amp;hellip;&lt;/p&gt;

&lt;p&gt;Honestly, I dunno, I&amp;rsquo;m not an expert here.&lt;/p&gt;

&lt;p&gt;Okay so I was basically over launch control at this point and ready to go deeper. Thanks twitter for all the paper links :)&lt;/p&gt;

&lt;h3 id=&#34;attacks&#34;&gt;Attacks&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-van_bulck.pdf&#34;&gt;Foreshadow&lt;/a&gt; is fucking nuts. It uses the same type of attack as Meltdown, but the fixes for Meltdown didn’t prevent it since KPTI (kernel page table isolation) doesn’t cover the enclave address space. In the paper they steal secrets from inside an enclave, which honestly would be the end game for a lot of hackers. The authors take it further by getting the private keys for the enclave and creating fake enclaves and attestations that appear perfectly fine. Wow!&lt;/p&gt;

&lt;p&gt;But that’s not all. &lt;a href=&#34;https://foreshadowattack.eu/foreshadow-NG.pdf&#34;&gt;Foreshadow-NG&lt;/a&gt; took it a step further, from the paper:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At a high level, whereas previous generation Meltdown-type attacks are limited to reading privileged supervisor data within the attacker’s virtual address space, Foreshadow-NG attacks completely bypass the virtual memory abstraction by directly exposing cached physical memory contents to unprivileged applications and guest virtual machines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With Foreshadow-NG, the hacker can access all cached memory, not just their own virtual memory. Bananas&amp;hellip; right. But there’s more&amp;hellip;&lt;/p&gt;

&lt;p&gt;Do you need a new feature set for your malware? Because you can &lt;a href=&#34;https://arxiv.org/abs/1702.08719&#34;&gt;use SGX to conceal cache attacks&lt;/a&gt; and &lt;a href=&#34;https://arxiv.org/abs/1703.06986&#34;&gt;amplify them&lt;/a&gt;!!!&lt;/p&gt;

&lt;p&gt;Here’s a quote from the second paper linked above:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Our attack tool named CacheZoom is able to virtually track all memory accesses of SGX enclaves with high spatial and temporal precision.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If enclave malware interests you, there’s another paper that went out just yesterday &lt;a href=&#34;https://arxiv.org/abs/1902.03256&#34;&gt;detailing that&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I am forgetting a bunch of other details and papers but this should paint a pretty good picture of the state of the SGX world.&lt;/p&gt;

&lt;p&gt;I wrote more about SGX in my &lt;a href=&#34;https://blog.jessfraz.com/post/reflections-on-sgx/&#34;&gt;Reflections on SGX post&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;thank-you&#34;&gt;Thank You&lt;/h2&gt;

&lt;p&gt;Thank you to everyone for linking me to awesome papers and engaging in my nerdery with these things. I’m not done at all with this rabbit hole but I thought I’d sum it up for now.&lt;/p&gt;

&lt;p&gt;Shout out to &lt;a href=&#34;https://twitter.com/_msw_&#34;&gt;@&lt;em&gt;msw&lt;/em&gt;&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/bcantrill&#34;&gt;@bcantrill&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/anliguori&#34;&gt;@anliguori&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/iancoldwater&#34;&gt;@iancoldwater&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/hugelgupf&#34;&gt;@hugelgupf&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/bascule&#34;&gt;@bascule&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/kc8apf&#34;&gt;@kc8apf&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/nasamuffin&#34;&gt;@nasamuffin&lt;/a&gt;, and everyone else I apologize if I forgot.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Secret Design Docs: Multi-Tenant Orchestrator</title>
                <link>https://blog.jessfraz.com/post/secret-design-docs-multi-tenant-orchestrator/</link>
                <pubDate>Tue, 12 Feb 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/secret-design-docs-multi-tenant-orchestrator/</guid>
                    <description>

&lt;p&gt;I thought it would be fun to start a blog post series containing design docs from my personal archive that never saw the light of day. This will be the first of the series. It contains what I thought about in detail for a general multi-tenant secured container orchestrator. The use case would be for running third party code securely isolated from each other. If you would like to see this in google doc form it also lives &lt;a href=&#34;https://docs.google.com/document/d/1qDcDuahakVWSQaJR5tixpNTUFbHIF0DNxFKfI_gwF0I&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;requirements&#34;&gt;Requirements&lt;/h2&gt;

&lt;h3 id=&#34;base&#34;&gt;Base&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;API to run docker images in such a way that each process is isolated entirely from all the others.&lt;/li&gt;
&lt;li&gt;Abusive actions can be terminated immediately.&lt;/li&gt;
&lt;li&gt;The agent should be auto-updateable to handle security issues as they arise.&lt;/li&gt;
&lt;li&gt;Ability to use the entire syscall interface for the processes being run.&lt;/li&gt;
&lt;li&gt;This all assumes that you have some sort of software and hardware level root of trust you can use to ensure security as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&#34;other-features&#34;&gt;Other Features&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Disallow and kill any and all bitcoin miners using the infrastructure (via BPF tracers)&lt;/li&gt;
&lt;li&gt;Firewall off any existing network endpoints&lt;/li&gt;
&lt;li&gt;Firewall off the container running the process from everything around it on the local links and any reachable internal IP&lt;/li&gt;
&lt;li&gt;If one layer of isolation is compromised, rely on another layer of isolation entirely. If two layers are compromised then we at least tried our best&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;design&#34;&gt;Design&lt;/h2&gt;

&lt;p&gt;The host OS and up needs to be secure.&lt;/p&gt;

&lt;h3 id=&#34;overview&#34;&gt;Overview&lt;/h3&gt;

&lt;p&gt;We require the following per container running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Block I/O cgroups so that the disk does not have noisy neighbors&lt;/li&gt;
&lt;li&gt;CPU limit&lt;/li&gt;
&lt;li&gt;Memory limit&lt;/li&gt;
&lt;li&gt;Network/bandwidth limiting&lt;/li&gt;
&lt;li&gt;Isolated network from everything else on the network (BPF or iptables)&lt;/li&gt;
&lt;/ul&gt;
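
&lt;p&gt;As a rough sketch of those per-container limits using plain Docker flags (dry-run by default; the image name &lt;code&gt;tenant-image&lt;/code&gt;, the device path, and the numbers are all hypothetical, and real bandwidth shaping would still need &lt;code&gt;tc&lt;/code&gt; or BPF on the host):&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# Dry-run by default so nothing actually starts; drop the "echo" to run for real.
DOCKER=${DOCKER:-"echo docker"}

# Hypothetical limits for one tenant container.
cmd=$($DOCKER run -d --read-only \
  --cpus 1.5 \
  --memory 512m --memory-swap 512m \
  --device-read-bps /dev/sda:50mb \
  --device-write-bps /dev/sda:50mb \
  --pids-limit 256 \
  --network none \
  tenant-image)
echo "$cmd"
```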

&lt;h3 id=&#34;host-os&#34;&gt;Host OS&lt;/h3&gt;

&lt;p&gt;The host OS should be a reduced operating system, minimal distribution (though possibly shared with the OS used inside containers). This is for reasons of security in locking down the available weaknesses in the host environment and lessening the control plane attack surface.&lt;/p&gt;

&lt;h4 id=&#34;operating-systems&#34;&gt;Operating Systems&lt;/h4&gt;

&lt;p&gt;Examples of these Operating Systems include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CoreOS Container Linux&lt;/li&gt;
&lt;li&gt;Container Optimized OS&lt;/li&gt;
&lt;li&gt;Intel Clear Linux&lt;/li&gt;
&lt;li&gt;LinuxKit&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&#34;features&#34;&gt;Features&lt;/h4&gt;

&lt;p&gt;CoreOS Container Linux and Container Optimized OS both have the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verified boot&lt;/li&gt;
&lt;li&gt;Read-only /usr

&lt;ul&gt;
&lt;li&gt;Container Optimized OS has root filesystem (&lt;code&gt;/&lt;/code&gt;) mounted as read-only with some portions of it re-mounted as writable, as follows:&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/tmp&lt;/code&gt;, &lt;code&gt;/run&lt;/code&gt;, &lt;code&gt;/media&lt;/code&gt;, &lt;code&gt;/mnt/disks&lt;/code&gt; and &lt;code&gt;/var/lib/cloud&lt;/code&gt; are all mounted using tmpfs and, while they are writable, their contents are not preserved between reboots.&lt;/li&gt;
&lt;li&gt;Directories &lt;code&gt;/mnt/stateful/partition&lt;/code&gt;, &lt;code&gt;/var&lt;/code&gt; and &lt;code&gt;/home&lt;/code&gt; are mounted from a stateful disk partition, which means these locations can be used to store data that persists across reboots. For example, Docker&amp;rsquo;s working directory &lt;code&gt;/var/lib/docker&lt;/code&gt; is stateful across reboots.&lt;/li&gt;
&lt;li&gt;Among the writable locations, only &lt;code&gt;/var/lib/docker&lt;/code&gt; and &lt;code&gt;/var/lib/cloud&lt;/code&gt; are mounted as &amp;ldquo;executable&amp;rdquo; (i.e. without the noexec mount flag)&lt;/li&gt;
&lt;li&gt;CoreOS Container Linux has the root filesystem (&lt;code&gt;/&lt;/code&gt;) mounted as read-write and &lt;code&gt;/usr&lt;/code&gt; read-only.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of the operating systems allow seamless upgrades for security issues.&lt;/p&gt;

&lt;h3 id=&#34;container-runtime&#34;&gt;Container Runtime&lt;/h3&gt;

&lt;p&gt;The container runtime should be a hypervisor to make sure that user configurations of Linux containers do not diminish the security of the cluster.&lt;/p&gt;

&lt;h4 id=&#34;why-not-containers&#34;&gt;Why not containers?&lt;/h4&gt;

&lt;p&gt;It should be said that it is possible to have multi-tenancy with containers alone, as proven by &lt;a href=&#34;https://contained.af&#34;&gt;contained.af&lt;/a&gt;, which no one has managed to break out of.&lt;/p&gt;

&lt;p&gt;To be allowed to use the entire syscall interface though (&lt;a href=&#34;https://queue.acm.org/detail.cfm?id=3301253&#34;&gt;my ACM Queue Research for Practice article&lt;/a&gt;), &lt;a href=&#34;https://firecracker-microvm.github.io/&#34;&gt;Firecracker&lt;/a&gt; seems like the right fit.&lt;/p&gt;

&lt;p&gt;Just using containerd out of the box as a base and building on that should be perfect :)&lt;/p&gt;

&lt;h3 id=&#34;network&#34;&gt;Network&lt;/h3&gt;

&lt;p&gt;The network should be locked down by default with a deny all policy for ingress and egress. This will create a form of security that makes sure all networking between pods or to the rest of the world is explicit.&lt;/p&gt;

&lt;p&gt;This could be done with iptables or directly with BPF (which in my opinion is way more clean).&lt;/p&gt;
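
&lt;p&gt;A deny-all starting point with iptables might look like the following dry-run sketch (the egress DNS rule is a hypothetical example of an explicit allowance, and the resolver IP is made up):&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# Dry-run by default; drop the "echo" to actually apply the rules (needs root).
IPT=${IPT:-"echo iptables"}

rules=$(
  # Default deny everything, then open only what a pod explicitly needs.
  $IPT -P INPUT DROP
  $IPT -P OUTPUT DROP
  $IPT -P FORWARD DROP
  # Hypothetical explicit allowance: egress DNS to one internal resolver.
  $IPT -A OUTPUT -p udp -d 10.0.0.2 --dport 53 -j ACCEPT
)
echo "$rules"
```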

&lt;h3 id=&#34;dns&#34;&gt;DNS&lt;/h3&gt;

&lt;p&gt;Do not allow any inter-cluster DNS.&lt;/p&gt;

&lt;h3 id=&#34;no-scheduling-on-master-and-system-nodes&#34;&gt;No Scheduling on Master and System Nodes&lt;/h3&gt;

&lt;p&gt;Make sure that the master and system nodes in the cluster cannot be scheduled on.&lt;/p&gt;

&lt;p&gt;This allows a separation of concerns from system processes to anything else.&lt;/p&gt;

&lt;h4 id=&#34;the-scheduler&#34;&gt;The Scheduler&lt;/h4&gt;

&lt;p&gt;The scheduler should &lt;strong&gt;not&lt;/strong&gt; do bin packing. I have seen this fail in a lot of scenarios with transient workloads, where the first few nodes get burned out while all the other nodes sit unused, because the workloads are constantly completing and freeing up resources on those first few nodes (in the case of batch jobs).&lt;/p&gt;

&lt;p&gt;There is prior knowledge in the &lt;a href=&#34;https://github.com/kubernetes-sigs/kube-batch&#34;&gt;kube-batch scheduler&lt;/a&gt;. It is built on years of experience with HPC clusters at IBM. We can use the same type of logic. It is meant more for batch jobs though, so if we plan on supporting long-term applications we would need to modify it.&lt;/p&gt;

&lt;p&gt;We should also account for proximity to the docker image being pulled. The largest time constraint on running a container is pulling the image, so let’s optimize for making that as short as possible.&lt;/p&gt;
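
&lt;p&gt;As a toy sketch of that placement policy (the node records and numbers are hypothetical): prefer nodes that already have the image cached, otherwise spread to the node with the most free CPU instead of bin packing.&lt;/p&gt;

```shell
#!/bin/bash
set -eu

# Node records are "name free_cpus has_image" (has_image is 1 if the docker
# image is already cached there). Sort by image locality first, then by free
# CPU descending, so we spread load rather than bin-pack.
best_node() {
  sort -k3,3nr -k2,2nr | head -n1 | cut -d' ' -f1
}

nodes='node-a 2 0
node-b 8 0
node-c 4 1'

echo "$nodes" | best_node   # picks node-c: image already cached
```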

&lt;p&gt;If we are running on bare metal we need to account for power management, BIOS updates, hardware failures and more. These are all things the orchestration tools of today completely ignore.&lt;/p&gt;

&lt;h3 id=&#34;resource-constraints&#34;&gt;Resource Constraints&lt;/h3&gt;

&lt;p&gt;Manage resources and set limits with cgroups.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disk IO&lt;/li&gt;
&lt;li&gt;Network Bandwidth&lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;/ul&gt;
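
&lt;p&gt;The raw cgroup v2 knobs behind those limits look roughly like this (a dry-run sketch: in production the root would be &lt;code&gt;/sys/fs/cgroup&lt;/code&gt; and the writes need root; here it defaults to a temp dir, and the group name and numbers are hypothetical):&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# In production this would be /sys/fs/cgroup (cgroup v2 mount point); default
# to a temp dir so this sketch is a dry run.
CGROOT=${CGROOT:-$(mktemp -d)}
CG="$CGROOT/tenant-1234"
mkdir -p "$CG"

echo "150000 100000" > "$CG/cpu.max"    # 1.5 CPUs: quota/period in usec
echo "536870912"     > "$CG/memory.max" # 512 MiB hard memory limit
echo "8:0 rbps=52428800 wbps=52428800" > "$CG/io.max" # 50 MB/s each way on disk 8:0
# Network bandwidth is not a cgroup knob; that needs tc or eBPF on the host.
```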

&lt;h3 id=&#34;preventing-miners&#34;&gt;Preventing Miners&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CPU tracers with eBPF: monitor CPU usage; if it&amp;rsquo;s not fluctuating it might be a miner, since most other processes fluctuate&lt;/li&gt;
&lt;li&gt;Binary tracers: look for binaries/processes with certain names; miners can rename themselves, but this blocks the lazy ones&lt;/li&gt;
&lt;li&gt;Network tracers: look for processes reaching out to known miner endpoints&lt;/li&gt;
&lt;/ul&gt;
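
&lt;p&gt;The binary-tracer bullet is the simplest to sketch; here is a pure-shell name check (the denylist is hypothetical, and in the real system the names would come from &lt;code&gt;ps&lt;/code&gt; or an eBPF exec tracer rather than a literal):&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical denylist of well-known miner binary names. Catches only the
# lazy miners; renamed ones need the CPU or network tracers.
MINER_RE='xmrig|minerd|cpuminer|cgminer|ethminer'

looks_like_miner() {
  echo "$1" | grep -Eq "$MINER_RE"
}

# In the real system, feed this from the process table, e.g.:
#   ps -eo comm= | while read -r comm; do looks_like_miner "$comm" && ...; done
if looks_like_miner "xmrig"; then echo "kill it"; fi
```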

&lt;h2 id=&#34;other&#34;&gt;Other&lt;/h2&gt;

&lt;h3 id=&#34;why-not-kubernetes&#34;&gt;Why not kubernetes?&lt;/h3&gt;

&lt;p&gt;I’m super pragmatic about these things and don’t want to reinvent the world for nothing, but I have now seen this go terribly wrong, as in people accidentally turning off firewalls&amp;hellip; And I don’t want something that allows arbitrary code execution to have only one layer of security, which someone might inadvertently turn off.&lt;/p&gt;

&lt;p&gt;We don’t need 90% of the features of kubernetes.&lt;/p&gt;

&lt;p&gt;Kubernetes is hard to secure… there are a lot of components, there is no isolation around etcd, and the kubelet-to-apiserver communication cannot be isolated either.&lt;/p&gt;

&lt;p&gt;I wrote &lt;a href=&#34;https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/&#34;&gt;a blog post on hard multi-tenancy in Kubernetes&lt;/a&gt; and we are a long way off from securing it. It’s too complex and has too many third-party drivers. All in all, the surface area is just too big and we don’t need the whole feature set anyways.&lt;/p&gt;

&lt;p&gt;By keeping our implementation simpler it is easier to keep track of the components&amp;rsquo; communication and ensure it is secure. The surface area is WAYYY smaller. The only downside is we lose the operational knowledge of k8s, but the concepts and patterns are the same.&lt;/p&gt;

&lt;p&gt;Kubernetes clusters top out around 5000 nodes, and OpenAI hit a lot of issues even at 2500: &lt;a href=&#34;https://blog.openai.com/scaling-kubernetes-to-2500-nodes/&#34;&gt;blog.openai.com/scaling-kubernetes-to-2500-nodes/&lt;/a&gt;. We might need a different key-value store and multiple clusters. And, like I noted above, I would not be confident considering it “secure”.&lt;/p&gt;

&lt;p&gt;Kubernetes will by default schedule at most 110 pods per node. You can change this, but it is also important to note that the default scheduler in kubernetes is not very resource-aware and we would have to fix that as well. See “&lt;a href=&#34;#the-scheduler&#34;&gt;The Scheduler&lt;/a&gt;” above: the first few nodes in a cluster get burned through quickly due to the logic of the default scheduler.&lt;/p&gt;

&lt;p&gt;Even Google doesn’t use Kubernetes internally to schedule VMs, that is a whole separate thing.&lt;/p&gt;

&lt;p&gt;Kubernetes inserts a bunch of extra env variables into the containers we would have to take care of as well… as seen here: &lt;a href=&#34;https://docs.google.com/document/d/1PjlsBmZw6Jb3XZeVyZ0781m6PV7-nSUvQrwObkvz7jg/edit&#34;&gt;Kubernetes Hard Multi-Tenancy Design Doc&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;what-do-we-do-if-there-is-a-kernel-0day-that-effects-the-isolation&#34;&gt;What do we do if there is a kernel 0day that affects the isolation?&lt;/h3&gt;

&lt;p&gt;For one, update the kernel, but if that is not possible we can trap the vulnerable kernel function using eBPF and kill any container trying to exploit the vulnerability. This has a trade-off of jobs failing, but we can try to get as close as possible to no false positives.&lt;/p&gt;

&lt;p&gt;This assumes we have systems in place to continually build kernels and apply patches.&lt;/p&gt;
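
&lt;p&gt;One way to sketch that trap is with bpftrace (a sketch only: &lt;code&gt;some_vulnerable_fn&lt;/code&gt; is a placeholder for the vulnerable kernel symbol, and actually arming it requires root, bpftrace installed, and the &lt;code&gt;--unsafe&lt;/code&gt; flag for &lt;code&gt;signal()&lt;/code&gt;; here the program is only printed):&lt;/p&gt;

```shell
#!/bin/bash
set -euo pipefail

# Placeholder for the vulnerable kernel symbol we want to trap.
VULN_FN=${VULN_FN:-some_vulnerable_fn}

# bpftrace program: SIGKILL any task that enters the vulnerable function.
PROG="kprobe:$VULN_FN {
  printf(\"killing pid %d (%s)\\n\", pid, comm);
  signal(9);
}"

# To arm it for real (root, bpftrace installed):
#   bpftrace --unsafe -e "$PROG"
echo "$PROG"
```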

&lt;h3 id=&#34;how-secure-is-this&#34;&gt;How secure is this?&lt;/h3&gt;

&lt;p&gt;Well let’s think about the threat model. Mostly it would be someone attacking our infrastructure itself so we should make sure all these servers are isolated on the network from the rest of the stack.&lt;/p&gt;

&lt;p&gt;The next threat would be to the users&amp;rsquo; code and secrets that we are running. Breaking out of a container, in the event of a container runtime bug, would leave the hacker still inside the Firecracker VM, so they would also need to break out of the VM.&lt;/p&gt;

&lt;h4 id=&#34;monitoring-monitoring-monitoring&#34;&gt;Monitoring, monitoring, monitoring.&lt;/h4&gt;

&lt;p&gt;We should detect, using eBPF or otherwise, any rogue process on the host that is not one of our containers or our agents/infrastructure, and kill/alert immediately.&lt;/p&gt;

&lt;p&gt;If any file outside the scope of a given container is touched, the container should be killed and alerted on.&lt;/p&gt;

&lt;p&gt;Additionally, we can even hide the fact that it is running in a specific container runtime, so attackers have less knowledge of the environment, unless of course they read this doc.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>For the Love of Pipes</title>
                <link>https://blog.jessfraz.com/post/for-the-love-of-pipes/</link>
                <pubDate>Mon, 21 Jan 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/for-the-love-of-pipes/</guid>
                    <description>&lt;p&gt;My top used shell command is &lt;code&gt;|&lt;/code&gt;. This is called a pipe.&lt;/p&gt;

&lt;p&gt;In brief, the &lt;code&gt;|&lt;/code&gt; allows for the output of one program (on the left) to become
the input of another program (on the right). It is a way of connecting two
commands together.&lt;/p&gt;

&lt;p&gt;For example, if I were to run the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;echo &amp;quot;hello&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I get the output &lt;code&gt;hello&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But if I run:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;echo &amp;quot;hello&amp;quot; | figlet
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;figlet&lt;/code&gt; program changes the letters in &lt;code&gt;hello&lt;/code&gt; to look all bubbly and
cartoony.&lt;/p&gt;

&lt;p&gt;This is a really blunt way of describing something that, in my
opinion, is brilliant software design, but I will get into that in a second.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s go back to the origin of pipes.&lt;/p&gt;

&lt;p&gt;According to &lt;a href=&#34;http://doc.cat-v.org/unix/pipes/&#34;&gt;doc.cat-v.org/unix/pipes/&lt;/a&gt;, the
origin of pipes came long before Unix. Pipes can be traced back to this note from
Doug McIlroy in 1964:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;          - 10 -
    Summary--what&#39;s most important.

To put my strongest concerns into a nutshell:

1. We should have some ways of coupling programs like
garden hose--screw in another segment when it becomes
necessary to massage data in another way.
This is the way of IO also.

2. Our loader should be able to do link-loading and
controlled establishment.

3. Our library filing scheme should allow for rather
general indexing, responsibility, generations, data path
switching.

4. It should be possible to get private system components
(all routines are system components) for buggering around with.

                                                M. D. McIlroy
                                                October 11, 1964 
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The Unix philosophy is documented by Doug McIlroy as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make each program do one thing well. To do a new job, build afresh rather
than complicate old programs by adding new &amp;ldquo;features&amp;rdquo;.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Expect the output of every program to become the input to another,
as yet unknown, program. Don&amp;rsquo;t clutter output with extraneous information.
Avoid stringently columnar or binary input formats.
Don&amp;rsquo;t insist on interactive input.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Design and build software, even operating systems, to be tried early,
ideally within weeks.
Don&amp;rsquo;t hesitate to throw away the clumsy parts and rebuild them.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Use tools in preference to unskilled help to lighten a programming task,
even if you have to detour to build the tools and expect to throw some of
them out after you&amp;rsquo;ve finished using them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;From the &lt;a href=&#34;http://emulator.pdp-11.org.ru/misc/1978.07_-_Bell_System_Technical_Journal.pdf&#34;&gt;Bell Systems Technical Journal&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What I love about Unix is the philosophy of &amp;ldquo;do one thing well&amp;rdquo; and &amp;ldquo;expect the output of every
program to become the input to another&amp;rdquo;. This
philosophy is built on the use of tools. These tools can be used separately or
combined to get a job done. This is in stark contrast to monolithic programs that do
everything or one-off programs used to solve a specific problem.&lt;/p&gt;

&lt;p&gt;System programs and commands like &lt;code&gt;echo&lt;/code&gt;, which we saw above, output information to your terminal by
default. For example, &lt;code&gt;cat&lt;/code&gt; will &amp;ldquo;concatenate&amp;rdquo; (its namesake)
files and print the result to your terminal.
While reading &lt;a href=&#34;http://harmful.cat-v.org/cat-v/unix_prog_design.pdf&#34;&gt;Program design in Unix&lt;/a&gt;,
I realized that printing the output of the tool to the user&amp;rsquo;s terminal was actually the
special case.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Perhaps surprisingly, in practice it turns
out that the special case is the main use of the program.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When a user redirects the output of &lt;code&gt;cat&lt;/code&gt; via a &lt;code&gt;|&lt;/code&gt; to some other program,
&lt;code&gt;cat&lt;/code&gt; becomes so much more than what
the original author intended. This is one of the most brilliant design
patterns, in my opinion. For one, programs being simple and doing one thing
well makes them easy to grok. The beautiful part, though, is the fact that in
combination with an operator like
&lt;code&gt;|&lt;/code&gt; the program becomes one step in a much larger plan. The original author of
&lt;code&gt;cat&lt;/code&gt; does not even need to know about the larger plan. That is the beauty of
the &lt;code&gt;|&lt;/code&gt;: it allows for solving problems by combining small,
simple programs together.&lt;/p&gt;
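
&lt;p&gt;As a tiny illustration of that composition, where each stage does one thing and the pipes supply the larger plan:&lt;/p&gt;

```shell
#!/bin/bash
set -eu

# Count distinct words, most frequent first: four single-purpose programs
# composed with pipes into something none of them does alone.
out=$(printf 'foo bar foo\nbaz foo bar\n' |
  tr -s ' ' '\n' |  # one word per line
  sort |            # group duplicates together
  uniq -c |         # count each run
  sort -rn)         # most frequent first
echo "$out"
```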

&lt;p&gt;I love software design that enables creativity, values simplicity, and doesn&amp;rsquo;t put users in a box.
The pipe is a key element for keeping programs simple while enabling
extensibility. A simple program in combination with a &lt;code&gt;|&lt;/code&gt; becomes so much more than what the
original author could have dreamed of.&lt;/p&gt;

&lt;p&gt;I hope this post helped you learn something, if not, just pipe it to
&lt;code&gt;/dev/null&lt;/code&gt;.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Life of a GitHub Action</title>
                <link>https://blog.jessfraz.com/post/the-life-of-a-github-action/</link>
                <pubDate>Sun, 13 Jan 2019 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-life-of-a-github-action/</guid>
                    <description>

&lt;p&gt;I thought it might be fun to write a blog post on &amp;ldquo;The Life of a GitHub Action.&amp;rdquo; When you go through
orientation at Google they walk you through &amp;ldquo;The Life of a Query&amp;rdquo; and it was one of my favorite things.
So I am re-applying the same for a GitHub Action.&lt;/p&gt;

&lt;p&gt;For those unfamiliar, Actions is a feature launched at GitHub&amp;rsquo;s Universe conference last year.
You can sign up for the beta &lt;a href=&#34;https://github.com/features/actions/&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The overall idea is scriptable GitHub, but rather than do all that hand-wavy crap to try to explain it, I will
take you through what happens when you run an Action.&lt;/p&gt;

&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Here is a typical workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I create a pull request on a repository.&lt;/li&gt;
&lt;li&gt;The pull request is merged.&lt;/li&gt;
&lt;li&gt;The branch lingers around until the end of time and eats away at the part of my soul that likes everything to be clean.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let&amp;rsquo;s focus on my pain of the lingering branches. This is totally a problem, right? So let&amp;rsquo;s solve it by creating an Action to delete branches after the pull request has been merged.&lt;/p&gt;

&lt;p&gt;All the code for this action lives &lt;a href=&#34;https://github.com/jessfraz/branch-cleanup-action&#34;&gt;here&lt;/a&gt; if you want to skip ahead.&lt;/p&gt;

&lt;h2 id=&#34;the-workflow-file&#34;&gt;The Workflow File&lt;/h2&gt;

&lt;p&gt;You can create actions from the UI or you can write the Workflow file yourself. In this post, I am just going to use a file.&lt;/p&gt;

&lt;p&gt;Here is what it ends up looking like and I will explain what everything means in comments on the file. This lives in &lt;code&gt;.github/main.workflow&lt;/code&gt; in your repository.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;## Workflow defines what we want to call a set of actions.
workflow &amp;quot;on pull request merge, delete the branch&amp;quot; {
  ## On pull_request defines that whenever a pull request event is fired this 
  ## workflow will be run.
  on = &amp;quot;pull_request&amp;quot;
  
  ## What is the ending action (or set of actions) that we are running. 
  ## Since we can set what actions &amp;quot;need&amp;quot; in our definition of an action,
  ## we only care about the last actions run here.
  resolves = [&amp;quot;branch cleanup&amp;quot;]
}

## This is our action, you can have more than one but we just have this one for 
## our example.
## I named it branch cleanup, and since it is our last action run it matches 
## the name in the resolves section above.
action &amp;quot;branch cleanup&amp;quot; {
  ## Uses defines what we are running, you can point to a repository like below 
  ## OR you can define a docker image.
  uses = &amp;quot;jessfraz/branch-cleanup-action@master&amp;quot;
  
  ## We need a github token so that when we call the github api from our
  ## scripts in the above repository we can authenticate and have permission 
  ## to delete a branch.
  secrets = [&amp;quot;GITHUB_TOKEN&amp;quot;]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;the-event&#34;&gt;The Event&lt;/h2&gt;

&lt;p&gt;Okay, since this post is called &amp;ldquo;The Life of an Action&amp;rdquo;, let&amp;rsquo;s start with wtf
actually happens. All actions get triggered on
a GitHub event. For the list of events supported, &lt;a href=&#34;https://developer.github.com/actions/creating-workflows/workflow-configuration-options/#events-supported-in-workflow-files&#34;&gt;see here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Above we chose the &lt;code&gt;pull_request&lt;/code&gt; event. This is triggered when a pull request is assigned, unassigned, labeled, unlabeled, opened, edited, closed, reopened, synchronized, a pull request review is requested, or a review request is removed.&lt;/p&gt;
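
&lt;p&gt;For a merged pull request, the event payload (the JSON the runtime later hands you via &lt;code&gt;GITHUB_EVENT_PATH&lt;/code&gt;) looks roughly like the following; this is abridged to just the fields we care about, with hypothetical values:&lt;/p&gt;

```json
{
  "action": "closed",
  "pull_request": {
    "merged": true,
    "head": {
      "ref": "my-feature-branch",
      "repo": {
        "name": "branch-cleanup-action",
        "owner": { "login": "jessfraz" }
      }
    }
  }
}
```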

&lt;p&gt;Okay let&amp;rsquo;s assume we triggered this event.&lt;/p&gt;

&lt;h3 id=&#34;something-happened-on-a-pull-request&#34;&gt;&amp;ldquo;Something&amp;rdquo; happened on a pull request&amp;hellip;.&lt;/h3&gt;

&lt;p&gt;Now, GitHub is like &amp;ldquo;oh holy shit, something happened on a pull request, let me fire all ze missiles of things that happen on
a pull request.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;Going back to our Workflow file above, GitHub says &amp;ldquo;I am going to run the workflow &amp;lsquo;on pull request merge, delete the branch&amp;rsquo;&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;What does this resolve? Oh it&amp;rsquo;s &amp;ldquo;branch cleanup&amp;rdquo;. Let me order all the Actions branch cleanup requires (in this case none) and run them in order/parallel
so we end on &amp;ldquo;branch cleanup.&amp;rdquo;&lt;/p&gt;

&lt;h2 id=&#34;the-action&#34;&gt;The Action&lt;/h2&gt;

&lt;p&gt;At this point GitHub is like &amp;lsquo;yo you guys, I need to run the &amp;ldquo;branch cleanup&amp;rdquo; Action. let me get what it is using.&amp;rsquo;&lt;/p&gt;

&lt;p&gt;This takes us back to the &lt;code&gt;uses&lt;/code&gt; section of our file. We are pointing to a repository: &lt;code&gt;jessfraz/branch-cleanup-action@master&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this repository is a Dockerfile. This Dockerfile defines the environment our action will run in.&lt;/p&gt;

&lt;h3 id=&#34;dockerfile&#34;&gt;Dockerfile&lt;/h3&gt;

&lt;p&gt;Let&amp;rsquo;s take a look at that and I will add comments to try and explain.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;## FROM defines what Docker image we are starting at. A docker image is a bunch 
## of files combined in a tarball.
## This image is all the files we need for an Alpine OS environment.
FROM alpine:latest

## This label defines our action name, we could have named it butts but
## I decided to be an adult.
LABEL &amp;quot;com.github.actions.name&amp;quot;=&amp;quot;Branch Cleanup&amp;quot;
## This label defines the description for our action.
LABEL &amp;quot;com.github.actions.description&amp;quot;=&amp;quot;Delete the branch after a pull request has been merged&amp;quot;
## We can pick from a variety of icons for our action.
## The list of icons is here: https://developer.github.com/actions/creating-github-actions/creating-a-docker-container/#supported-feather-icons
LABEL &amp;quot;com.github.actions.icon&amp;quot;=&amp;quot;activity&amp;quot;
## This is the color for the action icon that shows up in the UI when it&#39;s run.
LABEL &amp;quot;com.github.actions.color&amp;quot;=&amp;quot;red&amp;quot;

## These are the packages we are installing. Since I just wrote a shitty bash 
## script for our Action we don&#39;t really need all that much. We need bash, 
## CA certificates and curl so we can send a request to the GitHub API
## and jq so I can easily muck with JSON from bash.
RUN	apk add --no-cache \
	bash \
	ca-certificates \
	curl \
	jq

## Now I am going to copy my shitty bash script into the image.
COPY cleanup-pr-branch /usr/bin/cleanup-pr-branch

## The cmd for the container defines what arguments should be executed when 
## it is run.
## We are just going to call back to my shitty script.
CMD [&amp;quot;cleanup-pr-branch&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;the-script&#34;&gt;The Script&lt;/h3&gt;

&lt;p&gt;Below are the contents of the bash script I am executing.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;#!/bin/bash
set -e
set -o pipefail

# This is populated by our secret from the Workflow file.
if [[ -z &amp;quot;$GITHUB_TOKEN&amp;quot; ]]; then
	echo &amp;quot;Set the GITHUB_TOKEN env variable.&amp;quot;
	exit 1
fi

# This one is populated by GitHub for free :)
if [[ -z &amp;quot;$GITHUB_REPOSITORY&amp;quot; ]]; then
	echo &amp;quot;Set the GITHUB_REPOSITORY env variable.&amp;quot;
	exit 1
fi

URI=https://api.github.com
API_VERSION=v3
API_HEADER=&amp;quot;Accept: application/vnd.github.${API_VERSION}+json&amp;quot;
AUTH_HEADER=&amp;quot;Authorization: token ${GITHUB_TOKEN}&amp;quot;

main(){
    # In every runtime environment for an Action you have the GITHUB_EVENT_PATH 
    # populated. This file holds the JSON data for the event that was triggered.
    # From that we can get the status of the pull request and if it was merged.
    # In this case we only care if it was closed and it was merged.
	action=$(jq --raw-output .action &amp;quot;$GITHUB_EVENT_PATH&amp;quot;)
	merged=$(jq --raw-output .pull_request.merged &amp;quot;$GITHUB_EVENT_PATH&amp;quot;)

	echo &amp;quot;DEBUG -&amp;gt; action: $action merged: $merged&amp;quot;

	if [[ &amp;quot;$action&amp;quot; == &amp;quot;closed&amp;quot; ]] &amp;amp;&amp;amp; [[ &amp;quot;$merged&amp;quot; == &amp;quot;true&amp;quot; ]]; then
        # We only care about the closed event and if it was merged.
        # If so, delete the branch.
		ref=$(jq --raw-output .pull_request.head.ref &amp;quot;$GITHUB_EVENT_PATH&amp;quot;)
		owner=$(jq --raw-output .pull_request.head.repo.owner.login &amp;quot;$GITHUB_EVENT_PATH&amp;quot;)
		repo=$(jq --raw-output .pull_request.head.repo.name &amp;quot;$GITHUB_EVENT_PATH&amp;quot;)
		default_branch=$(
			curl -XGET -sSL \
				-H &amp;quot;${AUTH_HEADER}&amp;quot; \
				-H &amp;quot;${API_HEADER}&amp;quot; \
				&amp;quot;${URI}/repos/${owner}/${repo}&amp;quot; | jq --raw-output .default_branch
		)

		if [[ &amp;quot;$ref&amp;quot; == &amp;quot;$default_branch&amp;quot; ]]; then
			# Never delete the default branch.
			echo &amp;quot;Will not delete default branch (${default_branch}) for ${owner}/${repo}, exiting.&amp;quot;
			exit 0
		fi

		echo &amp;quot;Deleting branch ref $ref for ${owner}/${repo}...&amp;quot;
		curl -XDELETE -sSL \
			-H &amp;quot;${AUTH_HEADER}&amp;quot; \
			-H &amp;quot;${API_HEADER}&amp;quot; \
			&amp;quot;${URI}/repos/${owner}/${repo}/git/refs/heads/${ref}&amp;quot;

		echo &amp;quot;Branch delete success!&amp;quot;
	fi
}

main &amp;quot;$@&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
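&lt;p&gt;If you want to poke at the event parsing without opening a real pull request, you can point the same &lt;code&gt;jq&lt;/code&gt; invocations at a fake payload. This is only a sketch: the file path and field values below are made up, but the field names match the ones the script reads.&lt;/p&gt;

```shell
# Write a hypothetical pull_request event payload where GITHUB_EVENT_PATH
# would normally point. The values here are invented for this example.
GITHUB_EVENT_PATH=/tmp/fake-event.json
printf '%s' '{"action": "closed", "pull_request": {"merged": true, "head": {"ref": "my-feature"}}}' > "$GITHUB_EVENT_PATH"

# The same extractions the script performs.
action=$(jq --raw-output .action "$GITHUB_EVENT_PATH")
merged=$(jq --raw-output .pull_request.merged "$GITHUB_EVENT_PATH")
ref=$(jq --raw-output .pull_request.head.ref "$GITHUB_EVENT_PATH")

echo "action: $action merged: $merged ref: $ref"
```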

&lt;p&gt;So at this point GitHub has executed our script in our runtime environment.&lt;/p&gt;

&lt;p&gt;GitHub will post the status of the action back to the UI and you can see it from the Actions tab.&lt;/p&gt;

&lt;p&gt;Hopefully this has brought some clarity to how things are run in GitHub Actions. I can&amp;rsquo;t wait to see what you all build.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>You might not need Kubernetes</title>
                <link>https://blog.jessfraz.com/post/you-might-not-need-k8s/</link>
                <pubDate>Mon, 05 Nov 2018 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/you-might-not-need-k8s/</guid>
                    <description>

&lt;p&gt;I have realized recently that a lot of people think I am just a shill for
Kubernetes and I am not. What I have done is write a few blog posts on
some interesting problems to be solved in Kubernetes. &lt;em&gt;But&lt;/em&gt; I would like to
emphasize that those problems are pretty exclusive to the way Kubernetes was
designed and you could easily build your own orchestrator without them.&lt;/p&gt;

&lt;h2 id=&#34;use-containerd&#34;&gt;Use Containerd&lt;/h2&gt;

&lt;p&gt;If you need an example of a custom, minimal orchestrator with containerd you
should check out &lt;a href=&#34;https://github.com/ehazlett/stellar/&#34;&gt;stellar&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Or see my &lt;a href=&#34;https://blog.jessfraz.com/post/secret-design-docs-multi-tenant-orchestrator/&#34;&gt;design doc for a multi-tenant orchestrator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ll let you dive into that in your own time though. Let&amp;rsquo;s take a new look at
a blog post I wrote about &lt;a href=&#34;https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/&#34;&gt;Building images securely on Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I feel like I should have more clearly stated how this problem is pretty
exclusive to Kubernetes. It&amp;rsquo;s also not really a hard problem. The hard problem
I was solving in that post was &lt;em&gt;not&lt;/em&gt; how to build images on Kubernetes but how to
build images as an unprivileged user in Linux. &lt;em&gt;That&lt;/em&gt; is a hard problem. And
a serious problem for companies who don&amp;rsquo;t allow root on their machines.&lt;/p&gt;

&lt;p&gt;The easier choice, if all you need to do is build an image &lt;em&gt;and&lt;/em&gt; you are
already using containerd, is to run
&lt;a href=&#34;https://github.com/moby/buildkit&#34;&gt;buildkit&lt;/a&gt; on the same machine; then
you can use the buildkit API library to build your Dockerfiles.&lt;/p&gt;

&lt;p&gt;Or just run docker-in-docker; I have done this for years on my CI with
absolutely no problems.&lt;/p&gt;

&lt;p&gt;Anyways, the point I am trying to make is you should use whatever is the easiest
thing for your use case and not just what is popular on the internet. With
complexity comes a steep learning curve and with a massive number of pluggable
layers comes yak shaves until the end of time.&lt;/p&gt;

&lt;p&gt;Think for yourselves, don&amp;rsquo;t be sheep.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/sheep.gif&#34; alt=&#34;/img/sheep.gif&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Installing and Using Wireguard, obviously with containers</title>
                <link>https://blog.jessfraz.com/post/installing-and-using-wireguard/</link>
                <pubDate>Thu, 14 Jun 2018 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/installing-and-using-wireguard/</guid>
                    <description>

&lt;p&gt;&lt;a href=&#34;https://www.wireguard.com/&#34;&gt;Wireguard&lt;/a&gt; is the hip, new way to VPN :P&lt;/p&gt;

&lt;p&gt;No, but seriously I wanted to try it out because it is super interesting and
I think the direction it is going is awesome. Read about it
&lt;a href=&#34;https://www.wireguard.com/#about-the-project&#34;&gt;on their website&lt;/a&gt; if
you have not already.&lt;/p&gt;

&lt;p&gt;What is cool about Wireguard is it integrates into the Linux
networking stack so you have a lot of power over interactions with it. In other
words, it is very easy to move the interface into a specific container&amp;rsquo;s
network namespace. Or just use it on your host.&lt;/p&gt;

&lt;p&gt;If you are new to my blog, I HATEEEE installing things on my host. I run
everything in containers. Wireguard is a kernel module. BUT guess what,
literally anything can be run in a container. This post is going
to go over how to install the Wireguard module by using a container and how to
run the tools from a container as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE (April 2020):&lt;/strong&gt; You might want to use &lt;a href=&#34;https://tailscale.com&#34;&gt;Tailscale&lt;/a&gt;. It is simple to install
and cross platform since it uses the go implementation of wireguard. Then you don&amp;rsquo;t have to
mess with the kernel!&lt;/p&gt;

&lt;p&gt;I will never forget this thread from 2017 ;) so glad to see the go implementation happen!&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Soooooo waiting for the userspace portable Go implementation.&lt;/p&gt;&amp;mdash; Filippo Valsorda 🇮🇹 (@FiloSottile) &lt;a href=&#34;https://twitter.com/FiloSottile/status/877581856387944450?ref_src=twsrc%5Etfw&#34;&gt;June 21, 2017&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;h2 id=&#34;installing&#34;&gt;Installing&lt;/h2&gt;

&lt;p&gt;I wrote a &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/wireguard/install/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;
for installing the kernel module.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ docker run --rm -it \
 	--name wireguard \
 	-v /lib/modules:/lib/modules \
 	-v /usr/src:/usr/src:ro \
 	r.j3ss.co/wireguard:install
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This only works if you have your kernel headers installed in &lt;code&gt;/usr/src&lt;/code&gt; and
your kernel allows kernel modules (&lt;code&gt;CONFIG_MODULES=y&lt;/code&gt;). This will change your kernel modules on your
host since you are mounting that directory.&lt;/p&gt;
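&lt;p&gt;A quick way to check before running the installer: &lt;code&gt;/proc/modules&lt;/code&gt; only exists when the running kernel was built with &lt;code&gt;CONFIG_MODULES=y&lt;/code&gt;. A small sketch:&lt;/p&gt;

```shell
# /proc/modules is only present when the kernel supports loadable
# modules (CONFIG_MODULES=y), so its existence is a cheap sanity check
# before trying to build and insert the wireguard module.
if [ -e /proc/modules ]; then
	modules_status="modules enabled"
else
	modules_status="modules disabled"
fi
echo "$modules_status"
```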

&lt;p&gt;If you are like me and set &lt;code&gt;CONFIG_MODULES=n&lt;/code&gt; then you can use my
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/kernel-builder/Dockerfile&#34;&gt;kernel-builder Dockerfile&lt;/a&gt;
to build a custom kernel.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ docker run --rm -it \
    -v /usr/src:/usr/src \
    -v /lib/modules:/lib/modules \
    -v /boot:/boot \
    --name kernel-builder \
    r.j3ss.co/kernel-builder
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That will pop you into a bash shell where you can run the following
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/kernel-builder/build_kernel&#34;&gt;build script&lt;/a&gt;
to build a specific kernel version.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;# build_kernel 4.17.1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That saves the &lt;code&gt;vmlinuz&lt;/code&gt; to &lt;code&gt;/boot&lt;/code&gt; (on your host, since you mounted that directory) where you can then update your initramfs
for the new image and add it to your bootloader if needed.&lt;/p&gt;

&lt;h2 id=&#34;using-the-tools&#34;&gt;Using the tools&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;wg&lt;/code&gt; is the command for interacting with Wireguard. You can learn more about it
in their &lt;a href=&#34;https://www.wireguard.com/quickstart/#command-line-interface&#34;&gt;docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I put the tools in a container and added a bash alias for them:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ type wg
wg is a function
wg () 
{ 
    docker run -it --rm --log-driver none -v /tmp:/tmp --cap-add NET_ADMIN --net host --name wg r.j3ss.co/wg &amp;quot;$@&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then you can run the following commands to try sending some packets through
Wireguard. The below steps come from the
&lt;a href=&#34;https://git.zx2c4.com/WireGuard/plain/contrib/examples/ncat-client-server/client.sh&#34;&gt;following script&lt;/a&gt;
which is &lt;code&gt;Copyright (C) 2015-2018 Jason A. Donenfeld. All Rights Reserved. GPL-2.0&lt;/code&gt;.
I merely added comments for the steps.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ export WG_PRIVATE_KEY=&amp;quot;$(wg genkey)&amp;quot;

# open file descriptor 3 with some initial details
# this is just wireguard&#39;s demo server, you can use the container to spin up
# your own, don&#39;t actually use this as your server
$ exec 3&amp;lt;&amp;gt;/dev/tcp/demo.wireguard.com/42912

$ wg pubkey &amp;lt;&amp;lt;&amp;lt;&amp;quot;$WG_PRIVATE_KEY&amp;quot; &amp;gt;&amp;amp;3

$ IFS=: read -r status server_pubkey server_port internal_ip &amp;lt;&amp;amp;3

# make sure the status is &amp;quot;OK&amp;quot;
$ echo $status
OK

# delete the link if you already had one
$ sudo ip link del dev wg0 || true

# create the wireguard link
$ sudo ip link add dev wg0 type wireguard

# save the private key to a temporary file
# obviously in a real world scenario we wouldn&#39;t be throwing these around
# all willy nilly
$ echo &amp;quot;$WG_PRIVATE_KEY&amp;quot; &amp;gt; /tmp/wg-privatekey

# configure the interface
$ wg set wg0 private-key /tmp/wg-privatekey peer &amp;quot;$server_pubkey&amp;quot; allowed-ips 0.0.0.0/0 endpoint &amp;quot;demo.wireguard.com:$server_port&amp;quot; persistent-keepalive 25

# assign the internal ip address the server gave us
$ sudo -E ip address add &amp;quot;$internal_ip&amp;quot;/24 dev wg0

# bring the interface up
$ sudo ip link set up dev wg0

# grab the server&#39;s endpoint address so we can route to it directly
$ export WG_HOST=&amp;quot;$(wg show wg0 endpoints | sed -n &#39;s/.*\t\(.*\):.*/\1/p&#39;)&amp;quot;

# keep a direct route to the server, then send both halves of the IPv4
# space (0/1 and 128/1) through wg0; they are more specific than the
# default route (0/0), so they win without deleting it
$ sudo -E ip route add $(ip route get $WG_HOST | sed &#39;/ via [0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/{s/^\(.* via [0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*/\1/}&#39; | head -n 1)
$ sudo ip route add 0/1 dev wg0
$ sudo ip route add 128/1 dev wg0
&lt;/code&gt;&lt;/pre&gt;
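&lt;p&gt;That &lt;code&gt;sed&lt;/code&gt; in the &lt;code&gt;WG_HOST&lt;/code&gt; line just pulls the host out of the &lt;code&gt;wg show wg0 endpoints&lt;/code&gt; output, which is a tab-separated &lt;code&gt;pubkey&lt;/code&gt; and &lt;code&gt;host:port&lt;/code&gt; pair per peer. A standalone sketch with a made-up endpoint line:&lt;/p&gt;

```shell
# "wg show wg0 endpoints" prints one line per peer: the peer's public
# key, a tab, then host:port. This sample line is invented.
endpoints_line="$(printf 'hJ9LdT3Xk0FakePubKey=\tdemo.wireguard.com:12912')"

# The sed from the post: capture everything between the tab and the
# last colon, i.e. the host.
wg_host="$(printf '%s\n' "$endpoints_line" | sed -n 's/.*\t\(.*\):.*/\1/p')"
echo "$wg_host"
```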

&lt;p&gt;Test that it is routing!&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ curl https://httpbin.j3ss.co/ip
{&amp;quot;origin&amp;quot;:&amp;quot;163.172.161.0&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And that&amp;rsquo;s all. I just thought it was kinda fun to use, and now it is very
easy to install :)&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Talks</title>
                <link>https://blog.jessfraz.com/post/talks/</link>
                <pubDate>Thu, 07 Jun 2018 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/talks/</guid>
                    <description>

&lt;p&gt;I figured it would be nice to have one canonical place for talks I have given.
So here it is&amp;hellip;&lt;/p&gt;

&lt;h2 id=&#34;2019&#34;&gt;2019&lt;/h2&gt;

&lt;h3 id=&#34;cern-why-open-source-firmware-is-important-https-indico-cern-ch-event-819789&#34;&gt;&lt;a href=&#34;https://indico.cern.ch/event/819789/&#34;&gt;CERN - Why Open Source Firmware is Important&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk dives into some of the problems of running servers at scale, including data from surveys about physical infrastructure and firmware concerns. It explains how open source firmware can solve some of these common problems, why open source firmware is important for security and a root of trust, and covers the state of open source firmware today.&lt;/p&gt;

&lt;h3 id=&#34;qcon-london-panel-secure-isolation-of-applications-https-www-infoq-com-presentations-secure-isolation-applications&#34;&gt;&lt;a href=&#34;https://www.infoq.com/presentations/secure-isolation-applications/&#34;&gt;QCon London - Panel: Secure Isolation of Applications&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Co-Speakers: Justin Cormack, Per Buer, Allison Randal, Kenton Varda&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Applications have been isolated by lots of different means: processes, virtual machines, containers, and new methods are appearing such as SGX and in-process isolates. What is secure? Have Spectre and Meltdown changed the landscape? What should be used?&lt;/p&gt;

&lt;h3 id=&#34;qcon-london-a-journey-into-intel-s-sgx-https-www-infoq-com-presentations-intel-sgx&#34;&gt;&lt;a href=&#34;https://www.infoq.com/presentations/intel-sgx/&#34;&gt;QCon London - A Journey into Intel’s SGX&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk takes a deep dive into Intel&amp;rsquo;s SGX technology. It covers an overview of computer architecture as background and walks the audience through one version of the hardware and its flaws, as well as what changed in the next version.&lt;/p&gt;

&lt;h2 id=&#34;2018&#34;&gt;2018&lt;/h2&gt;

&lt;h3 id=&#34;re-invent-container-power-hour-https-www-youtube-com-watch-v-hcckvz25uu4&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=HCCkVz25UU4&#34;&gt;re:Invent - Container Power Hour&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Co-Speakers: &lt;a href=&#34;https://twitter.com/clare_liguori&#34;&gt;Clare Liguori&lt;/a&gt; and &lt;a href=&#34;https://twitter.com/abbyfuller&#34;&gt;Abby Fuller&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This talk goes over using containers on AWS.&lt;/p&gt;

&lt;h3 id=&#34;chaosconf-breaking-containers-https-www-youtube-com-watch-v-1hhvs4pdrrk-list-pllix5ktghjqktzdfddyujrlhc-icfhvan-index-11&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=1hhVS4pdrrk&amp;amp;list=PLLIx5ktghjqKtZdfDDyuJrlhC-ICfhVAN&amp;amp;index=11&#34;&gt;ChaosConf - Breaking Containers&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Chaos engineering and stories of bugs about containers.&lt;/p&gt;

&lt;h3 id=&#34;linuxconfau-containers-aka-crazy-user-space-fun-https-www-youtube-com-watch-v-7mzbiotciaq&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=7mzbIOtcIaQ&#34;&gt;LinuxConfAu - Containers aka crazy user space fun&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Like the movie Plan 9 from outer space, this talk covers containers from
user space. What are they? Where did they come from?
How much koolaid is involved in adopting them into your life&amp;hellip; watch for the
jokes, learn from the interesting technical details.&lt;/p&gt;

&lt;h2 id=&#34;2017&#34;&gt;2017&lt;/h2&gt;

&lt;h3 id=&#34;google-cloud-next-build-user-trust-running-containers-securely-https-www-youtube-com-watch-v-cd4ju7qzybe&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=Cd4JU7qzYbE&#34;&gt;Google Cloud Next - Build user trust: running containers securely&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Co-Speaker: Alex Mohr&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This talk covers all the ways you can secure your Kubernetes cluster using a
Certificate Authority, Authentication, Secrets and more. We  also describe and
demonstrate the ways you can use Seccomp, AppArmor, SELinux and cgroups to make
your application containers as secure as possible - so you can build organizational
and customer trust.&lt;/p&gt;

&lt;h3 id=&#34;coreos-fest-container-linux-on-the-desktop-https-www-youtube-com-watch-v-ges4-x6y278&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=gES4-X6y278&#34;&gt;CoreOS Fest - Container Linux on the Desktop!&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk covers how to build a secure desktop OS with only containers and
CoreOS Container Linux. It also describes the benefits gained from using
Container Linux as a base OS and how to go about running it on the desktop.&lt;/p&gt;

&lt;h3 id=&#34;kubecon-dance-madly-on-the-lip-of-a-volcano-https-www-youtube-com-watch-v-snjylw8fv9a&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=sNjylW8FV9A&#34;&gt;Kubecon - Dance Madly on the Lip of a Volcano&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Co-Speaker: &lt;a href=&#34;https://twitter.com/BrandonPhilips&#34;&gt;Brandon Philips&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This talk covers how we designed an awesome security release process for
Kubernetes and all its sub-projects.&lt;/p&gt;

&lt;p&gt;Open source projects strive to be transparent in everything they do, but when
it comes to fixing security patches they need to find the right balance of
“open” and “responsible.” This means vulnerabilities should be reported in
a safe way as well as patches tested and reviewed with a limited audience. The
companies that rely on Kubernetes should have time to patch their systems
before a public announcement.&lt;/p&gt;

&lt;p&gt;Various sets of infrastructure and collaboration are needed to make this
a reality. The design we used could also be applied to other projects and even
internally in your company.&lt;/p&gt;

&lt;h2 id=&#34;2016&#34;&gt;2016&lt;/h2&gt;

&lt;h3 id=&#34;container-summit-building-containers-in-pure-bash-and-c-https-containersummit-io-events-nyc-2016-videos-building-containers-in-pure-bash-and-c&#34;&gt;&lt;a href=&#34;https://containersummit.io/events/nyc-2016/videos/building-containers-in-pure-bash-and-c&#34;&gt;Container Summit - Building Containers in Pure Bash and C&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk demonstrates how to build containers from the primitives in Linux
without using a container runtime. Learn about the objects that make up
containers themselves.&lt;/p&gt;

&lt;h3 id=&#34;arrested-devops-exciting-topics-like-containers-security-https-www-youtube-com-watch-v-qps5u5hdcim&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=qPs5U5hdciM&#34;&gt;Arrested DevOps - Exciting Topics like Containers &amp;amp; Security&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://twitter.com/benjammingh&#34;&gt;Ben Hughes&lt;/a&gt; and I chat with
&lt;a href=&#34;https://twitter.com/bridgetkromhout&#34;&gt;Bridget Kromhout&lt;/a&gt; about everyone&amp;rsquo;s
favorite topic, security.&lt;/p&gt;

&lt;h3 id=&#34;github-universe-blurry-lines-between-individual-contributor-corporate-backers-https-www-youtube-com-watch-v-4iem6jk6pty&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=4Iem6JK6PtY&#34;&gt;Github Universe - Blurry lines between individual contributor &amp;amp; corporate backers&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;When working on open source projects, your contributions and opinions on the
project and its motives are usually very personal. This talk
covers intricacies of &amp;ldquo;choosing your battles&amp;rdquo; and how personal passion for
a project might conflict with corporate motives.&lt;/p&gt;

&lt;h3 id=&#34;container-camp-application-sandboxes-vs-containers-https-www-youtube-com-watch-v-mfnhsx6sjva&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=mfnhSX6SJVA&#34;&gt;Container Camp - Application Sandboxes vs. Containers&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk covers the differences between application sandboxes and containers.
The best-known sandbox is Chrome&amp;rsquo;s, which provides &amp;ldquo;hard guarantees about what
ultimately a piece of code can or cannot do no matter what its inputs are.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;At its core, the Linux Chrome sandbox uses namespaces along with seccomp and
other native features to provide these guarantees. Containers are composed of
the same primitives. What is needed for containers to provide this promise?
Can it be done by default? What steps are already being made to get towards
containers that actually &amp;ldquo;contain&amp;rdquo;? What challenges will be faced?&lt;/p&gt;

&lt;h2 id=&#34;2015&#34;&gt;2015&lt;/h2&gt;

&lt;h3 id=&#34;dockercon-eu-the-latest-in-docker-engine-https-www-youtube-com-watch-v-i7i4sy-irka&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=I7i4SY-iRkA&#34;&gt;Dockercon EU - The Latest in Docker Engine&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Co-Speaker: &lt;a href=&#34;https://twitter.com/icecrime&#34;&gt;Arnaud Porterie&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Learn about the latest capabilities in Docker Engine and how to use them in
your application. This session also covers best practices for using Engine,
troubleshooting tips, and cool lesser known features.&lt;/p&gt;

&lt;p&gt;This video has the first ever demo of Seccomp in Docker as well as a fun story
about trying to save a docker image to a floppy disk.&lt;/p&gt;

&lt;h3 id=&#34;dockercon-container-hacks-and-fun-images-https-www-youtube-com-watch-v-cysvvv1avss&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=cYsVvV1aVss&#34;&gt;DockerCon - Container Hacks and Fun Images&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk is a 100% live demo of running desktop applications in containers.
Everything from Spotify to Skype. Explore some of the more interesting things
you can containerize on Linux. View first hand different workflows for how to
run/build different apps in containers. This talk covers desktop apps as well
as some other apps you would have never thought could run in a container.&lt;/p&gt;

&lt;h3 id=&#34;container-camp-willy-wonka-of-containers-https-www-youtube-com-watch-v-gslzz8czczc&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=GsLZz8cZCzc&#34;&gt;Container Camp - Willy Wonka of Containers&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk has live demos of desktop applications in containers including Steam.&lt;/p&gt;

&lt;h3 id=&#34;hashiconf-dockerizing-all-the-things-https-www-youtube-com-watch-v-pee8hcqtfq4&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=PeE8hcQtFq4&#34;&gt;HashiConf - Dockerizing all the Things&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk goes over the way the Docker project uses containers for their
testing infrastructure as well as internal infrastructure. Find out about real
pain points solved by running things in containers as well as some different
hurdles uncovered along the way.&lt;/p&gt;

&lt;h3 id=&#34;dotgo-the-docker-trail-https-www-youtube-com-watch-v-j55awjgzfv8&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=j55aWjgzfV8&#34;&gt;DotGo - The Docker Trail&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;This talk recounts stories from the trenches of developing Docker, explaining 3
odd things her team stumbled upon in their Go code and how they fixed them. One
of which is very odd and gets into the depths of &lt;code&gt;dlopen&lt;/code&gt;-ing yourself.&lt;/p&gt;

&lt;h3 id=&#34;google-cloud-platform-podcast-containers-https-www-youtube-com-watch-v-zu8nsrnfz4m&#34;&gt;&lt;a href=&#34;https://www.youtube.com/watch?v=zu8NSrNFZ4M&#34;&gt;Google Cloud Platform Podcast - Containers&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://twitter.com/francesc&#34;&gt;Francesc Campoy&lt;/a&gt; and I talk all about
Dockercon EU and containers.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Containers, Security, and Echo Chambers</title>
                <link>https://blog.jessfraz.com/post/containers-security-and-echo-chambers/</link>
                <pubDate>Sun, 20 May 2018 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/containers-security-and-echo-chambers/</guid>
                    <description>

&lt;p&gt;There seems to be some confusion around sandboxing containers as of late,
mostly because of the recent launch of &lt;a href=&#34;https://github.com/google/gvisor&#34;&gt;gvisor&lt;/a&gt;.
Before I get into the body of this post I would like to make one thing clear.
I have no problem with gvisor itself. I think it is very technically &amp;ldquo;cool.&amp;rdquo;
I do have a problem with the messaging around it and marketing.&lt;/p&gt;

&lt;p&gt;There is a large amount of ignorance about the existing defaults that make
containers secure, which is crazy since I have written many blog posts on them
and given many talks on the subject. But I digress; let&amp;rsquo;s focus on the part of the README that
mentions sandboxing with SELinux, Seccomp, and Apparmor. It says: &amp;ldquo;However, in practice
it can be extremely difficult (if not impossible) to reliably define a policy
for arbitrary, previously unknown applications, making this approach challenging
to apply universally.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;Greetings. Reporting for duty. Literally I am the person who can do that. I was
the person who &lt;em&gt;did&lt;/em&gt; do that. I added the default Seccomp profile to Docker and
maintained the default Apparmor profile. I have also done A LOT of research
with regard to Linux kernel isolation and making containers secure.
I also literally reported for duty, two years ago and made the patch to add the
Seccomp annotation to Kubernetes&amp;hellip; with the hopes of eventually turning on
a default filter.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;&lt;a href=&#34;https://twitter.com/nathanmccauley?ref_src=twsrc%5Etfw&#34;&gt;@nathanmccauley&lt;/a&gt; &lt;a href=&#34;https://twitter.com/brendandburns?ref_src=twsrc%5Etfw&#34;&gt;@brendandburns&lt;/a&gt; &lt;a href=&#34;https://twitter.com/kelseyhightower?ref_src=twsrc%5Etfw&#34;&gt;@kelseyhightower&lt;/a&gt; &lt;a href=&#34;https://twitter.com/thockin?ref_src=twsrc%5Etfw&#34;&gt;@thockin&lt;/a&gt; I already offered to help&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/717215121840451584?ref_src=twsrc%5Etfw&#34;&gt;April 5, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;All big organizations have problems with &amp;ldquo;not invented here.&amp;rdquo; I tried my very
best to inform everyone how these sandboxing mechanisms work but I am going to
try one last time here.&lt;/p&gt;

&lt;h2 id=&#34;more-than-one-layer-of-security-required&#34;&gt;More than One Layer of Security Required&lt;/h2&gt;

&lt;p&gt;In my last blog post,
&lt;a href=&#34;https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/&#34;&gt;Hard Multi-Tenancy in Kubernetes&lt;/a&gt;,
I mentioned this as well. It is also a good read if you want to learn about the
thought process for secure isolation. To be truly secure you need more than one
layer of security so that when there is a vulnerability in one layer, the attacker also
needs a vulnerability in another layer to bypass the isolation mechanism.&lt;/p&gt;

&lt;p&gt;In Docker, we worked really hard to create secure defaults for the container
isolation itself. I then tried to bring all those up the stack into
orchestrators.&lt;/p&gt;

&lt;p&gt;Container runtimes have security layers defined by Seccomp, Apparmor, kernel
namespaces, cgroups, capabilities, and an unprivileged Linux user. All the
layers don’t perfectly overlap, but a few do.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s go over some of the ones that do overlap. I could do them all, but
I would be here all day. The &lt;code&gt;mount&lt;/code&gt; syscall is blocked by the default
Apparmor profile, the default Seccomp profile, and by dropping &lt;code&gt;CAP_SYS_ADMIN&lt;/code&gt;. This is a neat
example as it is literally three layers. Wow.&lt;/p&gt;

&lt;p&gt;Everyone&amp;rsquo;s favorite way to complain about containers, or to prove that they
know something, is creating a fork bomb. Well, this is actually easily
preventable: with the PID cgroup you can set a max number of processes per
container.&lt;/p&gt;
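&lt;p&gt;For example, a hypothetical Compose file using the &lt;code&gt;pids_limit&lt;/code&gt; key, which maps down to the PID cgroup&amp;rsquo;s &lt;code&gt;pids.max&lt;/code&gt; (the same knob &lt;code&gt;docker run --pids-limit&lt;/code&gt; sets):&lt;/p&gt;

```yaml
services:
  app:
    image: alpine:latest
    # Cap this container at 100 processes; a fork bomb hits the
    # ceiling instead of taking down the host.
    pids_limit: 100
```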

&lt;p&gt;What about things that are not namespaced by the Linux kernel? Dropping &lt;code&gt;CAP_SYS_TIME&lt;/code&gt;
prevents people from changing the time inside containers. And the default
Seccomp profile prevents modifying or interacting with the kernel keyring.&lt;/p&gt;

&lt;p&gt;If you would like a list of all the syscalls prevented by the default Seccomp
profile, it would behoove you to read the list &lt;a href=&#34;https://github.com/jessfraz/community/blob/1eaf775381bbd6d3c6e32816144beba1bca807b4/contributors/design-proposals/seccomp.md#default-profile&#34;&gt;here&lt;/a&gt;. It also has descriptions of each.&lt;/p&gt;
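&lt;p&gt;If you have never looked at one, a Docker seccomp profile is just JSON you can hand to &lt;code&gt;docker run --security-opt seccomp=profile.json&lt;/code&gt;. The sketch below is &lt;em&gt;not&lt;/em&gt; the real default profile, only its shape: deny everything by default and allow an explicit list.&lt;/p&gt;

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "close", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```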

&lt;p&gt;Two years ago, there was &lt;a href=&#34;https://www.nccgroup.trust/us/our-research/understanding-and-hardening-linux-containers/&#34;&gt;a great Whitepaper from NCC Group about hardening
linux containers&lt;/a&gt;. Still to this day I get all the good feels when I see all the mentions of my work in it. But if you have any hesitations towards the defaults in Docker or otherwise I suggest you educate yourself first.&lt;/p&gt;

&lt;p&gt;I will call out my favorite chart here though. Below are the defaults from
various container runtimes as of two years ago. Note the strong defaults in
Docker. The paper also explains the defaults at length and is a less
biased source than me explaining my own work.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/defaults.png&#34; alt=&#34;defaults.png&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href=&#34;https://github.com/jessfraz/docker/blob/6837cfc13cba842186a7261aa9bbd3a8755fd11e/docs/security/non-events.md&#34;&gt;non-events&lt;/a&gt; are also an interesting read.&lt;/p&gt;

&lt;h2 id=&#34;breaking-changes&#34;&gt;Breaking Changes&lt;/h2&gt;

&lt;p&gt;A lot of the pushback I got on the default Seccomp profile was related to it
being a breaking change.&lt;/p&gt;

&lt;p&gt;I get that this is very scary. No really, I get it. When we added it to Docker,
guess who got paged when the Docker apt repo was down and it was on the front
page of Hacker News with tech bros crying: me. So I was absolutely horrified at
the thought of making a breaking change that might land on the front page of
Hacker News as well.&lt;/p&gt;

&lt;p&gt;The last thing I ever wanted to do was cause a breaking change. That shit was
terrifying. I lost sleep for weeks worrying about it. I tested every single
Dockerfile on GitHub with the default profile. I ran &lt;code&gt;strace&lt;/code&gt; on each, looking for
EPERMs, and sent the results to Elasticsearch. I made a project just for it:
&lt;a href=&#34;https://github.com/jessfraz/strace2elastic&#34;&gt;strace2elastic&lt;/a&gt;. It&amp;rsquo;s super dumb
but was fun.&lt;/p&gt;

&lt;p&gt;By the time we released, I knew I had done everything in my power to
make sure we didn&amp;rsquo;t break anyone. The release actually went really well too.
However, when you try to explain this to other projects, they of course have
their doubts, and I do not blame them. I wish there were a better way
to trust the genuine people who just want to help in open source.&lt;/p&gt;

&lt;h2 id=&#34;so-why-all-the-confusion-and-fud&#34;&gt;So why all the confusion and FUD?&lt;/h2&gt;

&lt;p&gt;Well, it&amp;rsquo;s simple really. Marketing. The tech never sells itself. It&amp;rsquo;s all
about marketing.&lt;/p&gt;

&lt;p&gt;When you work at a large organization you are surrounded by an echo chamber. So
if everyone in the org is saying &amp;ldquo;containers are not secure,&amp;rdquo; you are bound to
believe it and not research the actual facts. To be clear, I am not saying containers
are secure; literally nothing is secure. But spreading FUD, out of ignorance or without
doing proper research, is harmful to the facts and to the hard work many people have put
into making containers at least decently isolated by default.&lt;/p&gt;

&lt;h2 id=&#34;operability&#34;&gt;Operability&lt;/h2&gt;

&lt;p&gt;There is another problem I have with gVisor: in my opinion, it would be
quite hard to operate. People enjoy debugging with certain workflows, and
reimplemented syscalls are going to be quite hard to debug. Just look up one of
Bryan Cantrill&amp;rsquo;s rants on unikernels, which are harder to debug as well.&lt;/p&gt;

&lt;p&gt;I believe it puts a lot of extra burden on the operator. At the end of the
day you are left with a decision: trust or
research the container security defaults, or use a new runtime that
re-implements all the syscalls in user space and has poorer performance because of
that. I also have yet to see a report showing that running in user space is
actually more secure. The implementation could be closely related to that of
User Mode Linux, and even User Mode Linux was never fully vetted for
multi-tenancy, so what are you really gaining? I truly believe it cannot
possibly be more secure than the defaults for containers today, and surely it
is not as secure as a real hypervisor. But, again, nothing is actually secure.&lt;/p&gt;

&lt;p&gt;I am not trying to throw shade at gVisor, merely to clear up some FUD in the
world of open source marketing. I truly believe that people choosing projects
should research them and not just pick something shiny that came
out of Big Corp. I also believe that people at Big Corp should embrace the work
and ideas of people outside their echo chamber. Sometimes those people even work in the
echo chamber but just don&amp;rsquo;t abide by its beliefs.&lt;/p&gt;

&lt;p&gt;Open your minds and hearts to the ideas of other people and you might just
create something you never thought was possible in the first place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; See &lt;a href=&#34;https://blog.hansenpartnership.com/measuring-the-horizontal-attack-profile-of-nabla-containers/&#34;&gt;James Bottomley&amp;rsquo;s research on Horizontal Attack Profile&lt;/a&gt; which shows gVisor uses more syscalls than a standard docker container.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Hard Multi-Tenancy in Kubernetes</title>
                <link>https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/</link>
                <pubDate>Fri, 18 May 2018 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/</guid>
                    <description>

&lt;p&gt;&lt;strong&gt;EDIT:&lt;/strong&gt; See my post on a &lt;a href=&#34;https://blog.jessfraz.com/post/secret-design-docs-multi-tenant-orchestrator/&#34;&gt;design doc for a multi-tenant orchestrator&lt;/a&gt; instead.
I wrote this when an internal requirement was to use Kubernetes but I do not personally think you should use Kubernetes for this use case.&lt;/p&gt;

&lt;p&gt;Kubernetes is the new kernel. We can refer to it as a “cluster kernel” versus
the typical operating system kernel. This means a lot of great things for users
trying to deploy applications. It also leads to a lot of the same challenges we
have already faced with operating system kernels. One of which being privilege
isolation. In Kubernetes, we refer to this as multi-tenancy, or the dream of
being able to isolate tenants of a cluster.&lt;/p&gt;

&lt;p&gt;The
&lt;a href=&#34;https://docs.google.com/document/d/15w1_fesSUZHv-vwjiYa9vN_uyc--PySRoLKTuDhimjc/edit#heading=h.3dawx97e3hz6&#34;&gt;models for multi-tenancy&lt;/a&gt;
have been discussed at length in the
&lt;a href=&#34;https://docs.google.com/document/d/1SkVdOPR4jozYDT8ro51hU3yrf1sHS8Gez73xM3PCsVo/edit&#34;&gt;community’s multi-tenancy working group&lt;/a&gt;.
&lt;strong&gt;NOTE: to view most of these Google docs you need to be a member of the
&lt;a href=&#34;https://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy&#34;&gt;kubernetes-wg-multitenancy Google group&lt;/a&gt;.&lt;/strong&gt;
There have also been &lt;a href=&#34;https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit&#34;&gt;some proposals&lt;/a&gt;
offered to solve each model. The current model of tenancy in Kubernetes assumes
the cluster is the security boundary. You can build a SaaS on top of Kubernetes
but you need to bring your own trusted API and not just use the Kubernetes API.
Of course, with that comes &lt;a href=&#34;https://docs.google.com/document/d/1PjlsBmZw6Jb3XZeVyZ0781m6PV7-nSUvQrwObkvz7jg/edit&#34;&gt;a lot of considerations you must also think about
when building your cluster securely for a SaaS even&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The model I am going to be focusing on for this post is “hard multi-tenancy.”
This implies that tenants do not trust each other and are assumed to be actively
malicious and untrustworthy. Hard multi-tenancy means multiple tenants in the
same cluster should not have access to anything from other tenants. In this
model, the goal is to have the security boundary be the Kubernetes namespace object.&lt;/p&gt;

&lt;p&gt;The hard multi-tenancy model has not been solved yet, but there have been
&lt;a href=&#34;https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit&#34;&gt;a few proposals&lt;/a&gt;.
All systems have weaknesses and nothing is perfect. With a system as complex and
large as Kubernetes it is hard to trust the entire system to not be vulnerable.
In this regard and in the regard of the existing proposals, one single exploit
in Kubernetes leads to full supervisor privileges and then it’s game over.&lt;/p&gt;

&lt;p&gt;This is not an acceptable way to secure a system and guarantee isolation
between tenants. I will cover in this post why having more than one layer of
security is so important.&lt;/p&gt;

&lt;p&gt;The attack surface with the highest risk of logical vulnerabilities is the
Kubernetes API. This must be isolated between tenants. The attack surface with
the highest risk of remote code execution is the set of services running in containers.
These must also be isolated between tenants.&lt;/p&gt;

&lt;p&gt;If you take one look at the open source repository and the speed at which
Kubernetes is growing, it is already taking on a lot of the same aspects of
the monolithic kernels of Windows, Mac OS X, Linux, and FreeBSD. Fortunately,
a lot of solutions to privilege separation in monolithic kernels have already
been researched and implemented.&lt;/p&gt;

&lt;p&gt;The solution I am going to focus on is
&lt;a href=&#34;http://nathandautenhahn.com/downloads/publications/asplos200-dautenhahn.pdf&#34;&gt;Nested Kernel: Intra-Kernel Isolation&lt;/a&gt;.
This paper solves the problem of privilege isolation in monolithic kernels by
nesting a small kernel inside the monolithic kernel.&lt;/p&gt;

&lt;h2 id=&#34;more-than-one-layer-of-security-required&#34;&gt;More than One Layer of Security Required&lt;/h2&gt;

&lt;p&gt;What we know of today as “sandboxes” are defined as having multiple layers of
security. For example, the sandbox I made for the
&lt;a href=&#34;https://contained.af&#34;&gt;contained.af&lt;/a&gt; playground has
security layers defined by seccomp, apparmor, kernel namespaces, cgroups,
capabilities, and an unprivileged Linux user. All those layers don’t necessarily
overlap, but a few do. If a user were to have an apparmor or seccomp bypass and
they tried to call &lt;code&gt;mount&lt;/code&gt; inside the container, the dropped Linux capability
&lt;code&gt;CAP_SYS_ADMIN&lt;/code&gt; would still prevent them from executing &lt;code&gt;mount&lt;/code&gt;.&lt;/p&gt;
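&lt;p&gt;A quick sketch of that layering (assuming stock Docker defaults): even as root inside the container, &lt;code&gt;mount&lt;/code&gt; is denied independently by the missing capability, the default seccomp profile, and the default apparmor profile:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# Stock Docker drops CAP_SYS_ADMIN, and the default seccomp and
# apparmor profiles each deny mount on their own, so this fails
# even as root in the container -- bypassing one layer is not enough.
docker run --rm alpine mount -t tmpfs none /mnt
&lt;/code&gt;&lt;/pre&gt;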

&lt;p&gt;These layers ensure that one vulnerability in the system does not take out the
entire security of the system. We need this for hard multi-tenancy in Kubernetes
as well. This is why all the existing proposals are insufficient. We need at
least two layers and these comprise only one.&lt;/p&gt;

&lt;p&gt;With intra-kernel isolation applied to Kubernetes, we get two layers. Let me
dive in a bit deeper into how this would work.&lt;/p&gt;

&lt;h2 id=&#34;isolation-via-namespaces&#34;&gt;Isolation via Namespaces&lt;/h2&gt;

&lt;p&gt;The existing proposals for hard multi-tenancy assume that the security boundary
for multiple users on Kubernetes would be the namespace. “Namespace” in this
regard being those defined by Kubernetes. The proposals all have the weakness
that if you exploit one part of Kubernetes you then have privileges to
traverse namespaces and therefore traverse the tenants.&lt;/p&gt;

&lt;p&gt;With intra-kernel isolation, the namespace would still be the security boundary.
However, instead of all tenants sharing the main Kubernetes system services, each
namespace would have its own “nested” Kubernetes system services, meaning
the api-server, kube-proxy, etc. would all be running individually in a pod
in that namespace. The tenant who deploys to that namespace would then have
no access to the actual root-level Kubernetes system services, merely the
ones running in their namespace. An exploit in Kubernetes would not be game
over for the whole system, but only game over within that namespace.&lt;/p&gt;

&lt;p&gt;Another security boundary would be the container isolation itself. These
pods could be further locked down by existing resources like
&lt;code&gt;PodSecurityPolicy&lt;/code&gt; and &lt;code&gt;NetworkPolicy&lt;/code&gt;. With the ever-growing innovation in
the ecosystem, you could even run VMs (katacontainers) for hardware isolation
between containers, giving you the highest level of security between the services
in your cluster.&lt;/p&gt;

&lt;p&gt;For those familiar with Linux namespaces you can think of this as a &lt;code&gt;clone&lt;/code&gt;
for Kubernetes. The design is roughly similar.&lt;/p&gt;

&lt;p&gt;For example, on Linux, cloning new namespaces looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-c&#34;&gt;clone(CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWUTS | CLONE_NEWNET | CLONE_NEWPID… )
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So when you create a new Kubernetes namespace with intra-kernel isolation, this
roughly translates to (purely an example, not to be taken literally):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-c&#34;&gt;clone(CLONE_NEWAPISERVER | CLONE_NEWKVSTORE | CLONE_NEWKUBEPROXY | CLONE_NEWKUBEDNS…)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In Linux, namespaces control what a process can see. This holds true for users
designated to a namespace in Kubernetes. Since each namespace would have its
own system services, those would be all the tenant could see.&lt;/p&gt;

&lt;p&gt;Unlike the pseudocode above, the Kubernetes namespace would automatically get
new instances of each system service. This is more in line with the design of
Solaris Zones or FreeBSD Jails.&lt;/p&gt;

&lt;p&gt;In my blog post
&lt;a href=&#34;https://blog.jessfraz.com/post/containers-zones-jails-vms/&#34;&gt;Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs&lt;/a&gt;,
I go over the differences between those various isolation techniques.
In this design, we are more in line with that of Zones or Jails. Containers come
with all the parts. The namespaces in Kubernetes should automatically set up a
well-isolated world, just like that of Zones or Jails, without the user having
to worry about whether they configured it correctly.&lt;/p&gt;

&lt;p&gt;Another problem with namespaces in Linux is that
&lt;a href=&#34;https://blog.jessfraz.com/post/two-objects-not-namespaced-linux-kernel/&#34;&gt;not everything is namespaced&lt;/a&gt;.
This design ensures that every part of Kubernetes is isolated per tenant.&lt;/p&gt;

&lt;h2 id=&#34;isolation-via-resource-control&#34;&gt;Isolation via Resource Control&lt;/h2&gt;

&lt;p&gt;There are still a few unanswered questions just with the design above alone.
Let’s take a look at another control mechanism in Linux: &lt;code&gt;cgroups&lt;/code&gt;.
Cgroups control what a process can use. They are the masters of resource control.&lt;/p&gt;

&lt;p&gt;This concept would need to be applied to Kubernetes namespaces as well. Rather
than controlling resources like memory consumption and CPU, it would apply to
nodes. The tenant within a namespace would only be able to access certain nodes
designated to it. All the namespace services would be isolated at the machine
level as well. No services from different tenants would run on the same machine.
This could always be a setting in the future but the default should be that
nodes are not shared.&lt;/p&gt;

&lt;p&gt;This model allows us to designate a set of kubelets on various nodes for our
nested API server to use.&lt;/p&gt;

&lt;p&gt;At this point we have isolation of what a tenant can see (Kubernetes namespace)
and what they can use (nodes designated to a namespace).&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/mtimage0.png&#34; alt=&#34;mtimage0.png&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, if the system services for the namespaces were isolated with
nested VM containers (katacontainers), and you considered all the other variables
outlined in &lt;a href=&#34;https://docs.google.com/document/d/1PjlsBmZw6Jb3XZeVyZ0781m6PV7-nSUvQrwObkvz7jg/edit&#34;&gt;this design doc&lt;/a&gt;, then those services could share nodes. This
would give you somewhat better resource utilization than above. It is illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/mtimage1.png&#34; alt=&#34;mtimage1.png&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Taking it a step further for even better resource utilization, if you
isolated the whole system and containers into fully sandboxed or VM containers
as per &lt;a href=&#34;https://docs.google.com/document/d/1PjlsBmZw6Jb3XZeVyZ0781m6PV7-nSUvQrwObkvz7jg/edit&#34;&gt;this design doc&lt;/a&gt;, then all services could share nodes. This is illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/mtimage2.png&#34; alt=&#34;mtimage2.png&#34; /&gt;&lt;/p&gt;

&lt;h2 id=&#34;tenants-that-span-multiple-namespaces&#34;&gt;Tenants that Span Multiple Namespaces&lt;/h2&gt;

&lt;p&gt;A few times it has been brought up in the working group that tenants might need
to span multiple namespaces. While I don’t believe this should be a default,
I don’t see a problem with it.&lt;/p&gt;

&lt;p&gt;Let’s take a look again at how namespaces work in Linux and how we use them
for containers. Each namespace is a file descriptor. You can share a namespace
between containers by designating the file descriptor for the namespace you want
to share and calling &lt;code&gt;setns&lt;/code&gt;.&lt;/p&gt;
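<p>A minimal sketch of that in C (the pid here is hypothetical, and joining another process’s namespace generally requires <code>CAP_SYS_ADMIN</code> in the target user namespace):</p>

&lt;pre&gt;&lt;code class=&#34;language-c&#34;&gt;#define _GNU_SOURCE
#include &amp;lt;sched.h&amp;gt;
#include &amp;lt;fcntl.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

int main(void) {
    /* Each namespace of pid 1234 is exposed as a file descriptor. */
    int fd = open(&amp;quot;/proc/1234/ns/net&amp;quot;, O_RDONLY);
    if (fd &amp;lt; 0) {
        perror(&amp;quot;open&amp;quot;);
        return 1;
    }

    /* Join that network namespace; children spawned after this inherit it. */
    if (setns(fd, CLONE_NEWNET) &amp;lt; 0) {
        perror(&amp;quot;setns&amp;quot;);
        return 1;
    }
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;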

&lt;p&gt;In Kubernetes, we could implement the same sort of design. A superuser could
designate that a namespace is to be shared between tenants with access to that namespace.&lt;/p&gt;

&lt;p&gt;Overall, this design draws on the prior art of kernel isolation techniques
and the lessons learned from them.&lt;/p&gt;

&lt;p&gt;With the growing ecosystem and core of Kubernetes it’s important to have more
than one layer of security between tenants. Security techniques such as failsafe
defaults, complete mediation, least privilege, and least common mechanism are
very popular but hard to apply to monolithic kernels. Kubernetes by default
shares everything and has many different, sometimes very broken, drivers and
plugins just like that of an operating system kernel. Applying the same
isolation techniques of kernels to Kubernetes will allow for a better privilege
isolation solution.&lt;/p&gt;

&lt;h2 id=&#34;where-does-this-leave-us&#34;&gt;Where does this leave us?&lt;/h2&gt;

&lt;p&gt;We have fully isolated and addressed our threat model in a very strong way.
The attack surface with the highest risk of logical vulnerabilities, the
Kubernetes API, has full logical separation in that each tenant has their own.
The attack surface with the highest risk of remote code execution, the containers
themselves, has full virtualized separation from other tenants. This isolation
comes either from designating nodes to tenants or from
running containers that use hardware isolation. The only viable path to other
tenants is getting remote code execution in some service, then breaking out of
the container (and/or VM).&lt;/p&gt;

&lt;p&gt;The first diagram, intra-kernel isolation via node resource control, is
close to the same as having two fully separate clusters operated by one superuser.
Since nodes are designated to each tenant, you do not really gain more efficient
resource utilization either.&lt;/p&gt;

&lt;p&gt;The model with the highest gain in resource utilization comes from securely setting
up your cluster to use nested virtual machines as containers, or fully sandboxing
the containers themselves, so that the boundary is the container, not the node.
This eases the operator’s pain of running more than one cluster and allows
resources to be used more effectively while still sustaining more than one
layer of security.&lt;/p&gt;

&lt;p&gt;None of this is set in stone. This is my idea for solving this problem. If
you are interested in discussing this or other aspects of tenancy in Kubernetes
please join the &lt;a href=&#34;https://github.com/kubernetes/community/tree/master/wg-multitenancy&#34;&gt;working group&lt;/a&gt;. I look forward to discussing this there. Thanks!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Building Container Images Securely on Kubernetes</title>
                <link>https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/</link>
                <pubDate>Tue, 20 Mar 2018 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/</guid>
                    <description>

&lt;p&gt;A lot of people seem to want to be able to build container images in Kubernetes
without mounting in the docker socket or doing anything to compromise the
security of their cluster.&lt;/p&gt;

&lt;p&gt;This was all brought to my attention when my awesome coworker &lt;a href=&#34;https://twitter.com/gabrtv&#34;&gt;Gabe
Monroy&lt;/a&gt;
and I were chatting with &lt;a href=&#34;https://twitter.com/michellenoorali&#34;&gt;Michelle Noorali&lt;/a&gt; over pizza at
Kubecon in Austin last December.&lt;/p&gt;

&lt;p&gt;Here is pretty much how it went down:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Gabe: I’d love to switch our clusters to a lightweight runtime like 
containerd, but we need those docker build apis right now. I wish someone 
would come up with an unprivileged container image builder..

Me: Oh that’s easy

Gabe: Bullshit, if it was easy someone would have done it already. I’ve wanted 
this for years. Please pass the ranch dressing.

Me: I’m telling you you’re wrong. I’ll prove it to you. It’s easy.

Judgy Four Seasons Staff: Excuse me, can I help you?

Me: Nah we’re good. Actually if you could grab me a slice of that Papa John&#39;s 
jalapeno &amp;amp; pineapple that would be great.

.. next morning ..

100 lines of bash shaming in Gabe&#39;s inbox proving it could be done.
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;prior-art&#34;&gt;Prior Art&lt;/h2&gt;

&lt;p&gt;A few years ago when I worked at Docker, &lt;a href=&#34;https://github.com/stevvooe&#34;&gt;Stephen Day&lt;/a&gt;
and &lt;a href=&#34;https://twitter.com/crosbymichael&#34;&gt;Michael Crosby&lt;/a&gt; did a POC demo of
a standalone image builder.&lt;/p&gt;

&lt;p&gt;It still actually exists today in
&lt;a href=&#34;https://github.com/stevvooe/distribution/tree/dist-demo&#34;&gt;a fork of docker/distribution on Stephen&amp;rsquo;s github&lt;/a&gt;.
It consisted of a &lt;code&gt;dist&lt;/code&gt; command line tool for interacting with the registry,
and runc. Combined with the awesome powers of bash like so (&lt;code&gt;nsinit&lt;/code&gt;
was &lt;code&gt;runc&lt;/code&gt; before &lt;code&gt;runc&lt;/code&gt; was A Thing):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;#!/bin/bash

function FROM () {
    mkdir rootfs
    dist pull &amp;quot;$1&amp;quot; rootfs
}

function USERNS() {
    export nsinituserns=&amp;quot;$1&amp;quot;
}

function CWD() {
    export nsinitcwd=&amp;quot;$1&amp;quot;
}

function MEM() {
    export nsinitmem=&amp;quot;$1&amp;quot;
}

function EXEC() {
    nsinit exec \
        --tty \
        --rootfs &amp;quot;$(pwd)/rootfs&amp;quot; \
        --create \
        --cwd=&amp;quot;$nsinitcwd&amp;quot; \
        --memory-limit=&amp;quot;$nsinitmem&amp;quot; \
        --memory-swap -1 \
        --userns-root-uid=&amp;quot;$nsinituserns&amp;quot; \
        -- $@
}

function RUN() {
    t=&amp;quot;\&amp;quot;$@\&amp;quot;&amp;quot;
    EXEC sh -c &amp;quot;$t&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So in their demo, you would source the above bash script and then execute your
Dockerfile as if it were also a bash script. Pretty cool, right?&lt;/p&gt;

&lt;p&gt;So that is what I sent to Gabe&amp;rsquo;s inbox to prove it was possible but also:
&amp;ldquo;Look, I will make you something nice.&amp;rdquo;&lt;/p&gt;

&lt;h2 id=&#34;designing-something-nice&#34;&gt;Designing Something Nice&lt;/h2&gt;

&lt;p&gt;So I went out on my mission to make them something nice, which led me through
a sea of existing tools. I collected all my findings &lt;a href=&#34;https://docs.google.com/document/d/1rT2GUSqDGcI2e6fD5nef7amkW0VFggwhlljrKQPTn0s/edit?usp=sharing&#34;&gt;in a design doc&lt;/a&gt; if you are curious what I think about the other existing tools.&lt;/p&gt;

&lt;p&gt;I didn&amp;rsquo;t want to reinvent the world; I just wanted to make it unprivileged and
a single binary with a simple user interface that could easily be swapped out
for docker.&lt;/p&gt;

&lt;p&gt;Not all of my ideas are good. I first started on a FUSE snapshotter. Turns out
FUSE kinda sucks&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;so fuse calls &lt;code&gt;getxattr&lt;/code&gt; 2x the amount it calls &lt;code&gt;lookup&lt;/code&gt; even if the damn inodes have no xattrs&amp;hellip;. and it has to go back and forth from kernel to userspace to do it&amp;hellip; I need a drink.&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/961712246178099200?ref_src=twsrc%5Etfw&#34;&gt;February 8, 2018&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I started playing with &lt;a href=&#34;https://github.com/moby/buildkit&#34;&gt;buildkit&lt;/a&gt;. It&amp;rsquo;s an
awesome project. &lt;a href=&#34;https://github.com/tonistiigi&#34;&gt;Tõnis Tiigi&lt;/a&gt; did a really
stellar job on it and I thought to myself, &amp;ldquo;I definitely want to use this as
the backend.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;Buildkit is more cache-efficient than Docker because it can execute multiple
build stages concurrently with its internal DAG.&lt;/p&gt;

&lt;p&gt;Then I stumbled upon
&lt;a href=&#34;https://github.com/AkihiroSuda/buildkit_poc/commit/511c7e71156fb349dca52475d8c0dc0946159b7b&#34;&gt;Akihiro Suda&amp;rsquo;s patches for an unprivileged Buildkit&lt;/a&gt;.
This was &lt;em&gt;perfect&lt;/em&gt; for my use case.&lt;/p&gt;

&lt;p&gt;I owe all these fine folks so much for the great work I got to build on top of.
:)&lt;/p&gt;

&lt;p&gt;And thus came &lt;a href=&#34;https://github.com/genuinetools/img&#34;&gt;img&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So that was all fine and dandy, and it works great unprivileged&amp;hellip; on my
host. I&amp;rsquo;m a huge fan of desktop tools, and this actually filled a large void
in my tooling: now I can build images unprivileged on my host without Docker.&lt;/p&gt;

&lt;p&gt;But I still have to make this work in Kubernetes so I can make Gabe happy and
fulfill my dreams of eating more pineapple and jalapeno pizzas at Kubecons.&lt;/p&gt;

&lt;h2 id=&#34;why-is-this-problem-so-hard&#34;&gt;Why is this problem so hard?&lt;/h2&gt;

&lt;p&gt;Let me go over in detail some of the patches needed to even make this work
unprivileged on my host.&lt;/p&gt;

&lt;p&gt;For one, we need &lt;code&gt;subuid&lt;/code&gt; and &lt;code&gt;subgid&lt;/code&gt; maps. See &lt;a href=&#34;https://github.com/opencontainers/runc/pull/1692&#34;&gt;@AkihiroSuda&amp;rsquo;s patch&lt;/a&gt;.
We also need to &lt;code&gt;setgroups&lt;/code&gt;. See &lt;a href=&#34;https://github.com/opencontainers/runc/pull/1693&#34;&gt;@AkihiroSuda&amp;rsquo;s patch for that as well&lt;/a&gt;.
Those allow us to use &lt;code&gt;apt&lt;/code&gt; in unprivileged user namespaces.&lt;/p&gt;
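&lt;p&gt;For reference, these maps are plain files; a sketch of what an entry looks like (the username and range here are just an example) granting a user 65536 subordinate IDs to map into a user namespace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# /etc/subuid and /etc/subgid share the format user:start:count
jess:100000:65536
&lt;/code&gt;&lt;/pre&gt;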

&lt;p&gt;Then, if we want to use the containerd snapshotter backends and actually mount
the filesystems as we diff them, we need unprivileged mounting, which
can only be done from &lt;em&gt;inside&lt;/em&gt; a user and mount namespace. So we need to do
this at the start of our binary before we do anything else.&lt;/p&gt;

&lt;p&gt;Granted, mounting is not a requirement for building docker images. You can always
go the route of &lt;a href=&#34;https://github.com/cyphar/orca-build&#34;&gt;orca-build&lt;/a&gt; and
&lt;a href=&#34;https://github.com/openSUSE/umoci&#34;&gt;umoci&lt;/a&gt; and not mount at all. &lt;a href=&#34;https://umo.ci/&#34;&gt;umoci&lt;/a&gt;
is also an unprivileged image builder and was made long before mine
by the talented &lt;a href=&#34;https://github.com/cyphar&#34;&gt;Aleksa Sarai&lt;/a&gt;, who is also
responsible for a lot of the rootless containers work upstream in runc.&lt;/p&gt;

&lt;h2 id=&#34;getting-this-to-work-in-containers&#34;&gt;Getting this to work &lt;em&gt;in&lt;/em&gt; containers&amp;hellip;&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;img&lt;/code&gt; works on my host which is all fine and dandy but I gotta help my k8s pals do
their builds&amp;hellip;&lt;/p&gt;

&lt;p&gt;Enter the next problem. For the record, all these problems apply to any
builder that is using runc to launch containers as an unprivileged user.&lt;/p&gt;

&lt;p&gt;The next issue involved &lt;a href=&#34;https://github.com/opencontainers/runc/issues/1658&#34;&gt;not being able to mount proc inside a Docker container&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My first thought was, &amp;ldquo;well, it must be something Docker is doing.&amp;rdquo; So I isolated
the problem, put it in a container, and ten minutes into the rabbit
hole I realized it was the fact that Docker sets paths inside &lt;code&gt;/proc&lt;/code&gt; to be
masked and read-only by default, preventing me from mounting.&lt;/p&gt;

&lt;p&gt;Duh, I thought to
myself. Remember that thing we never thought we&amp;rsquo;d need&amp;hellip; well, we need it.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;&amp;quot;We&amp;#39;ll never need this&amp;quot;&lt;br&gt;&lt;br&gt;&amp;quot;Fuck, we need that&amp;quot;&lt;/p&gt;&amp;mdash; julia ferraioli (@juliaferraioli) &lt;a href=&#34;https://twitter.com/juliaferraioli/status/970396059871666176?ref_src=twsrc%5Etfw&#34;&gt;March 4, 2018&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;You can find all the fun details on &lt;a href=&#34;https://github.com/opencontainers/runc/issues/1658#issuecomment-373122073&#34;&gt;opencontainers/runc#1658&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Well, this blows. I could obviously just run the container as &lt;code&gt;--privileged&lt;/code&gt;, but
that&amp;rsquo;s really stupid and defeats the whole point of this exercise. I did not
want to add any extra capabilities or any host devices, which is exactly what
&lt;code&gt;privileged&lt;/code&gt; does&amp;hellip; gross.&lt;/p&gt;

&lt;p&gt;So I opened an &lt;a href=&#34;https://github.com/moby/moby/issues/36597&#34;&gt;issue on Docker&lt;/a&gt; and
&lt;a href=&#34;https://github.com/moby/moby/issues/36644&#34;&gt;made a patch&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Okay so problem solved. Wait&amp;hellip; no&amp;hellip; now I gotta pull that option through to
kubernetes&amp;hellip;&lt;/p&gt;

&lt;p&gt;So I opened a proposal there: &lt;a href=&#34;https://github.com/kubernetes/community/pull/1934&#34;&gt;kubernetes/community#1934&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And I made a patch just for playing with it on my fork:
&lt;a href=&#34;https://github.com/jessfraz/kubernetes/tree/rawproc&#34;&gt;jessfraz/kubernetes#rawproc&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Okay now I want to try it in a cluster&amp;hellip;
enter &lt;code&gt;acs-engine&lt;/code&gt;. I made a branch there as well for easily combining together
all my patches for testing: &lt;a href=&#34;https://github.com/jessfraz/acs-engine/tree/rawaccess&#34;&gt;jessfraz/acs-engine#rawaccess&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is a yaml file you can use to deploy and try it:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    run: img
  name: img
  annotations:
    container.apparmor.security.beta.kubernetes.io/img: unconfined
spec:
  securityContext:
    runAsUser: 1000
  initContainers:
    # This container clones the desired git repo to the EmptyDir volume.
    - name: git-clone
      image: r.j3ss.co/jq
      args:
        - git
        - clone
        - --single-branch
        - --
        - https://github.com/jessfraz/dockerfiles
        - /repo # Put it in the volume
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: git-repo
          mountPath: /repo
  containers:
  - image: r.j3ss.co/img
    imagePullPolicy: Always
    name: img
    resources: {}
    workingDir: /repo
    command:
    - img
    - build
    - -t
    - irssi
    - irssi/
    securityContext:
      rawProc: true
    volumeMounts:
    - name: cache-volume
      mountPath: /tmp
    - name: git-repo
      mountPath: /repo
  volumes:
  - name: cache-volume
    emptyDir: {}
  - name: git-repo
    emptyDir: {}
  restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;so-is-this-secure&#34;&gt;So is this secure?&lt;/h2&gt;

&lt;p&gt;Well I am running that pod as user 1000. Granted it does have access to a raw
proc without masks&amp;hellip; the nested containers do not. The nested containers
have &lt;code&gt;/proc&lt;/code&gt; set as read-only with masked paths. The nested containers
also use a default seccomp profile denying privileged operations that should not
be allowed.&lt;/p&gt;

&lt;p&gt;Your main concern here is &lt;em&gt;my code&lt;/em&gt; and the code in buildkit and runc.
Personally I think that&amp;rsquo;s fine because I obviously trust myself, but you are
more than welcome to audit it and open bugs and/or patches.&lt;/p&gt;

&lt;p&gt;If you randomly generate different users for all your pod builds to run under,
then you are relying on the user isolation of Linux itself.&lt;/p&gt;

&lt;p&gt;If you are running a cluster inside your organization, it&amp;rsquo;s unlikely someone is
going to waste a kernel 0day popping your cluster from within your org.&lt;/p&gt;

&lt;p&gt;This is much better than the current situation where people are mounting the
docker socket into containers and everything is running as root.&lt;/p&gt;

&lt;p&gt;You can even use a &lt;a href=&#34;https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups&#34;&gt;Pod Security Policy&lt;/a&gt;
and set &lt;code&gt;MustRunAs&lt;/code&gt; to make sure all your pods are being run as users within
a certain range of uids.&lt;/p&gt;
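
&lt;p&gt;A minimal sketch of such a policy (the name and uid range here are just
illustrative, not from an actual cluster) might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: non-root-builders
spec:
  privileged: false
  runAsUser:
    # Reject any pod that tries to run outside this uid range.
    rule: MustRunAs
    ranges:
      - min: 1000
        max: 65535
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - emptyDir
&lt;/code&gt;&lt;/pre&gt;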

&lt;p&gt;You are effectively as safe as any other non-root
user running on a shared machine.&lt;/p&gt;

&lt;p&gt;If you are running random builds from users off the internet I would suggest
using VMs. You can use my patches to acs-engine to run all your pods in Intel&amp;rsquo;s
Clear Containers and you would then have hardware isolation for
your little builders :) You just need to use
&lt;a href=&#34;https://github.com/Azure/acs-engine/blob/master/examples/kubernetes-clear-containers.json&#34;&gt;this config&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And that ends the most epic yak shave ever, minus the patches all being merged
upstream. Thanks for playing. Feel free to try it out on Azure with my branch
to acs-engine. That was a lot of patching and I&amp;rsquo;m tired. Peace.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Nerd Sniped by BINFMT_MISC</title>
                <link>https://blog.jessfraz.com/post/nerd-sniped-by-binfmt_misc/</link>
                <pubDate>Sun, 04 Mar 2018 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/nerd-sniped-by-binfmt_misc/</guid>
                    <description>&lt;p&gt;This is a story about how I got nerd sniped by
&lt;a href=&#34;https://blog.cloudflare.com/using-go-as-a-scripting-language-in-linux/&#34;&gt;a blog post from Cloudflare Engineering&lt;/a&gt;.
The TLDR on their post is that you can script in Go if you use BINFMT_MISC in
the kernel.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://www.kernel.org/doc/html/v4.14/admin-guide/binfmt-misc.html&#34;&gt;BINFMT_MISC&lt;/a&gt; is really well documented and awesome. In the end, all they had to do to
script in Go was to mount the &lt;code&gt;binfmt_misc&lt;/code&gt; filesystem:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, register the Go script binary format:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ echo &#39;:golang:E::go::/usr/local/bin/gorun:OC&#39; | sudo tee /proc/sys/fs/binfmt_misc/register
:golang:E::go::/usr/local/bin/gorun:OC
&lt;/code&gt;&lt;/pre&gt;
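
&lt;p&gt;In case you do not feel like clicking through to the kernel docs, the fields in
that register string are &lt;code&gt;:name:type:offset:magic:mask:interpreter:flags&lt;/code&gt;,
which for the Go case break down roughly as:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;# :golang:E::go::/usr/local/bin/gorun:OC
#
#  name         golang                entry under /proc/sys/fs/binfmt_misc/
#  type         E                     match on file extension (M = magic bytes)
#  offset, mask (empty)               only used for magic-byte matching
#  magic        go                    for type E, the extension (without the dot)
#  interpreter  /usr/local/bin/gorun  the binary to invoke
#  flags        OC                    O: open the file and pass the interpreter
#                                     an fd; C: use the file&#39;s credentials
#                                     (implies O)
&lt;/code&gt;&lt;/pre&gt;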

&lt;p&gt;Then you can &lt;code&gt;./&lt;/code&gt; any Go file on your host:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ chmod u+x helloscript.go
$ ./helloscript.go
Hello, world!
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;They go through all the extraordinary details of exit codes for the shell and
blah blah blah. It&amp;rsquo;s a great post; you should really read it. Do it, go read it,
then come back here and I will take it to 11.&lt;/p&gt;

&lt;p&gt;&amp;hellip;&lt;/p&gt;

&lt;p&gt;Okay, cool, you are back. That post was dope, right?&lt;/p&gt;

&lt;p&gt;I kinda want to do this with all languages. Because I LOVE SCRIPTING. Have you
seen my &lt;a href=&#34;https://github.com/jessfraz/dotfiles&#34;&gt;cloud native dotfiles&lt;/a&gt;? My bash
scripts smell like roses.&lt;/p&gt;

&lt;p&gt;Right, so I want to do this with &lt;em&gt;all&lt;/em&gt; languages&amp;hellip; but what I also hate is
installing shit on my host. Ew, we have containers for those silly things.
Luckily, I know a thing or two about containers&amp;hellip;&lt;/p&gt;

&lt;p&gt;A few years ago I made a project called
&lt;a href=&#34;https://github.com/jessfraz/binctr&#34;&gt;binctr&lt;/a&gt;.
It creates fully static, unprivileged, self-contained containers as
executable binaries. (Wow that was a lot of words, let&amp;rsquo;s break it down.) What
&lt;code&gt;binctr&lt;/code&gt; does is embed an entire container image (aka rootfs) &lt;em&gt;into&lt;/em&gt; a fully
static binary and when you execute the binary it will unpack the image and run
it as a container. So you get containers without a daemon or privileges and
without even having the image for the rootfs of the container. You just need
this one binary.&lt;/p&gt;

&lt;p&gt;(Huge thanks to &lt;a href=&#34;https://twitter.com/lordcyphar&#34;&gt;@lordcyphar&lt;/a&gt; who got rootless
containers into runc so I could actually archive my gross hack for &lt;code&gt;binctr&lt;/code&gt;.)&lt;/p&gt;

&lt;p&gt;Kinda seems like the perfect match for trying to use all languages with
BINFMT_MISC. So I tried it.&lt;/p&gt;

&lt;p&gt;(Preface: this post should not be tried at home, which is why I did not
unarchive &lt;code&gt;binctr&lt;/code&gt;; I am merely showing a different, very crazy abstraction.)&lt;/p&gt;

&lt;p&gt;I put common lisp in a container. Why common lisp? Well I could do this with
&lt;em&gt;any language&lt;/em&gt; and I&amp;rsquo;m a bit insane, haven&amp;rsquo;t you noticed&amp;hellip;&lt;/p&gt;

&lt;p&gt;Then I embedded the image into a binary with &lt;code&gt;binctr&lt;/code&gt;. I made one slight
modification to the spec in &lt;code&gt;binctr&lt;/code&gt; that allowed me to use local files,
basically so I could get the script into the container when the executable is
run pointing at the file.&lt;/p&gt;

&lt;p&gt;Then I registered my common lisp binary format with BINFMT_MISC&amp;hellip;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ echo &#39;:clisp:E::lisp::/usr/local/bin/clisp:OC&#39; | sudo tee /proc/sys/fs/binfmt_misc/register
:clisp:E::lisp::/usr/local/bin/clisp:OC
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;code&gt;/usr/local/bin/clisp&lt;/code&gt; is just my &lt;code&gt;binctr&lt;/code&gt; generated binary with common lisp.&lt;/p&gt;

&lt;p&gt;And boom, now I can &amp;ldquo;dot slash&amp;rdquo; any &lt;code&gt;.lisp&lt;/code&gt; file and it will run in my common
lisp container.&lt;/p&gt;

&lt;p&gt;Obviously, my container needed to be packaged with any dependencies and packages
I needed, but I didn&amp;rsquo;t need to install any of that shit on my host, so I consider
it a win.&lt;/p&gt;

&lt;p&gt;Imagine if an entire OS had all the languages packaged this way so that
everything could be &amp;ldquo;dot slashed&amp;rdquo; and executed but without actually installing
the language to your host operating system.&lt;/p&gt;

&lt;p&gt;I think it would be dope.&lt;/p&gt;

&lt;p&gt;Thanks for tuning in for this crazy blog post. Catch ya later.
Hacker news, you can shove your comments right up your&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Personal Infrastructure</title>
                <link>https://blog.jessfraz.com/post/personal-infrastructure/</link>
                <pubDate>Sat, 16 Dec 2017 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/personal-infrastructure/</guid>
                    <description>

&lt;p&gt;This post is kind of like &amp;ldquo;part two&amp;rdquo; in my series on all the weird things I do
for my personal infrastructure. If you missed &amp;ldquo;part one&amp;rdquo;, you should check out
&lt;a href=&#34;https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/&#34;&gt;Home Lab is the Dopest Lab&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I run a lot of little things to make my life easier, like a CI, some bots, and
a bunch of services just for the lolz. This post will go over all of those. These
run scattered across my NUCs and the cloud.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s start with the most useful.&lt;/p&gt;

&lt;h3 id=&#34;continuous-integration&#34;&gt;Continuous Integration&lt;/h3&gt;

&lt;p&gt;I host my own continuous integration server. Yes, you guessed it&amp;hellip; it&amp;rsquo;s Jenkins.
I use the Jenkins DSL plugin to keep everything in sync. You can find all my
DSLs in my repo &lt;a href=&#34;https://github.com/jessfraz/jenkins-dsl&#34;&gt;github.com/jessfraz/jenkins-dsl&lt;/a&gt;.
This has all the configurations for views, keeps forks up to date, mirrors all my
repositories to private git (more on this in &lt;a href=&#34;#git-server&#34;&gt;git&lt;/a&gt;),
builds all Dockerfiles to push to Docker Hub and my private registry (more on
this in &lt;a href=&#34;#private-docker-registry&#34;&gt;private docker registry&lt;/a&gt;), and runs a bunch of
maintenance scripts.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&#34;https://github.com/jessfraz/jenkins-dsl/blob/master/Makefile&#34;&gt;Makefile&lt;/a&gt; in
this repo calls out to bash scripts which generate new DSLs for any new GitHub
repos I create. Yep I even generate the automation&amp;hellip;&lt;/p&gt;

&lt;p&gt;There&amp;rsquo;s a bunch of other fun things in there as well that you can discover by
poking around yourself.&lt;/p&gt;

&lt;p&gt;I host my own postfix server alongside Jenkins. You
can find the postfix docker image at &lt;code&gt;r.j3ss.co/postfix&lt;/code&gt; or the &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/tree/master/postfix&#34;&gt;Dockerfile&lt;/a&gt;. It&amp;rsquo;s super minimal and less gross than literally every
other postfix image in existence.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name postfix \
    --net container:jenkins \
    -e &amp;quot;ROOT_ALIAS=root@blah.com&amp;quot; \
    -e &amp;quot;RELAY=[smtp-relay.gmail.com]:587&amp;quot; \
    -e &amp;quot;TLS=1&amp;quot; \
    -e &amp;quot;MY_DESTINATION=...., localhost&amp;quot; \
    -e &amp;quot;MAILNAME=blah.com&amp;quot; \
    r.j3ss.co/postfix
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;private-docker-registry&#34;&gt;Private Docker Registry&lt;/h3&gt;

&lt;p&gt;I host my own private docker registry with my own notary server and authentication
server. Why? Well because about 4 years ago when I started using docker, Docker
Hub was super slow and I came to love having my own super fast one.&lt;/p&gt;

&lt;p&gt;I still push all the images to both Docker Hub and my registry and both are
signed so it&amp;rsquo;s really like I am using Docker Hub as my backup. Yay, highly
available&amp;hellip; just kidding.&lt;/p&gt;

&lt;p&gt;I made a pretty shitty UI for it. You can play with it at &lt;a href=&#34;https://r.j3ss.co/&#34;&gt;r.j3ss.co&lt;/a&gt;.
The UI is from my &lt;a href=&#34;https://github.com/jessfraz/reg&#34;&gt;reg&lt;/a&gt; project but the
server component lives in the &lt;a href=&#34;https://github.com/jessfraz/reg/tree/master/server&#34;&gt;server subdirectory&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The really nice thing about both the &lt;code&gt;reg&lt;/code&gt; command line and server is that you
can get a list of CVEs on an image.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cves.png&#34; alt=&#34;cves&#34; /&gt;&lt;/p&gt;

&lt;p&gt;I do this by hosting my own instance of &lt;a href=&#34;https://github.com/coreos/clair&#34;&gt;CoreOS&amp;rsquo;s Clair&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Most of my Dockerfiles live at
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles&#34;&gt;github.com/jessfraz/dockerfiles&lt;/a&gt; if
you are curious.&lt;/p&gt;

&lt;p&gt;I also went over all of this on my talk on &lt;a href=&#34;https://docs.google.com/presentation/d/17Hml1iFqdXElxOcrh9caQSC5px5mDgaS015Vhaz42ZY/edit?usp=sharing&#34;&gt;Over Engineering my
Laptop / Container Linux on the Desktop&lt;/a&gt;. This includes all the reasons why I have continuous integration as well.&lt;/p&gt;

&lt;p&gt;I have a script to clean up the registry of old images: &lt;a href=&#34;https://github.com/jessfraz/dotfiles/blob/master/bin/clean-registry&#34;&gt;clean-registry&lt;/a&gt;. This deletes old registry blobs that are not used
in the latest version of the tag. I don&amp;rsquo;t really care about old images and
I don&amp;rsquo;t want to have a huge registry filled with old shit. There is a &lt;a href=&#34;https://github.com/jessfraz/jenkins-dsl/blob/master/projects/maintenance/garbage_collect_registry.groovy&#34;&gt;Jenkins
DSL&lt;/a&gt; to run this.&lt;/p&gt;

&lt;h3 id=&#34;git-server&#34;&gt;Git Server&lt;/h3&gt;

&lt;p&gt;I host my own git server. You
can find the gitserver docker image at &lt;code&gt;r.j3ss.co/gitserver&lt;/code&gt; or the &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/tree/master/gitserver&#34;&gt;Dockerfile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name gitserver \
    -p 127.0.0.1:22:22 \
    -e &amp;quot;PUBKEY=$(cat ~/.ssh/authorized_keys)&amp;quot; \
    -v &amp;quot;/mnt/disks/gitserver:/home/git&amp;quot; \
    r.j3ss.co/gitserver
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It has its own UI that is run with Gitiles. You
can find the Gitiles docker image at &lt;code&gt;r.j3ss.co/gitiles&lt;/code&gt; or the &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/tree/master/gitiles&#34;&gt;Dockerfile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name gitiles \
    -p 127.0.0.1:8080:8080 \
    -e BASE_GIT_URL=&amp;quot;git@git.blah.com&amp;quot; \
    -e SITE_TITLE=&amp;quot;git.blah.com&amp;quot; \
    -v &amp;quot;/mnt/disks/gitserver:/home/git&amp;quot; \
    -w /home/git \
    r.j3ss.co/gitiles
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;ghb0t&#34;&gt;ghb0t&lt;/h3&gt;

&lt;p&gt;This is one of my most useful things. It&amp;rsquo;s a GitHub Bot to automatically delete
your fork&amp;rsquo;s branches after a pull request has been merged.&lt;/p&gt;

&lt;p&gt;I am &lt;em&gt;so&lt;/em&gt; OCD about keeping git repos clean and this is my little helper.&lt;/p&gt;

&lt;p&gt;Check out the repo: &lt;a href=&#34;https://github.com/jessfraz/ghb0t&#34;&gt;github.com/jessfraz/ghb0t&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I go to fork your thing and there is like 300 branches my face is like &lt;a href=&#34;https://t.co/JpdpO447KS&#34;&gt;pic.twitter.com/JpdpO447KS&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/823425160787021825?ref_src=twsrc%5Etfw&#34;&gt;January 23, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;h3 id=&#34;irc-bouncer&#34;&gt;IRC Bouncer&lt;/h3&gt;

&lt;p&gt;I host my own IRC Bouncer with ZNC.
You can find the ZNC docker image at &lt;code&gt;r.j3ss.co/znc&lt;/code&gt; or the &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/tree/master/znc&#34;&gt;Dockerfile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name znc \
    -p 6697:6697 \
    -v &amp;quot;/mnt/disks/znc:/home/user/.znc&amp;quot; \
    r.j3ss.co/znc
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;upmail&#34;&gt;upmail&lt;/h3&gt;

&lt;p&gt;This service provides email notifications for &lt;a href=&#34;https://github.com/sourcegraph/checkup&#34;&gt;sourcegraph/checkup&lt;/a&gt;.
If you are unfamiliar with checkup&amp;hellip; it&amp;rsquo;s distributed, lock-free, self-hosted
health checks and status pages, written in Go.&lt;/p&gt;

&lt;p&gt;I wrote a small little server to send email alerts for it and it lives
at &lt;a href=&#34;https://github.com/jessfraz/upmail&#34;&gt;github.com/jessfraz/upmail&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;ipython&#34;&gt;iPython&lt;/h3&gt;

&lt;p&gt;Not really all that novel but I also run an iPython server for doing little
script things in. I just use the &lt;code&gt;jupyter/minimal-notebook&lt;/code&gt; Docker image for that.&lt;/p&gt;

&lt;h3 id=&#34;conclusion&#34;&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;I run a lot of little shitty services for a personal &lt;a href=&#34;https://github.com/jessfraz/pastebinit&#34;&gt;pastebin&lt;/a&gt;
and other things
but those are all really less cool. My attention span for blog posts is about
5 minutes and we have runneth over so I am going to call it a day with
this&amp;hellip; until next time. Peace out.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Home Lab is the Dopest Lab</title>
                <link>https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/</link>
                <pubDate>Sun, 03 Dec 2017 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/home-lab-is-the-dopest-lab/</guid>
                    <description>

&lt;p&gt;I always have some random side project I am working on, whether it is making the
&lt;a href=&#34;https://drive.google.com/open?id=17Hml1iFqdXElxOcrh9caQSC5px5mDgaS015Vhaz42ZY&#34;&gt;world&amp;rsquo;s most over engineered desktop OS all running in containers&lt;/a&gt; or updating all my Makefiles to
be the definition of glittering beauty.&lt;/p&gt;

&lt;p&gt;This post is going to go over how I recently redid all my home networking and
ultimately how I got here:&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;ssh-ed into my dev NUC from a Pixelbook 39,000 feet, authenticated from an ssh key on a yubikey, the future is dope AF&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/933155384419897344?ref_src=twsrc%5Etfw&#34;&gt;November 22, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I used &lt;a href=&#34;https://unifi-sdn.ubnt.com/&#34;&gt;Unifi&lt;/a&gt; for everything and this is what I got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Point: &lt;a href=&#34;https://unifi-shd.ubnt.com/&#34;&gt;AP AC SHD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Switch: &lt;a href=&#34;https://www.ubnt.com/unifi-switching/unifi-switch-16-150w/&#34;&gt;Switch 16-150W&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Router: &lt;a href=&#34;https://www.ubnt.com/unifi-routing/usg/&#34;&gt;Security Gateway&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was so good looking when it arrived.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;My network is about to get real&amp;hellip; fast!!!&lt;br&gt;&lt;br&gt;This switch is (dare I say it) sexy as hell. &lt;a href=&#34;https://t.co/fmaLkW2AFB&#34;&gt;pic.twitter.com/fmaLkW2AFB&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/931304322100539395?ref_src=twsrc%5Etfw&#34;&gt;November 16, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I love fun side projects so obviously I set it all up right away. You need
a &amp;ldquo;controller&amp;rdquo; to have the nice Unifi UI. You can buy a cloud key but I wanted
to run the controller in a container just like &lt;a href=&#34;http://blog.dustinkirkland.com/2016/12/unifi-controller-in-lxd.html&#34;&gt;Dustin Kirkland&lt;/a&gt;. So I set about writing a Dockerfile for the
controller and it is now at &lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/unifi/Dockerfile&#34;&gt;r.j3ss.co/unifi&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run it with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;docker run -d --restart always \
    -v /etc/localtime:/etc/localtime:ro \
    --name unifi \
    --volume path/to/where/you/want/your/data:/config \
    -p 3478:3478/udp \
    -p 10001:10001/udp \
    -p 8080:8080 \
    -p 8081:8081 \
    -p 8443:8443 \
    -p 8843:8843 \
    -p 8880:8880 \
    r.j3ss.co/unifi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The web UI is at https://{ip}:8443. To adopt an access point and get it
to show up in the software, you will need to ssh into the AP and run:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;ssh ubnt@$AP_IP mca-cli set-inform http://$address:8080/inform
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I went crazy and made sure everything that needed to talk to each other
was on the same subnet and everything else was isolated into its own subnet.
I used VLANs to do this.&lt;/p&gt;

&lt;p&gt;Also be careful not to subnet yourself into a hole ;)&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;me just now: &amp;quot;this was my fear! sub-netting myself into a hole!&amp;quot;&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/936253292556050433?ref_src=twsrc%5Etfw&#34;&gt;November 30, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;The best thing about these APs is that they are Power over Ethernet! One cord, one
cord!!!&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-conversation=&#34;none&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;&amp;lt;naughty-by-nature&amp;gt;You down wit&amp;#39; PoE?&amp;lt;/naughty-by-nature&amp;gt;&lt;/p&gt;&amp;mdash; Dan McDonald (@kebesays) &lt;a href=&#34;https://twitter.com/kebesays/status/931306201014513665?ref_src=twsrc%5Etfw&#34;&gt;November 16, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;h3 id=&#34;nucs&#34;&gt;NUCs&lt;/h3&gt;

&lt;p&gt;I have a bunch of Intel NUCs thanks to &lt;a href=&#34;https://twitter.com/carolynvs&#34;&gt;Carolyn Van Slyck&lt;/a&gt; and &lt;a href=&#34;https://twitter.com/jbeda&#34;&gt;Joe
Beda&lt;/a&gt; for their thought leadership&amp;hellip; my wallet is
not happy with you two. Also check out &lt;a href=&#34;http://carolynvanslyck.com/blog/2017/10/my-little-cluster/&#34;&gt;Carolyn&amp;rsquo;s post on her NUC setup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;They have LEDs on the front that change color. There is a kernel driver for them.&lt;/p&gt;&amp;mdash; Joe Beda (@jbeda) &lt;a href=&#34;https://twitter.com/jbeda/status/920672603177607168?ref_src=twsrc%5Etfw&#34;&gt;October 18, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I hooked them all into my Switch (&lt;strong&gt;glorious&lt;/strong&gt;) and into their own subnet. Then
I went about setting up SSH for all of them.&lt;/p&gt;

&lt;p&gt;I use Yubikeys for authentication to GitHub and literally everything else where
that is possible, so I made a bot to sync any new ssh keys added to my GitHub to
the authorized keys on my server. It lives at &lt;a href=&#34;https://github.com/jessfraz/sshb0t&#34;&gt;github.com/jessfraz/sshb0t&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I would &lt;strong&gt;ONLY&lt;/strong&gt; recommend doing that if you have two-factor auth turned on so
you ensure no one but you can access your account. And honestly if someone
gets into my GitHub account I am going to have wayyyy worse issues than them
getting into my NUCs.&lt;/p&gt;

&lt;p&gt;I have ssh keys on Yubikeys that I set up. There is a &lt;a href=&#34;https://github.com/drduh/YubiKey-Guide&#34;&gt;really great guide to
doing this on GitHub&lt;/a&gt; so I am not going
to repeat it.&lt;/p&gt;

&lt;p&gt;I have dockerfiles for all the Yubikey tools you need to set it up in my
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles&#34;&gt;dockerfiles repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example you can jump into a container with &lt;code&gt;ykman&lt;/code&gt; with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;docker run --rm -it \
    -v /etc/localtime:/etc/localtime:ro \
    --device /dev/usb \
    --device /dev/bus/usb \
    --name ykman \
    r.j3ss.co/ykman bash
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This works for all the other docker images like &lt;code&gt;ykpersonalize&lt;/code&gt; etc. If you get
stuck all the commands are in my dotfile aliases at
&lt;a href=&#34;https://github.com/jessfraz/dotfiles/blob/master/.dockerfunc&#34;&gt;github.com/jessfraz/dotfiles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I like to require &amp;ldquo;touch to authenticate&amp;rdquo;. You can do this with:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# for every ssh connection
ykman openpgp touch aut on

# for signing
ykman openpgp touch sig on

# for encrypting
ykman openpgp touch enc on
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For ssh client authentication from the Chromebook Pixelbook, you just need the Smart Card
reader extension and you are good to go! You can find the guide on that from
the &lt;a href=&#34;https://chromium.googlesource.com/apps/libapps/+/master/nassh/doc/hardware-keys.md&#34;&gt;Chromium Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let me just answer the most common question I get&amp;hellip; No, I don&amp;rsquo;t use Crouton
on my Chromebooks I just ssh to the cloud or to my home lab. I like things
clean and minimal if you have not noticed already.&lt;/p&gt;

&lt;p&gt;Okay so that&amp;rsquo;s all for now. I&amp;rsquo;ll do another deep dive into the rest of my
infrastructure when I&amp;rsquo;m not overwhelmed with how much there is&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;There’s so much:&lt;br&gt;- scripts for setting up ssh on yubikeys&lt;br&gt;- unifi setup&lt;br&gt;- nuc provisioning &lt;br&gt;- auto updates &amp;amp; maintenance&lt;br&gt;- build infrastructure for all my images etc&lt;br&gt;- security of all the things&lt;br&gt;- cameras&lt;br&gt;- keeping all laptops up to date&lt;/p&gt;&amp;mdash; jessie frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/935667037145305088?ref_src=twsrc%5Etfw&#34;&gt;November 29, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;https://platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Windows for Linux Nerds</title>
                <link>https://blog.jessfraz.com/post/windows-for-linux-nerds/</link>
                <pubDate>Sat, 09 Sep 2017 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/windows-for-linux-nerds/</guid>
                    <description>

&lt;p&gt;I recently started a job at Microsoft. In my first week I have already
learned so much about Windows, I figured I would try to put it all into
writing. This post is coming to you from a Windows Subsystem for Linux console!&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I&amp;#39;m
headed to Seattle because I&amp;#39;M JOINING MICROSOFT, at the airport wearing
this awesome shirt from &lt;a href=&#34;https://twitter.com/listonb&#34;&gt;@listonb&lt;/a&gt;
&amp;amp; &lt;a href=&#34;https://twitter.com/Taylorb_msft&#34;&gt;@Taylorb_msft&lt;/a&gt; &lt;a
href=&#34;https://t.co/8rnAg1dsPd&#34;&gt;pic.twitter.com/8rnAg1dsPd&lt;/a&gt;&lt;/p&gt;&amp;mdash; jessie
frazelle (@jessfraz) &lt;a
href=&#34;https://twitter.com/jessfraz/status/904710675779514368&#34;&gt;September 4,
2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;New job and I got a Windows computer and a Linux computer! If you are new to my
blog, let me tell you: I love setting up a perfect desktop experience.
I&amp;rsquo;ve written a few posts on it (for Linux), you should check them out.
Setting up a Windows
computer is something I have not done in quite some time. I will
describe a bit how to &lt;a href=&#34;#setting-up-a-windows-machine-in-a-reproducible-way&#34;&gt;set up a windows machine in a reproducible
way&lt;/a&gt; at the end of this
post.&lt;/p&gt;

&lt;p&gt;I would like to thank &lt;a href=&#34;https://twitter.com/richturn_ms&#34;&gt;Rich Turner&lt;/a&gt;, &lt;a href=&#34;https://twitter.com/gigastarks&#34;&gt;John
Starks&lt;/a&gt;,
&lt;a href=&#34;https://twitter.com/Taylorb_msft&#34;&gt;Taylor Brown&lt;/a&gt;, and &lt;a href=&#34;https://twitter.com/VirtualScooley&#34;&gt;Sarah
Cooley&lt;/a&gt; for taking the time to explain
a lot of the following to me. :)&lt;/p&gt;

&lt;h2 id=&#34;windows-subsystem-for-linux-wsl&#34;&gt;Windows Subsystem for Linux (WSL)&lt;/h2&gt;

&lt;p&gt;Let&amp;rsquo;s start with Windows Subsystem for Linux, aka
WSL. Even &lt;a href=&#34;https://twitter.com/monkchips&#34;&gt;@monkchips&lt;/a&gt; wrote that since I joined
Microsoft &lt;a href=&#34;https://redmonk.com/jgovernor/2017/09/06/on-hiring-jessie-frazelle-microsofts-developer-advocacy-hot-streak-continues/&#34;&gt;&amp;ldquo;Linux
Subsystem for Windows will definitely be getting
a workout.&amp;rdquo;&lt;/a&gt;
I am super excited about Windows Subsystem for Linux. It is one of the coolest
pieces of tech I&amp;rsquo;ve seen since I started using Docker.&lt;/p&gt;

&lt;p&gt;First, a little background on how WSL works&amp;hellip;&lt;/p&gt;

&lt;p&gt;You can learn a lot more about this from the
&lt;a href=&#34;https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subsystem-for-linux-overview/&#34;&gt;Windows Subsystem for Linux Overview&lt;/a&gt;. I will go over some of the parts I found to be the most interesting.&lt;/p&gt;

&lt;p&gt;The Windows NT kernel was designed from the beginning to support running POSIX,
OS/2, and other subsystems. In the early days, these were just user-mode
programs that would interact with &lt;code&gt;ntdll&lt;/code&gt; to perform system calls. Since the
Windows NT kernel supported POSIX, there was already a &lt;code&gt;fork&lt;/code&gt; system call
implemented in the kernel. However, the Windows NT call for &lt;code&gt;fork&lt;/code&gt;,
&lt;code&gt;NtCreateProcess&lt;/code&gt;, is not directly compatible with the Linux syscall, so it has
some special handling you can read more about under &lt;a href=&#34;#system-calls&#34;&gt;System Calls&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are both user and kernel mode parts to WSL. Below is a diagram showing
the basic Windows kernel and user modes alongside the WSL user and kernel
modes.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/wsl.png&#34; alt=&#34;wsl&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The blue boxes represent kernel components and the green boxes are Pico Processes.
The LX Session Manager Service handles the life cycle of Linux instances.
LXCore and lxsys, &lt;code&gt;lxcore.sys&lt;/code&gt; and &lt;code&gt;lxss.sys&lt;/code&gt; respectively,
translate the Linux syscalls into NT APIs.&lt;/p&gt;

&lt;h3 id=&#34;pico-processes&#34;&gt;Pico Processes&lt;/h3&gt;

&lt;p&gt;As you can see in the diagram above, &lt;code&gt;init&lt;/code&gt; and &lt;code&gt;/bin/bash&lt;/code&gt; are
Pico processes. Pico processes work by having system calls and user mode
exceptions dispatched to a paired driver. Pico processes and drivers allow
Windows Subsystem for Linux to load executable ELF binaries into a Pico
process’ address space and execute them on top of a Linux-compatible layer of
system calls.&lt;/p&gt;

&lt;p&gt;You can read even more in depth on this from the &lt;a href=&#34;https://blogs.msdn.microsoft.com/wsl/2016/05/23/pico-process-overview/&#34;&gt;MSDN Pico Processes
post&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;system-calls&#34;&gt;System Calls&lt;/h3&gt;

&lt;p&gt;One of the first things I did in WSL was run a syscall fuzzer. I knew it would
break but it was interesting for the purposes of figuring out which syscalls
had been implemented without looking at the source. This was how I realized
PID and mount namespaces were already implemented into &lt;code&gt;clone&lt;/code&gt; and &lt;code&gt;unshare&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/wsl-unshare.gif&#34; alt=&#34;wsl-namespaces&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The WSL kernel drivers, &lt;code&gt;lxss.sys&lt;/code&gt; and &lt;code&gt;lxcore.sys&lt;/code&gt;, handle the Linux system call
requests and translate them to the Windows NT kernel. None of this code came
from the Linux kernel, it was all re-implemented by Windows engineers. This is
truly mind blowing.&lt;/p&gt;

&lt;p&gt;When a syscall is made from a Linux executable it gets
passed to &lt;code&gt;lxcore.sys&lt;/code&gt; which will translate it into the equivalent Windows NT
call. For example, &lt;code&gt;open&lt;/code&gt; to &lt;code&gt;NtOpenFile&lt;/code&gt; and &lt;code&gt;kill&lt;/code&gt; to
&lt;code&gt;NTTerminateProcess&lt;/code&gt;. If there is no mapping then the Windows kernel mode
driver will handle the request directly. This was the case for &lt;code&gt;fork&lt;/code&gt;, which
has &lt;code&gt;lxcore.sys&lt;/code&gt; prepare the process to be copied and then call the appropriate
Windows NT kernel APIs to create and copy the process.&lt;/p&gt;

&lt;p&gt;You can learn more from the &lt;a href=&#34;https://blogs.msdn.microsoft.com/wsl/2016/06/08/wsl-system-calls/&#34;&gt;MSDN System Calls
post&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;launching-windows-executables&#34;&gt;Launching Windows Executables&lt;/h3&gt;

&lt;p&gt;Since WSL runs Linux binaries natively (without a VM), it allows for some
really fun interactions.&lt;/p&gt;

&lt;p&gt;You can actually spawn Windows binaries from WSL. Linux ELF binaries get
handled by &lt;code&gt;lxcore.sys&lt;/code&gt; and &lt;code&gt;lxss.sys&lt;/code&gt; as described above and Windows binaries
go through the typical Windows userspace.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cmd-exe.gif&#34; alt=&#34;cmd.exe&#34; /&gt;&lt;/p&gt;

&lt;p&gt;You can even launch Windows GUI apps this way! Imagine a Linux setup
where you can launch PowerPoint without a VM&amp;hellip;. well this is it!!&lt;/p&gt;

&lt;h3 id=&#34;launching-x-applications&#34;&gt;Launching X Applications&lt;/h3&gt;

&lt;p&gt;You can also run X Applications in WSL. You just need an X server. I used
&lt;a href=&#34;https://sourceforge.net/projects/vcxsrv/&#34;&gt;&lt;code&gt;vcxsrv&lt;/code&gt;&lt;/a&gt; to try it out. I run
&lt;code&gt;i3&lt;/code&gt; on all my Linux machines and tried it out in WSL like my awesome coworker &lt;a href=&#34;https://twitter.com/bketelsen&#34;&gt;Brian Ketelsen&lt;/a&gt;
did in his &lt;a href=&#34;https://www.brianketelsen.com/i3-windows/&#34;&gt;blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/wsl-i3.jpg&#34; alt=&#34;wsl-i3&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The hidpi is a little gross but if you play with the settings for the X server
you can get it to a tolerable place. While I think this is neat for running
whatever X applications you love, personally I am going to stick to
using &lt;code&gt;tmux&lt;/code&gt; as my entrypoint for WSL and using the Windows GUI apps I need vs.
Linux X applications. This just feels less heavy (remember, I love minimal)
and I haven&amp;rsquo;t come across an X application I can not live without for the
time being. It&amp;rsquo;s nice to know X applications can work when I do need something
though. :)&lt;/p&gt;

&lt;h3 id=&#34;pain-points&#34;&gt;Pain Points&lt;/h3&gt;

&lt;p&gt;There are still quite a few pain points with using Windows Subsystem for Linux,
but it&amp;rsquo;s important to remember it is still early days.
So that you all have an idea of what to expect, I will list
them here and we can watch how they improve in future builds. Each item links to
the respective GitHub issue.&lt;/p&gt;

&lt;p&gt;Keep in mind, I am using the default Windows console for everything. It has
improved significantly since I played with it 2 years ago while we were
working on porting the Docker client and daemon to Windows. :)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Microsoft/BashOnWindows/issues/235&#34;&gt;&lt;strong&gt;Copy/Paste&lt;/strong&gt;&lt;/a&gt;:
I am used to using &lt;code&gt;ctrl-shift-v&lt;/code&gt; and &lt;code&gt;ctrl-shift-c&lt;/code&gt; for copy
paste in a terminal and of course those don&amp;rsquo;t work. From what I can tell
&lt;code&gt;enter&lt;/code&gt; is copy&amp;hellip; supa weird&amp;hellip; and &lt;code&gt;ctrl-v&lt;/code&gt; says it&amp;rsquo;s paste. Of course it
doesn&amp;rsquo;t work for me. I can get paste to work by two-finger clicking in the
term, but that does not work in &lt;code&gt;vim&lt;/code&gt; and it&amp;rsquo;s a pretty weird interaction.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Microsoft/BashOnWindows/issues/279&#34;&gt;&lt;strong&gt;Scroll&lt;/strong&gt;&lt;/a&gt;:
This might just be a &lt;em&gt;huge&lt;/em&gt; pet peeve of mine but the scroll
should not be able to scroll down to nothing. This happens all the time by
accident for me with the mouse and I have no idea why the terminal is
rendering more space down there.
Also typing after I have scrolled should return me back to the
console place where I am typing. It unfortunately does not.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Microsoft/BashOnWindows/issues/873&#34;&gt;&lt;strong&gt;Files Slow&lt;/strong&gt;&lt;/a&gt;:
Saving a lot of files to disk is super slow. This applies for
example to git clones, unpacking tarballs and more. Windows is not used to
applications that save a lot of files so this is being worked on to be more
performant. Obviously the unix way of &amp;ldquo;everything is a file&amp;rdquo; does not scale
well when saving a lot of small files is super slow.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Microsoft/BashOnWindows/issues/1051&#34;&gt;&lt;strong&gt;Sharing Files between Windows and
WSL&lt;/strong&gt;&lt;/a&gt;:
Right now, like I pointed out,
your Windows filesystem is mounted as &lt;code&gt;/mnt/c&lt;/code&gt; in WSL. But you can&amp;rsquo;t quite
yet have a git repo cloned in WSL and then also edit it from Windows. The VolFS
file system, which backs all file paths that don&amp;rsquo;t begin with &lt;code&gt;/mnt&lt;/code&gt;, such as &lt;code&gt;/home&lt;/code&gt;, is
much closer to Linux standards. If you need to access files in VolFS,
you can use &lt;code&gt;bash.exe&lt;/code&gt; to copy them somewhere under &lt;code&gt;/mnt/c&lt;/code&gt;,
use Windows to do whatever on them, then use &lt;code&gt;bash.exe&lt;/code&gt; to copy them back
when you are done. You can also call Visual Studio Code on the file from WSL
and that will work. :)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;setting-up-a-windows-machine-in-a-reproducible-way&#34;&gt;Setting Up a Windows Machine in a Reproducible Way&lt;/h2&gt;

&lt;p&gt;This was super important to me since I am used to Linux where everything is
scriptable and I have scripts for starting from a blank machine to my exact
perfect setup. A few people mentioned I should check out
&lt;a href=&#34;http://boxstarter.org&#34;&gt;boxstarter.org&lt;/a&gt; for making this possible on Windows.&lt;/p&gt;

&lt;p&gt;Turns out it works super well! My gist for my machine lives &lt;a href=&#34;https://gist.github.com/jessfraz/7c319b046daa101a4aaef937a20ff41f&#34;&gt;on
github&lt;/a&gt;.
There is another powershell script there for uninstalling a few programs.
I love all things minimal so I like to uninstall applications I will never use.
I also learned some cool powershell commands for listing all your installed
applications.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-powershell&#34;&gt;#--- List all installed programs --#
Get-ItemProperty HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate |
    Format-Table -AutoSize

#--- List all store-installed programs --#
Get-AppxPackage |
    Select-Object Name, PackageFullName, Version |
    Format-Table -AutoSize
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I am going to be scripting more of this out in the future with regard to
pinning applications to the taskbar in powershell and a bunch of other
settings. Stay tuned.&lt;/p&gt;

&lt;p&gt;Overall, I hope you now understand some basics around Windows Subsystem for
Linux and are as excited as I am to see it grow and evolve in the future!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>A Rant on Usable Security</title>
                <link>https://blog.jessfraz.com/post/a-rant-on-usable-security/</link>
                <pubDate>Thu, 27 Jul 2017 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/a-rant-on-usable-security/</guid>
                    <description>

&lt;p&gt;I recently gave a talk at DevOps Days
(&lt;a href=&#34;https://docs.google.com/a/jessfraz.com/presentation/d/1QnakgUC8AaNydPZCmKGYYja8gs2WoHbHRSjioIVdD9g/edit?usp=drivesdk&#34;&gt;slides&lt;/a&gt;)
and it had a pretty great response. I&amp;rsquo;m still pretty care-mad about the topics
it covered so I figured I would turn some key points from it into a blog post.&lt;/p&gt;

&lt;p&gt;The overall outline of the talk covered the past, present, and future of
usable security. Let&amp;rsquo;s start with the past.&lt;/p&gt;

&lt;h2 id=&#34;the-past&#34;&gt;The Past&lt;/h2&gt;

&lt;p&gt;A lot of the security tooling of the past (that we still use today)
requires users to jump through a lot of hoops or learn a hard-to-grok interface.
One of the examples I used was GPG. Contrary to popular opinion, I actually
don&amp;rsquo;t find GPG entirely unusable. I obviously agree that it could be easier
to use, rotate keys, revoke keys blah blah blah. While I find it not exactly
terrible, I can see and completely understand why the majority of
criticism I hear about GPG is that it is hard to use.&lt;/p&gt;

&lt;p&gt;There is a point at which better security comes at the expense of convenience.
This needs to stop happening. Stop compromising convenience for security.
Instead find the right balance between the two. Doing this takes collaboration
from both security engineers and software engineers.&lt;/p&gt;

&lt;p&gt;Dave Cheney recently had a great tweet.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Why is all software shit? Today I discovered the &lt;a href=&#34;https://twitter.com/duosec&#34;&gt;@duosec&lt;/a&gt; API returns 200 even if someone denies the 2fa request.&lt;/p&gt;&amp;mdash; Dαve Cheney (@davecheney) &lt;a href=&#34;https://twitter.com/davecheney/status/889725425781424129&#34;&gt;July 25, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I love this tweet because it reeks of the stench that only security engineers
built this API. Most software engineers I know would decide to use an HTTP
status code&amp;hellip; I mean that&amp;rsquo;s what they are for. ;)&lt;/p&gt;

&lt;p&gt;When you combine expertise in different areas you build better products. This is
not rocket science. However, egos tend to get in the way, as well as biases
towards people who know and like the same things you do. I assure you,
though, when security and software engineers work together
truly usable security will be the outcome.&lt;/p&gt;

&lt;h2 id=&#34;the-present&#34;&gt;The Present&lt;/h2&gt;

&lt;p&gt;A lot of the content for this portion of the talk focused on how containers make
securing your infrastructure easier. I will touch on some of that but if you
wish to know more you should check out the
&lt;a href=&#34;https://docs.google.com/a/jessfraz.com/presentation/d/1QnakgUC8AaNydPZCmKGYYja8gs2WoHbHRSjioIVdD9g/edit?usp=drivesdk&#34;&gt;slides&lt;/a&gt;
or some of my other blog posts on container security.&lt;/p&gt;

&lt;p&gt;Two key features in Docker are the default AppArmor and seccomp profiles.
AppArmor is a Linux Security Module and seccomp is a kernel syscall-filtering
facility; neither is exactly usable by someone who is unfamiliar with it.&lt;/p&gt;

&lt;p&gt;AppArmor can control and audit various process actions such as file
(read, write, execute, etc) and system functions (mount, network tcp, etc).
It has its own meta language, so to speak, and I actually have a repo that converts
the docs for it to a more readable format via a cron job:
&lt;a href=&#34;https://github.com/jessfraz/apparmor-docs&#34;&gt;github.com/jessfraz/apparmor-docs&lt;/a&gt;.
The default profile for AppArmor does super sane things like preventing writing to
&lt;code&gt;/proc/{num}&lt;/code&gt;, &lt;code&gt;/proc/sys&lt;/code&gt;, &lt;code&gt;/sys&lt;/code&gt; and preventing &lt;code&gt;mount&lt;/code&gt; to name a few.&lt;/p&gt;
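&lt;p&gt;For a flavor of what that meta language looks like, here is a small illustrative fragment in AppArmor&amp;rsquo;s profile syntax, loosely modeled on the docker-default rules (a sketch, not the verbatim profile):&lt;/p&gt;

```
# Illustrative fragment only, not the verbatim docker-default profile.
profile docker-default flags=(attach_disconnected,mediate_deleted) {
  network,
  capability,
  file,

  deny @{PROC}/sys/** w,    # no writing under /proc/sys
  deny /sys/** w,           # no writing under /sys
  deny mount,               # no mounting filesystems
}
```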

&lt;p&gt;Syscall filters allow an application to define
what syscalls it allows or denies. The default in Docker is a whitelist that I
initially wrote. Some of the key things it blocks are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;add_key&lt;/code&gt;, &lt;code&gt;keyctl&lt;/code&gt;, &lt;code&gt;request_key&lt;/code&gt;: Prevent containers from using the kernel
keyring, which is not namespaced. I wrote a blog post on
&lt;a href=&#34;https://blog.jessfraz.com/post/two-objects-not-namespaced-linux-kernel/&#34;&gt;Two Objects not Namespaced by the Linux Kernel&lt;/a&gt;
and the keyring was one I mentioned.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;clone&lt;/code&gt;, &lt;code&gt;unshare&lt;/code&gt;: Deny cloning new namespaces. Also gated by &lt;code&gt;CAP_SYS_ADMIN&lt;/code&gt;
for &lt;code&gt;CLONE_*&lt;/code&gt; flags, except &lt;code&gt;CLONE_USERNS&lt;/code&gt;. I specifically wanted to block
cloning new user namespaces inside containers because they are notorious
for being points of entry for kernel bugs.&lt;/li&gt;
&lt;/ul&gt;
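&lt;p&gt;The profile itself is just JSON consumed by the Docker daemon. A heavily trimmed sketch of its shape (the real profile whitelists several hundred syscalls) looks something like this:&lt;/p&gt;

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["clone"],
      "action": "SCMP_ACT_ALLOW",
      "args": [
        {
          "index": 0,
          "value": 2114060288,
          "valueTwo": 0,
          "op": "SCMP_CMP_MASKED_EQ"
        }
      ]
    }
  ]
}
```

&lt;p&gt;Anything not in the whitelist hits &lt;code&gt;defaultAction&lt;/code&gt; and fails with &lt;code&gt;EPERM&lt;/code&gt;; the &lt;code&gt;args&lt;/code&gt; filter on &lt;code&gt;clone&lt;/code&gt; is how the namespace-creating &lt;code&gt;CLONE_*&lt;/code&gt; flags get masked off.&lt;/p&gt;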

&lt;p&gt;There also is an
&lt;a href=&#34;https://github.com/moby/moby/blob/52f32818df8bad647e4c331878fa44317e724939/docs/security/seccomp.md#syscalls-blocked-by-the-default-profile&#34;&gt;entire document that I started in the docker repo&lt;/a&gt;
that outlines what we block and why.&lt;/p&gt;

&lt;p&gt;Having written the default seccomp profile for Docker I am pretty familiar with
how hard this would be for other people. It requires a deep knowledge of the
application being contained and the syscalls it requires. This was also a quite
terrifying feature to add to Docker. When I added it, Docker was already very
popular and if anything would break in a big way it would be on the front page
of hacker news and all the maintainers would have a very bad day. So turning
on something that will &lt;code&gt;EPERM&lt;/code&gt; by default if we left out any important syscall
is terrifying. I had stress nightmares for weeks. In the end everything went
much smoother than I feared but that was also after HEAVY HEAVY testing. Luckily
I run super obscure things in containers so I even caught that we left out &lt;code&gt;send&lt;/code&gt;
and &lt;code&gt;recv&lt;/code&gt; right before the release by running Skype (a 32 bit application) in
a container.&lt;/p&gt;

&lt;p&gt;By making a default for all containers, we can secure a very large number of
users without them even realizing it&amp;rsquo;s happening. This leads perfectly into
my ideas for the future and continuing this motion of making security
on by default and invisible to users.&lt;/p&gt;

&lt;h2 id=&#34;the-future&#34;&gt;The Future&lt;/h2&gt;

&lt;p&gt;I tend to have pretty weird brain child ideas and this is one of them.
I started thinking about where else a kernel feature like seccomp could easily
be integrated and used by a large number of people. The answer is&amp;hellip;
programming languages. I do work with the Go team and as a full content warning
none of this crazy that follows is in any way endorsed by them. ;)&lt;/p&gt;

&lt;p&gt;The idea I had is to do &lt;strong&gt;build-time generated&lt;/strong&gt; seccomp filters that will be
&lt;strong&gt;applied on run&lt;/strong&gt;.&lt;/p&gt;

&lt;h4 id=&#34;why-generate-seccomp-filters-at-build-time&#34;&gt;Why generate seccomp filters at &lt;strong&gt;build-time&lt;/strong&gt;?&lt;/h4&gt;

&lt;p&gt;Generating security filters/profiles at runtime has been done in the past
&amp;amp; failed&amp;hellip; over and over and over again. Something is always missed while
profiling the application. You cannot guarantee that everything that your
application will do will be called while in this profiling phase. Unless of
course you have 100% test coverage, which if you do: Good For You. When the
&amp;ldquo;thing that was missed&amp;rdquo; is called and blocked, users will just turn off the
&amp;ldquo;security.&amp;rdquo; This happens all the time with things like SELinux and AppArmor.&lt;/p&gt;

&lt;p&gt;By generating filters at build-time we can ensure ALL code is included in the
filter. I wrote a POC of this and I showed it at
&lt;a href=&#34;https://kiwicon.org/the-con/talks/#e253&#34;&gt;Kiwicon&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are three problems though.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Executing other binaries. I can&amp;rsquo;t know what syscalls the binary being called
is going to use so we are back at square one.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;package main

import (
    &amp;quot;fmt&amp;quot;
    &amp;quot;log&amp;quot;
    &amp;quot;os/exec&amp;quot;
)

func main() {
    cmd := exec.Command(&amp;quot;myprogram&amp;quot;)
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf(&amp;quot;%s\n&amp;quot;, out)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Plugins. This problem is solvable in that &lt;em&gt;if&lt;/em&gt; this feature was to exist
we could export at the plugin build time the seccomp filters to a
field in the ELF binary or something similar.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;package main

import (
    &amp;quot;fmt&amp;quot;
    &amp;quot;log&amp;quot;
    &amp;quot;plugin&amp;quot;
)

func main() {
    p, err := plugin.Open(&amp;quot;plugin_name.so&amp;quot;)
    if err != nil {
        log.Fatal(err)
    }
    v, err := p.Lookup(&amp;quot;V&amp;quot;)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf(&amp;quot;%#v\n&amp;quot;, v)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Sending arbitrary arguments to &lt;code&gt;syscall.RawSyscall&lt;/code&gt; and similar.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;package main

import (
    &amp;quot;fmt&amp;quot;
    &amp;quot;log&amp;quot;
    &amp;quot;os&amp;quot;
    &amp;quot;strconv&amp;quot;
    &amp;quot;syscall&amp;quot;
)

func main() {
    if len(os.Args) &amp;lt; 5 {
        log.Fatal(&amp;quot;must pass 4 arguments to syscall.RawSyscall&amp;quot;)
    }
    r1, r2, errno := syscall.RawSyscall(strToUintptr(os.Args[1]),
        strToUintptr(os.Args[2]),
        strToUintptr(os.Args[3]),
        strToUintptr(os.Args[4]))
    if errno != 0 {
        log.Fatalf(&amp;quot;errno: %#v&amp;quot;, errno)
    }
    fmt.Printf(&amp;quot;r1: %#v\nr2: %#v\n&amp;quot;, r1, r2)
}

func strToUintptr(s string) uintptr {
    n, err := strconv.ParseUint(s, 0, 64)
    if err != nil {
        log.Fatal(err)
    }
    return uintptr(n)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While this is not perfect by any stretch of the imagination I believe it should
open your mind to what &lt;em&gt;could&lt;/em&gt; be possible in the future. Hopefully my dream
of making binaries sandbox themselves will eventually get there. I know I won&amp;rsquo;t
stop until it does. ;) Overall, I would like you to remember to find the
right balance between secure AND usable. Don’t break users and get security
engineering and software engineering working together!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Two Objects not Namespaced by the Linux Kernel</title>
                <link>https://blog.jessfraz.com/post/two-objects-not-namespaced-linux-kernel/</link>
                <pubDate>Wed, 26 Apr 2017 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/two-objects-not-namespaced-linux-kernel/</guid>
                    <description>

&lt;p&gt;If you are new to my blog then you might be new to the concept of Linux kernel
namespaces. I suggest first reading
&lt;a href=&#34;https://blog.jessfraz.com/post/getting-towards-real-sandbox-containers/&#34;&gt;Getting Towards Real Sandbox Containers&lt;/a&gt;
and
&lt;a href=&#34;https://blog.jessfraz.com/post/containers-zones-jails-vms/&#34;&gt;Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Linux namespaces are one of the primitives that make up what is known as a
&amp;ldquo;container.&amp;rdquo; They control what a process can see. Cgroups, the other main
ingredient of &amp;ldquo;containers&amp;rdquo;, control what a process can use. But let&amp;rsquo;s focus for
this post on namespaces. The current set of namespaces in the kernel are:
mount, pid, uts, ipc, net, user, and cgroup. These all cover basically exactly what
they are named after. But what is not covered? Well, let&amp;rsquo;s go over two
of the things not namespaced by the Linux kernel.&lt;/p&gt;

&lt;h3 id=&#34;time&#34;&gt;Time&lt;/h3&gt;

&lt;p&gt;First, and my favorite to nerd out about, is &lt;strong&gt;time.&lt;/strong&gt; Now, it should go without
saying that &lt;em&gt;if&lt;/em&gt; you want to set the time in Linux you need &lt;code&gt;CAP_SYS_TIME&lt;/code&gt;. By
default you do not get this capability in Docker containers. The &lt;code&gt;settimeofday&lt;/code&gt;
and related syscalls are also blocked by the default seccomp profile in Docker.&lt;/p&gt;

&lt;p&gt;What happens if you do change the time in a container?&lt;/p&gt;

&lt;p&gt;Well, it&amp;rsquo;s not namespaced so the time on the host would change as well.
&amp;ldquo;But whaaaaa? I thought containers were just like a VM&amp;rdquo;, you ask. Again, you
should read my post
&lt;a href=&#34;https://blog.jessfraz.com/post/containers-zones-jails-vms/&#34;&gt;Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of my favorite questions I have been asked at a conference is &amp;ldquo;If you could
add any new namespace to Linux what would it be?&amp;rdquo; Obviously this is an awesome
question, totally up my alley, and not even a statement from someone trying to
prove to me &amp;ldquo;they know things.&amp;rdquo; But I digress, I always answer with &amp;ldquo;Time.&amp;rdquo;
There is no production use case for this, other than making more NTP
hell for yourself. I do believe there is a development use case: if you want to
change the time for a test running in one container but not mess with the other
tests running in other containers. What a fun way to make a chaos monkey for NTP!
:P&lt;/p&gt;

&lt;h3 id=&#34;kernel-keyring&#34;&gt;Kernel Keyring&lt;/h3&gt;

&lt;p&gt;The kernel keyring is another item not namespaced. There have been recent efforts
to &lt;a href=&#34;https://patchwork.kernel.org/patch/9394983/&#34;&gt;fix this for &lt;em&gt;user namespaces&lt;/em&gt;&lt;/a&gt;.
Again, the default Docker seccomp profile blocks these syscalls so you don&amp;rsquo;t
shoot yourself in the foot.&lt;/p&gt;

&lt;p&gt;What happens if you use the kernel keyring from within a container?&lt;/p&gt;

&lt;p&gt;Well, if root in one container stores keys in the keyring, any other container
on that same host can see them in its keyring, which is really just the same
exact keyring.&lt;/p&gt;

&lt;p&gt;All in all, I hope this proves once again that you need more than just
namespaces and cgroups to get any sort of &amp;ldquo;real&amp;rdquo; isolation with containers.
Please, please don&amp;rsquo;t disable seccomp or add extra capabilities you don&amp;rsquo;t need.
Happy containering! I must leave you with this gif&amp;hellip; :D&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/turn-back-time.gif&#34; alt=&#34;turn-back-time&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs</title>
                <link>https://blog.jessfraz.com/post/containers-zones-jails-vms/</link>
                <pubDate>Tue, 28 Mar 2017 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/containers-zones-jails-vms/</guid>
                    <description>

&lt;p&gt;I&amp;rsquo;m tired of having the same conversation over and over again with people so
I figured I would put it into a blog post.&lt;/p&gt;

&lt;p&gt;Many people ask me if I have tried or what I think of Solaris Zones / BSD Jails. The
answer is simply: I have tried them and I definitely like them. The conversation
then heads towards them telling me how Zones and Jails are far superior to
containers and that I should basically just give up with Linux containers and use VMs.&lt;/p&gt;

&lt;p&gt;Which to be honest is a bit forward to someone who has spent a large portion of
her career working with containers and trying to make containers more secure.
Here is what I tell them:&lt;/p&gt;

&lt;h3 id=&#34;the-design-of-solaris-zones-bsd-jails-vms-and-containers-are-very-different&#34;&gt;The Design of Solaris Zones, BSD Jails, VMs and containers are very different.&lt;/h3&gt;

&lt;p&gt;Solaris Zones, BSD Jails, and VMs are first class concepts. This is clear from
the &lt;a href=&#34;https://us-east.manta.joyent.com/jmc/public/opensolaris/ARChive/PSARC/2002/174/zones-design.spec.opensolaris.pdf&#34;&gt;Solaris Zone Design Spec&lt;/a&gt; and the &lt;a href=&#34;https://www.freebsd.org/doc/handbook/jails.html&#34;&gt;BSD Jails Handbook&lt;/a&gt;.
I hope it can go without saying that VMs are very much a first class object
without me having to link you somewhere :P.&lt;/p&gt;

&lt;p&gt;Containers on the other hand are not real things. I have said this in many
talks and I&amp;rsquo;m saying it again now.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;CONTAINERS ARE NOT A REAL THING!!! &lt;a href=&#34;https://twitter.com/jessfraz&#34;&gt;@jessfraz&lt;/a&gt; talking containers &lt;a href=&#34;https://twitter.com/hashtag/GoogleNext17?src=hash&#34;&gt;#GoogleNext17&lt;/a&gt; &lt;a href=&#34;https://t.co/gzxjNnSk2n&#34;&gt;pic.twitter.com/gzxjNnSk2n&lt;/a&gt;&lt;/p&gt;&amp;mdash; Jorge Silva (@thejsj) &lt;a href=&#34;https://twitter.com/thejsj/status/840295431779172352&#34;&gt;March 10, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;A &amp;ldquo;container&amp;rdquo; is just a term people use to describe a combination of Linux
namespaces and cgroups. &lt;em&gt;Linux namespaces and cgroups&lt;/em&gt; ARE first class objects.
NOT containers.&lt;/p&gt;

&lt;p&gt;I am trying to make this distinction very clear to make a point. The designs
are different. PERIOD.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s go over some of the things you can do with containers that you CANNOT do
with Jails or Zones or VMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sharing Namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since containers are made with specific building blocks of namespaces this
allows for doing some super neat things like sharing namespaces.&lt;/p&gt;

&lt;p&gt;There are many different namespaces but I will give a couple examples.&lt;/p&gt;

&lt;p&gt;This specific example can be seen in a demo by Arnaud Porterie from &lt;a href=&#34;https://www.youtube.com/watch?v=I7i4SY-iRkA&#34;&gt;our talk at
Dockercon EU in 2015&lt;/a&gt;. You can
have your application running in one container, then in a different
container sharing a net namespace you can run wireshark and inspect the packets
from the first container.&lt;/p&gt;

&lt;p&gt;You could also do the same with sharing a pid namespace, except instead of
running wireshark you can run strace and debug your application from an
entirely different container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sharing X socket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I assume if you are on my blog you are familiar with my posts on &lt;a href=&#34;https://blog.jessfraz.com/post/docker-containers-on-the-desktop/&#34;&gt;running
containers on your desktop&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;legos&#34;&gt;Legos&lt;/h3&gt;

&lt;p&gt;To really drive home a point I&amp;rsquo;m going to make an analogy describing each of
these things in terms of legos.&lt;/p&gt;

&lt;p&gt;VMs, Jails, and Zones are as if you bought the legos already put together AND
glued. So it&amp;rsquo;s basically the Death Star: you
don&amp;rsquo;t have to do any work, you get it pre-assembled out of the box. You can&amp;rsquo;t even take it apart.&lt;/p&gt;

&lt;p&gt;Containers come with just the pieces so while the box says to build the Death
Star, you are not tied to that. You can build two boats connected by a flipping
ocean and no one is going to stop you.&lt;/p&gt;

&lt;p&gt;This kind of flexibility allows for super awesome things but of course comes at
a price.&lt;/p&gt;

&lt;h3 id=&#34;complexity-bugs&#34;&gt;Complexity == Bugs&lt;/h3&gt;

&lt;p&gt;Now is the point where the person I would be having the conversation with starts
yelling at me that containers are not secure. Hello, thank you, I am aware.
Also if anyone gives a shit about actually fixing this, it&amp;rsquo;s me.&lt;/p&gt;

&lt;p&gt;Again, containers were not a top level design, they are something we build
&lt;em&gt;from&lt;/em&gt; Linux primitives. Zones, Jails, and VMs are designed as top level
isolation.&lt;/p&gt;

&lt;p&gt;The cool things I expressed above allow for a level of flexibility and control that Zones,
Jails, and VMs do not. By design.&lt;/p&gt;

&lt;p&gt;This extra complexity leads to bugs that lead to container escapes. Don&amp;rsquo;t get
me wrong, you could also escape a VM, Jail, or Zone, but their designs are not as
complicated as the combination of primitives that make up containers.
Less is more, and the less complexity you have, the less likely you are to have odd,
edge-case bugs.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/roll-safe.jpg&#34; alt=&#34;roll-safe&#34; /&gt;&lt;/p&gt;

&lt;p&gt;The point I am trying to make is that Jails, Zones, VMs, and containers were
designed and built in different ways. Containers are not a Linux isolation primitive; they
merely consume Linux primitives, which allows for some interesting interactions.
They are not perfect; nothing is.&lt;/p&gt;

&lt;p&gt;We can make them better by reducing some of the complexity and building
hardening features around them, which is a goal I have been pursuing and will
continue to pursue.&lt;/p&gt;

&lt;p&gt;You can get a sandbox level of isolation with containers, which I &lt;a href=&#34;https://blog.jessfraz.com/post/getting-towards-real-sandbox-containers/&#34;&gt;wrote
about in more detail here&lt;/a&gt;.
But this requires doing the work of building the Death Star from your pieces of
Seccomp, AppArmor, and SELinux profiles.&lt;/p&gt;
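&lt;p&gt;To make that concrete, here is a minimal, purely illustrative seccomp profile
(not a real-world profile; an actual app needs far more syscalls): it denies everything
by default and whitelists a handful of calls. You could hand something like it to
Docker with &lt;code&gt;--security-opt seccomp=profile.json&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  &amp;quot;defaultAction&amp;quot;: &amp;quot;SCMP_ACT_ERRNO&amp;quot;,
  &amp;quot;architectures&amp;quot;: [&amp;quot;SCMP_ARCH_X86_64&amp;quot;],
  &amp;quot;syscalls&amp;quot;: [
    {
      &amp;quot;names&amp;quot;: [&amp;quot;read&amp;quot;, &amp;quot;write&amp;quot;, &amp;quot;exit&amp;quot;, &amp;quot;exit_group&amp;quot;, &amp;quot;rt_sigreturn&amp;quot;],
      &amp;quot;action&amp;quot;: &amp;quot;SCMP_ACT_ALLOW&amp;quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;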

&lt;p&gt;I personally love Zones, Jails, and VMs and I think they all have a particular
use case. The confusion with containers primarily lies in assuming they fulfill
the same use case as the others, which they do not. Containers allow for a flexibility
and control that is not possible with Jails, Zones, or VMs. And THAT IS A FEATURE.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;/rant&amp;gt;&lt;/code&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Ultimate Linux on the Desktop</title>
                <link>https://blog.jessfraz.com/post/ultimate-linux-on-the-desktop/</link>
                <pubDate>Mon, 16 Jan 2017 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/ultimate-linux-on-the-desktop/</guid>
                    <description>

&lt;p&gt;Over the past couple of years I have set out to create the ultimate Linux on
the desktop experience for myself. Obviously everyone who runs Linux has their
own &lt;a href=&#34;https://misc.j3ss.co/gifs/ihaveopinionsaboutthings.gif&#34;&gt;opinions on things&lt;/a&gt;.
What this post will outline is &lt;em&gt;my&lt;/em&gt; ultimate Linux on the desktop experience.
So just remember that before you get your panties in a knot on HackerNews
because you live and die by Xmonad (I live and die by i3, fight me).&lt;/p&gt;

&lt;p&gt;First, you should already know that I run everything on my laptop in containers.
I outlined this in my posts about
&lt;a href=&#34;https://blog.jessfraz.com/post/docker-containers-on-the-desktop/&#34;&gt;Docker Containers on the Desktop&lt;/a&gt;
and
&lt;a href=&#34;https://blog.jessfraz.com/post/runc-containers-on-the-desktop/&#34;&gt;Runc Containers on the Desktop&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;base-os&#34;&gt;Base OS&lt;/h2&gt;

&lt;p&gt;I used to use Debian as my base OS but I recently decided to try and run
CoreOS&amp;rsquo; Container Linux on the desktop. Container Linux is made for servers,
so obviously it doesn&amp;rsquo;t have graphics drivers. I added them and made a few other
horrible tweaks that I&amp;rsquo;m sure would make some people at CoreOS cringe. I am not
proud of these things but overall it worked!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/coreos.png&#34; alt=&#34;coreos&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Mostly the changes for graphics drivers were the same
exact changes you would make installing Gentoo on your host: setting
&lt;code&gt;VIDEO_CARDS=&amp;quot;intel i915&amp;quot;&lt;/code&gt; in &lt;code&gt;/etc/portage/make.conf&lt;/code&gt;, &lt;code&gt;emerge&lt;/code&gt;-ing
&lt;code&gt;sys-kernel/linux-firmware&lt;/code&gt; etc, etc.&lt;/p&gt;

&lt;p&gt;Then I cut out the things I don&amp;rsquo;t need that only apply if you are using
Container Linux on a server: cluster management tools, support request
tools (lolz), etc. These were all pretty simple changes that I made in a new
&lt;code&gt;ebuild&lt;/code&gt; that I cloned from the &lt;code&gt;coreos-base/coreos ebuild&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I need to clean up the mess I&amp;rsquo;ve made of my forks of
the coreos build &lt;a href=&#34;https://github.com/jessfraz/scripts&#34;&gt;scripts&lt;/a&gt;,
&lt;a href=&#34;https://github.com/jessfraz/init&#34;&gt;init&lt;/a&gt;,
&lt;a href=&#34;https://github.com/jessfraz/coreos-overlay&#34;&gt;ebuilds&lt;/a&gt;,
&lt;a href=&#34;https://github.com/jessfraz/manifest&#34;&gt;manifest&lt;/a&gt;,
and &lt;a href=&#34;https://github.com/jessfraz/baselayout&#34;&gt;base layout&lt;/a&gt;. But you can checkout
the &lt;code&gt;desktop&lt;/code&gt; branch at each of those.&lt;/p&gt;

&lt;p&gt;Let me go over some of the benefits I get from using CoreOS&amp;rsquo; Container Linux as
my base OS.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I can build my own images; it&amp;rsquo;s Gentoo and I know &lt;code&gt;emerge&lt;/code&gt;, so I can
customize the base any way I want.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;CoreOS&amp;rsquo; Container Linux uses the same auto update system as ChromeOS, which
is all based on &lt;a href=&#34;https://github.com/google/omaha&#34;&gt;Google&amp;rsquo;s Omaha&lt;/a&gt;. So all
I need to do to have auto updates for my OS is continuously release the
modded version of Container Linux to an Omaha server, which I will host.
(Yes, I know I am insane to go through this much effort for my own laptop, but
whatever.)&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The filesystem is set up perfectly for running containers: there is
a read-only &lt;code&gt;/usr&lt;/code&gt; and a stateful read/write &lt;code&gt;/&lt;/code&gt;. The data stored on &lt;code&gt;/&lt;/code&gt;
will never be manipulated by the update process. Plus, since &lt;code&gt;/usr&lt;/code&gt; is
read-only, it really forces you to run everything in containers.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;My hardware has TPM capabilities so I get &lt;a href=&#34;https://coreos.com/blog/coreos-trusted-computing.html&#34;&gt;Trusted Computing through
Container Linux&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&#34;x11-wayland&#34;&gt;X11 &amp;amp; Wayland&lt;/h2&gt;

&lt;p&gt;Currently this setup is using X11, but that is not the long-term goal.
I plan to move over to Wayland once
&lt;a href=&#34;https://github.com/SirCmpwn/sway&#34;&gt;sway, the port of i3,&lt;/a&gt; is feature-compatible
with i3. It&amp;rsquo;s really close to done, so I can already start trying it out.&lt;/p&gt;

&lt;p&gt;This would eliminate all the problems with X being the worst, something
something keylogging blah blah blah. I&amp;rsquo;m not going to go into more detail now
because this is not meant to be a rant.&lt;/p&gt;

&lt;h2 id=&#34;everything-in-containers&#34;&gt;Everything in Containers&lt;/h2&gt;

&lt;p&gt;I already mentioned my two other blog posts on running desktop apps with
Docker and Runc, but on this laptop I wanted something better.&lt;/p&gt;

&lt;p&gt;You see, the problem with both Docker and Runc as they are today is that they
must be run as root. And I&amp;rsquo;m not talking about the process &lt;em&gt;in&lt;/em&gt; the container;
I&amp;rsquo;m talking about the container spawner itself.&lt;/p&gt;

&lt;p&gt;I outlined the future of
&lt;a href=&#34;https://blog.jessfraz.com/post/getting-towards-real-sandbox-containers/&#34;&gt;Sandbox Containers&lt;/a&gt;
and there are patches to Runc to enable
&lt;a href=&#34;https://github.com/opencontainers/runc/pull/774&#34;&gt;rootless containers&lt;/a&gt;. If you want to know more
you should also watch &lt;a href=&#34;https://www.youtube.com/watch?v=r6EcUyamu94&amp;amp;feature=youtu.be&#34;&gt;Aleksa Sarai&amp;rsquo;s talk&lt;/a&gt;.
On this laptop I am &lt;em&gt;only&lt;/em&gt; using rootless containers.&lt;/p&gt;
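&lt;p&gt;For the curious, the rootless workflow looks roughly like this. Treat it as a sketch
from memory rather than my exact setup; the &lt;code&gt;--rootless&lt;/code&gt; flag assumes a runc build with
the patches linked above:&lt;/p&gt;

```shell
# build a bundle directory as a regular user; no sudo anywhere
mkdir -p ~/bundles/hello/rootfs
cd ~/bundles/hello

# fill the rootfs from an existing image, for example:
#   docker export $(docker create busybox) | tar -C rootfs -xf -

runc spec --rootless   # writes a config.json with uid/gid mappings for your user
runc run hello         # the container runs as you, not as root
```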

&lt;p&gt;So overall everything runs in containers, I can automatically update my
operating system, and the containers are NOT running as root on my host. This
is the dream and reality.&lt;/p&gt;

&lt;p&gt;If you want to know all the stuff about what laptop I use you should check out
my &lt;a href=&#34;https://usesthis.com/interviews/jessie.frazelle/&#34;&gt;uses this interview&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I gave a talk on this at
&lt;a href=&#34;https://www.youtube.com/watch?v=gES4-X6y278&#34;&gt;CoreOS Fest 2017&lt;/a&gt;,
check out &lt;a href=&#34;https://www.youtube.com/watch?v=gES4-X6y278&#34;&gt;the video&lt;/a&gt; and
&lt;a href=&#34;https://docs.google.com/presentation/d/17Hml1iFqdXElxOcrh9caQSC5px5mDgaS015Vhaz42ZY/edit?usp=sharing&#34;&gt;slides&lt;/a&gt;.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Scripting Your Way Outta Hell</title>
                <link>https://blog.jessfraz.com/post/scripting-your-way-outta-hell/</link>
                <pubDate>Fri, 30 Sep 2016 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/scripting-your-way-outta-hell/</guid>
                    <description>&lt;p&gt;It all started innocently enough. I &lt;em&gt;had&lt;/em&gt; &amp;ldquo;jfrazelle&amp;rdquo; as my GitHub handle for
years, but my Twitter, IRC, and other handles are all &amp;ldquo;jessfraz&amp;rdquo;. No one on
GitHub was actually using &amp;ldquo;jessfraz&amp;rdquo; so I sat on it waiting to make my move.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;m on vacation this week, so of course I was looking to break all the
things. One thing you must know about me is that at no point was I thinking
&amp;ldquo;I hate this.&amp;rdquo; I actually love stuff like this; I live for pain. Why else
would I run Linux on the desktop? But back to the story.&lt;/p&gt;

&lt;p&gt;I polled the twitterverse&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Over/Under how many links you think I will break if I change my github username? (Yes, I know there are redirects, but still.)&lt;/p&gt;&amp;mdash; Jess Frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/781697124748722177&#34;&gt;September 30, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;And then I made my move&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;It is done. Let&amp;#39;s watch the world burn together. &lt;a href=&#34;https://t.co/YpLqpP1X38&#34;&gt;https://t.co/YpLqpP1X38&lt;/a&gt; &lt;a href=&#34;https://t.co/4MX1tTthHO&#34;&gt;pic.twitter.com/4MX1tTthHO&lt;/a&gt;&lt;/p&gt;&amp;mdash; Jess Frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/781705751626670081&#34;&gt;September 30, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;Everything was fine for a few minutes. Another thing you must know about me:
I have a private Jenkins instance for continuous builds and testing. Yes, I am
this much of a nerd, but it is essential for building all the Dockerfiles for
my publicly readable private docker registry at &lt;code&gt;r.j3ss.co&lt;/code&gt;. I will save all
that for another blog post, but the jobs started triggering. Immediately
I got a bunch of emails about failed builds because Jenkins could not clone the
repos.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;oh noe &lt;a href=&#34;https://t.co/ZRQnwWNR5L&#34;&gt;pic.twitter.com/ZRQnwWNR5L&lt;/a&gt;&lt;/p&gt;&amp;mdash; Jess Frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/781745173168619520&#34;&gt;September 30, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&amp;ldquo;This is fine&amp;rdquo; I thought to myself. It&amp;rsquo;s all configured with Jenkins DSLs and
I can just do a sed on those files and it will work again.&lt;/p&gt;

&lt;p&gt;I do this.&lt;/p&gt;

&lt;p&gt;The &amp;ldquo;apply-dsl&amp;rdquo; job is still red, &lt;em&gt;oh duh&lt;/em&gt;, because it cannot clone the repo
where the DSLs live to even fix the problem. So I change it manually.&lt;/p&gt;

&lt;p&gt;This is fine.&lt;/p&gt;

&lt;p&gt;The builds all start again. Except now all the Go builds are failing because
importing &amp;ldquo;jfrazelle/&amp;hellip;&amp;rdquo; is not working. Vendor your crap, kids!!!&lt;/p&gt;

&lt;p&gt;So I fix all these repos with the best vim command ever, &lt;code&gt;argdo&lt;/code&gt;. &lt;code&gt;argdo&lt;/code&gt; runs
the command you give it on every file in the argument list, so just open vim with
all the go files as arguments and run this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;argdo %s/jfrazelle/jessfraz/g | update
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;| update&lt;/code&gt; makes sure it saves the buffer when it&amp;rsquo;s done editing.&lt;/p&gt;

&lt;p&gt;After ~50 repos of this I am tired but it&amp;rsquo;s fine. It&amp;rsquo;s all fine. Things are working again.&lt;/p&gt;
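&lt;p&gt;In hindsight, the whole ~50-repo slog could have been scripted too. Something like
this (an after-the-fact sketch; adjust the glob and paths to your own checkouts) does the
same rename across every repo in a directory:&lt;/p&gt;

```shell
# for each repo checkout, rewrite the old username in every .go file
for d in */; do
  (cd "$d" || exit
   grep -rl 'jfrazelle' --include='*.go' . | xargs -r sed -i 's/jfrazelle/jessfraz/g')
done
```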

&lt;p&gt;Now I&amp;rsquo;m wondering who else I have broken&amp;hellip; I search GitHub to see&amp;hellip;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;I&amp;#39;m going to need some more tires for this fire. &lt;a href=&#34;https://t.co/YGgMWmaETt&#34;&gt;pic.twitter.com/YGgMWmaETt&lt;/a&gt;&lt;/p&gt;&amp;mdash; Jess Frazelle (@jessfraz) &lt;a href=&#34;https://twitter.com/jessfraz/status/781941461164052480&#34;&gt;September 30, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;I am for sure going to hell for this. What have I done?&lt;/p&gt;

&lt;p&gt;I &lt;a href=&#34;https://github.com/search?utf8=%E2%9C%93&amp;amp;q=%22jfrazelle+-%3E+jessfraz%22+author%3Ajessfraz&amp;amp;type=Issues&amp;amp;ref=searchresults&#34;&gt;made/am making some pull requests&lt;/a&gt;
to various repos. A few of those in the query above
are actually forks of my repos that don&amp;rsquo;t show up in GitHub as forks because
of the way the person forked them, so they can be ignored.&lt;/p&gt;

&lt;p&gt;Overall, I &lt;em&gt;think&lt;/em&gt; I really f*cked this entire situation by having an account for
&amp;ldquo;jfrazelle&amp;rdquo; and an account for &amp;ldquo;jessfraz&amp;rdquo; and swapping them. I think this is
why the &lt;code&gt;git clone/fetch/etc&lt;/code&gt; redirects that should happen when you change your
username are broken. So let me just make this clear: none of this is GitHub&amp;rsquo;s
fault. I pretty much did this super wrong. Also, I have a deep fear of someone
taking my old username and making fake repos to try and trick imports in Go, so
I figured I will squat on it forever to avoid this. Maybe someone from GitHub
can alleviate my probably irrational fear.&lt;/p&gt;

&lt;p&gt;Amazingly all the Travis CI builds transferred seamlessly. People have been all
up in my mentions on Twitter saying they did this and all their autobuilds for
Docker Hub broke. This honestly doesn&amp;rsquo;t affect me because I host my own
registry that continuously builds AND I allow the general public to pull images
from it.&lt;/p&gt;

&lt;p&gt;In conclusion, I &lt;em&gt;actually&lt;/em&gt; think everything &lt;em&gt;is&lt;/em&gt; fine now. :)&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Blurred Lines</title>
                <link>https://blog.jessfraz.com/post/blurred-lines/</link>
                <pubDate>Sat, 17 Sep 2016 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/blurred-lines/</guid>
                    <description>

&lt;p&gt;Last week, I gave a talk at &lt;a href=&#34;http://githubuniverse.com/2016/program/sessions/#blurry-lines&#34;&gt;GitHub Universe&lt;/a&gt;
and afterwards several people suggested I write a blog post on it. Here it
is. This post will cover the intricacies of &amp;ldquo;choosing your battle&amp;rdquo; and how personal
passion for a project might conflict with corporate motives.&lt;/p&gt;

&lt;p&gt;I have experienced open source from the side of the contributor,
the side of the maintainer,
and the side of the corporate-backed maintainer and contributor. The latter is
what really comes into play here, but a lot of the passion I talk about is
obviously present in the former and is of course important for empathy with those
on the other side.&lt;/p&gt;

&lt;h2 id=&#34;passion&#34;&gt;Passion&lt;/h2&gt;

&lt;p&gt;Passion is a driving force behind involvement in open source software. People
who believe in a project and use it are the ones who contribute and give back
most heavily. Open source is this rare opportunity to work with people who want
to achieve the same things as you.&lt;/p&gt;

&lt;p&gt;If you have contributed to an open source project before, you know that feeling
when your first pull request to a project is merged. It is magical. In that
moment you have become a part of something so much bigger than yourself.&lt;/p&gt;

&lt;h3 id=&#34;what-happens-to-this-fiery-passion-when-it-is-fueled-by-a-paycheck-from-a-company&#34;&gt;What happens to this fiery passion when it is fueled by a paycheck from a company?&lt;/h3&gt;

&lt;p&gt;This is where things get complicated. When should you fight? When should you
compromise?&lt;/p&gt;

&lt;p&gt;The goal of this post is to show that you can stand up for what you believe in
and keep your job. Getting paid to work on open source is a rare and wonderful
opportunity, but you should not have to give up your passion in the process.
Your passion should be &lt;em&gt;why&lt;/em&gt; companies want to pay you.&lt;/p&gt;

&lt;h2 id=&#34;lessons-learned&#34;&gt;Lessons Learned&lt;/h2&gt;

&lt;p&gt;In my talk (I will link to the video when it comes out), I told a brief
history of how we evolved the Docker core team during my time there. We even
had three different names: core, meta, engine (yet kept the same team members
the entire time). I&amp;rsquo;m not going to go over all those stories, so I suggest you
check out the video, but these are the lessons we learned.&lt;/p&gt;

&lt;h3 id=&#34;hire-from-the-community&#34;&gt;Hire from the community.&lt;/h3&gt;

&lt;p&gt;Before joining Docker I had used Docker, given talks on Docker, and
contributed to the project. The startup I worked at previously built its
infrastructure around Docker. All members of the Docker core team were a part
of the community before joining. This is important.&lt;/p&gt;

&lt;p&gt;When you are a member of a community, you are surrounded by your peers, and
that includes all members outside the company. Your passion
for the project will and &lt;em&gt;should&lt;/em&gt; always come first. You may get paid by the
company, but you will protect the project at all costs, because at the end of
the day, the project and the community were what you were a part of first.&lt;/p&gt;

&lt;p&gt;Of course, you cannot hire &lt;em&gt;everyone&lt;/em&gt; from the community. If you did, the trust in the project and
any further growth of the community would be effectively ruined.&lt;/p&gt;

&lt;h3 id=&#34;maintainership-for-a-project-must-be-earned&#34;&gt;Maintainership for a project must be earned.&lt;/h3&gt;

&lt;p&gt;When an employee joins your company they should not automatically get push
access to the project. I almost feel like I should repeat this because it is SO
important to building trust with the community. &lt;strong&gt;Everyone must play by the
same rules.&lt;/strong&gt; EVERYONE.&lt;/p&gt;

&lt;p&gt;The Docker project collects stats on just about everything using
&lt;a href=&#34;https://github.com/icecrime/vossibility-stack&#34;&gt;github.com/icecrime/vossibility-stack&lt;/a&gt;.
Whether you are contributing code, contributing documentation, commenting on
issues, or doing code reviews you are eligible to become a maintainer after
regular activity.&lt;/p&gt;

&lt;p&gt;This is key because it eliminates the &amp;ldquo;it&amp;rsquo;s all about who you know&amp;rdquo; scenario.
Without hard data on contributions there is no way to be sure you are not
overlooking some amazing gem in your project who should be rewarded for their
hard work.&lt;/p&gt;

&lt;h3 id=&#34;allow-saying-no&#34;&gt;Allow saying NO.&lt;/h3&gt;

&lt;p&gt;A very common conflict that will occur is one between other teams in the
company and your &amp;ldquo;core&amp;rdquo; team. The company will have a feature they want to
push, which perhaps lands as a patch bomb right before a release, has no tests,
and has pockets of code relying on a service not even in production yet. They will
just expect this to be merged.&lt;/p&gt;

&lt;p&gt;Now of course, your open source team will fight it. They will stand up for the
project. Maybe some members will eventually cave but others will still keep on
fighting. It will cause stress and fear of being terminated. It will also cause
turmoil between these teams internally which creates an awkward work
environment.&lt;/p&gt;

&lt;p&gt;There are a few things you can do to avoid this, the first one being: allow
saying NO. Even from external maintainers, since as I said EVERYONE plays by
the same rules, allow saying NO.&lt;/p&gt;

&lt;h3 id=&#34;create-explicit-guidelines-for-acceptable-patches-release-cutoffs&#34;&gt;Create explicit guidelines for acceptable patches &amp;amp; release cutoffs.&lt;/h3&gt;

&lt;p&gt;By creating explicit guidelines you now make sure that everyone plays by these
same rules. People outside the company cannot send a patch bomb
without tests right before a release and neither can those internally. Of
course you can make whatever rules you want, as long as everyone plays by them.&lt;/p&gt;

&lt;p&gt;When everyone plays by the same rules, your community will trust you
the most.&lt;/p&gt;

&lt;h3 id=&#34;lgtm-lasts-forever&#34;&gt;LGTM lasts forever.&lt;/h3&gt;

&lt;p&gt;When you LGTM a pull request it is there forever, publicly. You can of course
change your mind and revert. But the second that feature gets into
a release, it will take a &lt;em&gt;very&lt;/em&gt; long time to deprecate it if you feel like
you made the wrong decision. Someone will wind up relying on it, and removing it
will become a bikeshed that only leads to community disagreement.&lt;/p&gt;

&lt;h3 id=&#34;lgtm-is-tied-to-the-individual-who-said-it&#34;&gt;LGTM is tied to the individual who said it.&lt;/h3&gt;

&lt;p&gt;You cannot get a LGTM from a corporation; you get it from an individual. I have
never seen a company with a GitHub account going around doing code reviews.
People do code reviews.&lt;/p&gt;

&lt;p&gt;If someone comes back to some feature down the road, they can
see who approved it. It reflects on that person, not on their company.
They will make sure they really mean it before they say it.&lt;/p&gt;

&lt;h3 id=&#34;collaboration-and-compromise-is-key&#34;&gt;Collaboration and compromise is key.&lt;/h3&gt;

&lt;p&gt;Do not isolate your &amp;ldquo;core&amp;rdquo; team from the rest of the company. Isolation will only
create an uninviting atmosphere to work in.&lt;/p&gt;

&lt;p&gt;You are all on the same team; you can find a way to work together and compromise
to benefit both the company AND the community.&lt;/p&gt;

&lt;h2 id=&#34;go-and-succeed&#34;&gt;Go and succeed!&lt;/h2&gt;

&lt;p&gt;If you are thinking about open sourcing a project at your company, try to keep
these things in mind! It&amp;rsquo;s never easy and there will always be some friction,
but the benefits of creating a great open source project and community will pay off!
Most of all &lt;strong&gt;LISTEN&lt;/strong&gt; to the people at your company with the passion for the
project.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Day I Leave the Tech Industry</title>
                <link>https://blog.jessfraz.com/post/the-day-i-leave-the-tech-industry/</link>
                <pubDate>Fri, 19 Aug 2016 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-day-i-leave-the-tech-industry/</guid>
                    <description>&lt;p&gt;I was inspired last night by Cate Huston&amp;rsquo;s post,
&lt;a href=&#34;http://www.catehuston.com/blog/2014/07/28/the-day-i-leave-the-tech-industry/&#34;&gt;The Day I Leave the Tech Industry&lt;/a&gt;.
I decided to write my own, except I&amp;rsquo;m not as eloquent a writer as Cate, so before
I go any further please, please, please read her post and not mine.&lt;/p&gt;

&lt;p&gt;Mine is going to be a bit different. Lately I&amp;rsquo;ve been thinking more and more
about this. It seems imminent. I&amp;rsquo;m only 27 and let me repeat: it seems imminent.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;m going to tell you all the fantasy that plays in my brain for when this happens.&lt;/p&gt;

&lt;p&gt;The day I leave the tech industry will feel like a giant weight has finally been
lifted. It will be freeing. There are a few scenarios I&amp;rsquo;ve played out for what
I will do after.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;teach math in a third world country&lt;/li&gt;
&lt;li&gt;write a book&lt;/li&gt;
&lt;li&gt;play professional poker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I could do all three. One thing is for sure though, the day I leave the tech industry
will be the day I contribute my last piece of code to open source software.&lt;/p&gt;

&lt;p&gt;Today, I am not quite ready to give up this thing I have such a &amp;ldquo;hate/love&amp;rdquo;
relationship with. Today, I &lt;em&gt;want&lt;/em&gt; to get more women contributing so that
maybe in the distant future we will feel welcome. Maybe we won&amp;rsquo;t have to fight
so hard just to be heard; to have &lt;em&gt;our&lt;/em&gt; opinions matter.&lt;/p&gt;

&lt;p&gt;Today is &lt;strong&gt;not&lt;/strong&gt; my last day in the tech industry. But it is comforting to me
to plan out this very real future. I am not just &amp;ldquo;the container girl&amp;rdquo;. I am a
human being with feelings, a limit, and a future outside of tech.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Analyzing GitHub Pull Request Data with BigQuery</title>
                <link>https://blog.jessfraz.com/post/analyzing-github-pull-request-data-with-big-query/</link>
                <pubDate>Sun, 07 Aug 2016 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/analyzing-github-pull-request-data-with-big-query/</guid>
                    <description>

&lt;p&gt;I really enjoyed &lt;a href=&#34;https://medium.com/google-cloud/analyzing-github-issues-and-comments-with-bigquery-c41410d3308#.x5qyw8yd9&#34;&gt;Felipe Hoffa’s post on Analyzing GitHub issues and comments with BigQuery
&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Which got me wondering about my favorite subject ever, &lt;a href=&#34;https://blog.jessfraz.com/post/the-art-of-closing/&#34;&gt;The Art of Closing&lt;/a&gt;. I wonder what the stats are for the top 15 projects on GitHub in terms of pull requests opened vs. pull requests closed. This post will use the &lt;a href=&#34;http://www.githubarchive.org/&#34;&gt;GitHub Archive dataset&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;top-15-repositories-with-the-most-pull-requests&#34;&gt;Top 15 repositories with the most pull requests&lt;/h3&gt;

&lt;p&gt;First let’s find the &lt;strong&gt;top 15 repos with the most pull requests from 2015&lt;/strong&gt;. Let’s make sure to check that the payload action is “opened”.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;opened&amp;quot;&#39;)
GROUP BY
  repo.name
ORDER BY
  c DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;openmicroscopy/snoopys-sandbox&lt;/td&gt;
&lt;td&gt;11656&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;brianchandotcom/liferay-portal&lt;/td&gt;
&lt;td&gt;10803&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Homebrew/homebrew&lt;/td&gt;
&lt;td&gt;9519&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6833&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;apache/spark&lt;/td&gt;
&lt;td&gt;6667&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;saltstack/salt&lt;/td&gt;
&lt;td&gt;6636&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;mozilla-b2g/gaia&lt;/td&gt;
&lt;td&gt;6609&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6155&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;GoogleCloudPlatform/kubernetes&lt;/td&gt;
&lt;td&gt;5937&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;jsdelivr/jsdelivr&lt;/td&gt;
&lt;td&gt;5747&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rust-lang/rust&lt;/td&gt;
&lt;td&gt;5559&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;cms-sw/cmssw&lt;/td&gt;
&lt;td&gt;5507&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;code-dot-org/code-dot-org&lt;/td&gt;
&lt;td&gt;5267&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5083&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;NixOS/nixpkgs&lt;/td&gt;
&lt;td&gt;4873&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Okay, that’s a lot of pull requests. Let’s find the projects with the &lt;strong&gt;most unique pull request authors&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c,
  COUNT(DISTINCT actor.id) authors
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;opened&amp;quot;&#39;)
GROUP BY
  repo.name
ORDER BY
  authors DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;th&gt;authors&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6155&lt;/td&gt;
&lt;td&gt;5396&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;octocat/Spoon-Knife&lt;/td&gt;
&lt;td&gt;3966&lt;/td&gt;
&lt;td&gt;3741&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;deadlyvipers/dojo_rules&lt;/td&gt;
&lt;td&gt;4847&lt;/td&gt;
&lt;td&gt;3076&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Homebrew/homebrew&lt;/td&gt;
&lt;td&gt;9519&lt;/td&gt;
&lt;td&gt;2186&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;udacity/create-your-own-adventure&lt;/td&gt;
&lt;td&gt;2709&lt;/td&gt;
&lt;td&gt;2167&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6833&lt;/td&gt;
&lt;td&gt;1517&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;borisyankov/DefinitelyTyped&lt;/td&gt;
&lt;td&gt;2694&lt;/td&gt;
&lt;td&gt;1127&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rails/rails&lt;/td&gt;
&lt;td&gt;3100&lt;/td&gt;
&lt;td&gt;1012&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;LarryMad/recipes&lt;/td&gt;
&lt;td&gt;1086&lt;/td&gt;
&lt;td&gt;989&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;laravel/framework&lt;/td&gt;
&lt;td&gt;2736&lt;/td&gt;
&lt;td&gt;891&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5083&lt;/td&gt;
&lt;td&gt;882&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rdpeng/ProgrammingAssignment2&lt;/td&gt;
&lt;td&gt;922&lt;/td&gt;
&lt;td&gt;866&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;apache/spark&lt;/td&gt;
&lt;td&gt;6667&lt;/td&gt;
&lt;td&gt;851&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;JetBrains/swot&lt;/td&gt;
&lt;td&gt;951&lt;/td&gt;
&lt;td&gt;836&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rust-lang/rust&lt;/td&gt;
&lt;td&gt;5559&lt;/td&gt;
&lt;td&gt;835&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Now let’s see what the &lt;strong&gt;merge vs. close&lt;/strong&gt; numbers look like for those projects.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c,
  COUNT(DISTINCT actor.id) authors,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END) AS merged,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN 1 ELSE 0 END) AS closed,
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;closed&amp;quot;&#39;)
GROUP BY
  repo.name
ORDER BY
  authors DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;
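&lt;p&gt;A note on that &lt;code&gt;&#39;&amp;quot;closed&amp;quot;&#39;&lt;/code&gt; comparison: legacy BigQuery’s &lt;code&gt;JSON_EXTRACT&lt;/code&gt; returns the raw JSON text of the value, so a JSON string comes back with its quotes still attached. The sketch below (not BigQuery itself, just a local illustration that assumes &lt;code&gt;python3&lt;/code&gt; is installed) shows the same behavior:&lt;/p&gt;

```shell
# JSON_EXTRACT returns raw JSON text, so string values keep their quotes.
# Mimic extracting $.action from a PullRequestEvent payload:
python3 -c 'import json; print(json.dumps(json.loads("{\"action\": \"closed\"}")["action"]))'
# prints "closed" (quotes included)
```

&lt;p&gt;That’s also why the &lt;code&gt;$.pull_request.merged&lt;/code&gt; checks compare against unquoted &lt;code&gt;&#39;true&#39;&lt;/code&gt; and &lt;code&gt;&#39;false&#39;&lt;/code&gt;: JSON booleans carry no quotes in their raw form.&lt;/p&gt;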

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;th&gt;authors&lt;/th&gt;
&lt;th&gt;merged&lt;/th&gt;
&lt;th&gt;closed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;deadlyvipers/dojo_rules&lt;/td&gt;
&lt;td&gt;1636&lt;/td&gt;
&lt;td&gt;1022&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;1636&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;octocat/Spoon-Knife&lt;/td&gt;
&lt;td&gt;1103&lt;/td&gt;
&lt;td&gt;944&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;1103&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6595&lt;/td&gt;
&lt;td&gt;705&lt;/td&gt;
&lt;td&gt;4905&lt;/td&gt;
&lt;td&gt;1690&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;LarryMad/recipes&lt;/td&gt;
&lt;td&gt;588&lt;/td&gt;
&lt;td&gt;532&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;588&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;apache/spark&lt;/td&gt;
&lt;td&gt;6653&lt;/td&gt;
&lt;td&gt;468&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;6653&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Homebrew/homebrew&lt;/td&gt;
&lt;td&gt;9548&lt;/td&gt;
&lt;td&gt;451&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;9543&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;udacity/create-your-own-adventure&lt;/td&gt;
&lt;td&gt;2765&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;td&gt;819&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rdpeng/ProgrammingAssignment2&lt;/td&gt;
&lt;td&gt;341&lt;/td&gt;
&lt;td&gt;284&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;341&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5250&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;td&gt;3979&lt;/td&gt;
&lt;td&gt;1271&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;NixOS/nixpkgs&lt;/td&gt;
&lt;td&gt;4707&lt;/td&gt;
&lt;td&gt;249&lt;/td&gt;
&lt;td&gt;3438&lt;/td&gt;
&lt;td&gt;1269&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;odoo/odoo&lt;/td&gt;
&lt;td&gt;3412&lt;/td&gt;
&lt;td&gt;233&lt;/td&gt;
&lt;td&gt;712&lt;/td&gt;
&lt;td&gt;2700&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;borisyankov/DefinitelyTyped&lt;/td&gt;
&lt;td&gt;2529&lt;/td&gt;
&lt;td&gt;221&lt;/td&gt;
&lt;td&gt;2173&lt;/td&gt;
&lt;td&gt;356&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;mozilla-b2g/gaia&lt;/td&gt;
&lt;td&gt;7197&lt;/td&gt;
&lt;td&gt;215&lt;/td&gt;
&lt;td&gt;5251&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rails/rails&lt;/td&gt;
&lt;td&gt;3254&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;2090&lt;/td&gt;
&lt;td&gt;1164&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6928&lt;/td&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;3044&lt;/td&gt;
&lt;td&gt;3884&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Oh that is super weird. After looking into a few of the repos with 0 merged, it seems they aren’t really using GitHub for merges.&lt;/p&gt;

&lt;h3 id=&#34;calculating-the-merge-ratio&#34;&gt;Calculating the merge ratio&lt;/h3&gt;

&lt;p&gt;So let’s exclude those and try again; this time we can even calculate the merge ratio.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c,
  COUNT(DISTINCT actor.id) authors,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END) AS merged,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN 1 ELSE 0 END) AS closed,
  ROUND(100*SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END)/COUNT(*),2) AS merge_ratio
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;closed&amp;quot;&#39;)
GROUP BY
  repo.name
HAVING
  merged &amp;gt; 10
ORDER BY
  authors DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;th&gt;authors&lt;/th&gt;
&lt;th&gt;merged&lt;/th&gt;
&lt;th&gt;closed&lt;/th&gt;
&lt;th&gt;merge_ratio&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6595&lt;/td&gt;
&lt;td&gt;705&lt;/td&gt;
&lt;td&gt;4905&lt;/td&gt;
&lt;td&gt;1690&lt;/td&gt;
&lt;td&gt;74.37&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;udacity/create-your-own-adventure&lt;/td&gt;
&lt;td&gt;2765&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;td&gt;819&lt;/td&gt;
&lt;td&gt;70.38&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5250&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;td&gt;3979&lt;/td&gt;
&lt;td&gt;1271&lt;/td&gt;
&lt;td&gt;75.79&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;NixOS/nixpkgs&lt;/td&gt;
&lt;td&gt;4707&lt;/td&gt;
&lt;td&gt;249&lt;/td&gt;
&lt;td&gt;3438&lt;/td&gt;
&lt;td&gt;1269&lt;/td&gt;
&lt;td&gt;73.04&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;odoo/odoo&lt;/td&gt;
&lt;td&gt;3412&lt;/td&gt;
&lt;td&gt;233&lt;/td&gt;
&lt;td&gt;712&lt;/td&gt;
&lt;td&gt;2700&lt;/td&gt;
&lt;td&gt;20.87&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;borisyankov/DefinitelyTyped&lt;/td&gt;
&lt;td&gt;2529&lt;/td&gt;
&lt;td&gt;221&lt;/td&gt;
&lt;td&gt;2173&lt;/td&gt;
&lt;td&gt;356&lt;/td&gt;
&lt;td&gt;85.92&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;mozilla-b2g/gaia&lt;/td&gt;
&lt;td&gt;7197&lt;/td&gt;
&lt;td&gt;215&lt;/td&gt;
&lt;td&gt;5251&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;td&gt;72.96&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rails/rails&lt;/td&gt;
&lt;td&gt;3254&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;2090&lt;/td&gt;
&lt;td&gt;1164&lt;/td&gt;
&lt;td&gt;64.23&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6928&lt;/td&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;3044&lt;/td&gt;
&lt;td&gt;3884&lt;/td&gt;
&lt;td&gt;43.94&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;cms-sw/cmssw&lt;/td&gt;
&lt;td&gt;5475&lt;/td&gt;
&lt;td&gt;205&lt;/td&gt;
&lt;td&gt;4312&lt;/td&gt;
&lt;td&gt;1163&lt;/td&gt;
&lt;td&gt;78.76&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;symfony/symfony&lt;/td&gt;
&lt;td&gt;2587&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;1387&lt;/td&gt;
&lt;td&gt;1200&lt;/td&gt;
&lt;td&gt;53.61&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;facebook/react-native&lt;/td&gt;
&lt;td&gt;1563&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;494&lt;/td&gt;
&lt;td&gt;1069&lt;/td&gt;
&lt;td&gt;31.61&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;robbyrussell/oh-my-zsh&lt;/td&gt;
&lt;td&gt;731&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;307&lt;/td&gt;
&lt;td&gt;424&lt;/td&gt;
&lt;td&gt;42.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;githubteacher/github-for-developers-sept-2015&lt;/td&gt;
&lt;td&gt;404&lt;/td&gt;
&lt;td&gt;181&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;103&lt;/td&gt;
&lt;td&gt;74.5&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;nightscout/cgm-remote-monitor&lt;/td&gt;
&lt;td&gt;1096&lt;/td&gt;
&lt;td&gt;178&lt;/td&gt;
&lt;td&gt;419&lt;/td&gt;
&lt;td&gt;677&lt;/td&gt;
&lt;td&gt;38.23&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
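
&lt;p&gt;As a quick sanity check on the &lt;code&gt;merge_ratio&lt;/code&gt; column (my own arithmetic, not part of the query output), take the &lt;code&gt;jlord/patchwork&lt;/code&gt; row: &lt;code&gt;ROUND(100*merged/c, 2)&lt;/code&gt; should reproduce the 74.37 in the table.&lt;/p&gt;

```shell
# merge_ratio = ROUND(100 * merged / c, 2) for jlord/patchwork:
# merged = 4905, c = 6595
python3 -c 'print(round(100 * 4905 / 6595, 2))'
# prints 74.37, matching the table
```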

&lt;h3 id=&#34;using-the-diff-data&#34;&gt;Using the diff data&lt;/h3&gt;

&lt;p&gt;Sweet! Now let’s see what the average diff size is for these projects&amp;rsquo; pull requests.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c,
  COUNT(DISTINCT actor.id) authors,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END) AS merged,
  SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN 1 ELSE 0 END) AS closed,
  ROUND(100*SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END)/COUNT(*),2) AS merge_ratio,
  AVG(JSON_EXTRACT(payload, &#39;$.pull_request.additions&#39;)) AS avg_additions,
  AVG(JSON_EXTRACT(payload, &#39;$.pull_request.deletions&#39;)) AS avg_deletions,
  AVG(JSON_EXTRACT(payload, &#39;$.pull_request.changed_files&#39;)) AS avg_changed_files,
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;closed&amp;quot;&#39;)
GROUP BY
  repo.name
HAVING
  merged &amp;gt; 10
ORDER BY
  authors DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;th&gt;authors&lt;/th&gt;
&lt;th&gt;merged&lt;/th&gt;
&lt;th&gt;closed&lt;/th&gt;
&lt;th&gt;merge_ratio&lt;/th&gt;
&lt;th&gt;avg_additions&lt;/th&gt;
&lt;th&gt;avg_deletions&lt;/th&gt;
&lt;th&gt;avg_changed_files&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6595&lt;/td&gt;
&lt;td&gt;705&lt;/td&gt;
&lt;td&gt;4905&lt;/td&gt;
&lt;td&gt;1690&lt;/td&gt;
&lt;td&gt;74.37&lt;/td&gt;
&lt;td&gt;47.45595147839272&lt;/td&gt;
&lt;td&gt;172.14268385140258&lt;/td&gt;
&lt;td&gt;175.20257771038666&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;udacity/create-your-own-adventure&lt;/td&gt;
&lt;td&gt;2765&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;td&gt;819&lt;/td&gt;
&lt;td&gt;70.38&lt;/td&gt;
&lt;td&gt;30.39746835443038&lt;/td&gt;
&lt;td&gt;13.116455696202532&lt;/td&gt;
&lt;td&gt;6.742133815551537&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5250&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;td&gt;3979&lt;/td&gt;
&lt;td&gt;1271&lt;/td&gt;
&lt;td&gt;75.79&lt;/td&gt;
&lt;td&gt;214.36685714285716&lt;/td&gt;
&lt;td&gt;115.88342857142857&lt;/td&gt;
&lt;td&gt;8.139619047619048&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;NixOS/nixpkgs&lt;/td&gt;
&lt;td&gt;4707&lt;/td&gt;
&lt;td&gt;249&lt;/td&gt;
&lt;td&gt;3438&lt;/td&gt;
&lt;td&gt;1269&lt;/td&gt;
&lt;td&gt;73.04&lt;/td&gt;
&lt;td&gt;339.9751434034417&lt;/td&gt;
&lt;td&gt;40.72678988740174&lt;/td&gt;
&lt;td&gt;5.380072232844699&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;odoo/odoo&lt;/td&gt;
&lt;td&gt;3412&lt;/td&gt;
&lt;td&gt;233&lt;/td&gt;
&lt;td&gt;712&lt;/td&gt;
&lt;td&gt;2700&lt;/td&gt;
&lt;td&gt;20.87&lt;/td&gt;
&lt;td&gt;1626.0741500586166&lt;/td&gt;
&lt;td&gt;1907.4182297772568&lt;/td&gt;
&lt;td&gt;128.01992966002345&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;borisyankov/DefinitelyTyped&lt;/td&gt;
&lt;td&gt;2529&lt;/td&gt;
&lt;td&gt;221&lt;/td&gt;
&lt;td&gt;2173&lt;/td&gt;
&lt;td&gt;356&lt;/td&gt;
&lt;td&gt;85.92&lt;/td&gt;
&lt;td&gt;887.0581257413997&lt;/td&gt;
&lt;td&gt;864.4827995255041&lt;/td&gt;
&lt;td&gt;2.8730723606168445&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;mozilla-b2g/gaia&lt;/td&gt;
&lt;td&gt;7197&lt;/td&gt;
&lt;td&gt;215&lt;/td&gt;
&lt;td&gt;5251&lt;/td&gt;
&lt;td&gt;1946&lt;/td&gt;
&lt;td&gt;72.96&lt;/td&gt;
&lt;td&gt;415.85396693066554&lt;/td&gt;
&lt;td&gt;138.59233013755733&lt;/td&gt;
&lt;td&gt;10.55578713352786&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rails/rails&lt;/td&gt;
&lt;td&gt;3254&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;2090&lt;/td&gt;
&lt;td&gt;1164&lt;/td&gt;
&lt;td&gt;64.23&lt;/td&gt;
&lt;td&gt;54.88414259373079&lt;/td&gt;
&lt;td&gt;29.18561770129072&lt;/td&gt;
&lt;td&gt;6.880762138905962&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6928&lt;/td&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;3044&lt;/td&gt;
&lt;td&gt;3884&lt;/td&gt;
&lt;td&gt;43.94&lt;/td&gt;
&lt;td&gt;8.448469976905312&lt;/td&gt;
&lt;td&gt;4.0329099307159355&lt;/td&gt;
&lt;td&gt;3.315675519630485&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;cms-sw/cmssw&lt;/td&gt;
&lt;td&gt;5475&lt;/td&gt;
&lt;td&gt;205&lt;/td&gt;
&lt;td&gt;4312&lt;/td&gt;
&lt;td&gt;1163&lt;/td&gt;
&lt;td&gt;78.76&lt;/td&gt;
&lt;td&gt;2160.7702283105023&lt;/td&gt;
&lt;td&gt;713.1713242009132&lt;/td&gt;
&lt;td&gt;37.51086757990868&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;facebook/react-native&lt;/td&gt;
&lt;td&gt;1563&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;494&lt;/td&gt;
&lt;td&gt;1069&lt;/td&gt;
&lt;td&gt;31.61&lt;/td&gt;
&lt;td&gt;189.86756238003838&lt;/td&gt;
&lt;td&gt;86.54638515674984&lt;/td&gt;
&lt;td&gt;10.595649392194497&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;robbyrussell/oh-my-zsh&lt;/td&gt;
&lt;td&gt;731&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;307&lt;/td&gt;
&lt;td&gt;424&lt;/td&gt;
&lt;td&gt;42.0&lt;/td&gt;
&lt;td&gt;54.0328317373461&lt;/td&gt;
&lt;td&gt;11.285909712722297&lt;/td&gt;
&lt;td&gt;1.987688098495212&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;symfony/symfony&lt;/td&gt;
&lt;td&gt;2587&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;1387&lt;/td&gt;
&lt;td&gt;1200&lt;/td&gt;
&lt;td&gt;53.61&lt;/td&gt;
&lt;td&gt;142.36722071897952&lt;/td&gt;
&lt;td&gt;168.96366447622728&lt;/td&gt;
&lt;td&gt;28.32006184770004&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;githubteacher/github-for-developers-sept-2015&lt;/td&gt;
&lt;td&gt;404&lt;/td&gt;
&lt;td&gt;181&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;103&lt;/td&gt;
&lt;td&gt;74.5&lt;/td&gt;
&lt;td&gt;18.217821782178216&lt;/td&gt;
&lt;td&gt;0.7574257425742574&lt;/td&gt;
&lt;td&gt;2.517326732673267&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;nightscout/cgm-remote-monitor&lt;/td&gt;
&lt;td&gt;1096&lt;/td&gt;
&lt;td&gt;178&lt;/td&gt;
&lt;td&gt;419&lt;/td&gt;
&lt;td&gt;677&lt;/td&gt;
&lt;td&gt;38.23&lt;/td&gt;
&lt;td&gt;519.7043795620438&lt;/td&gt;
&lt;td&gt;246.9434306569343&lt;/td&gt;
&lt;td&gt;8.777372262773723&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Well that&amp;rsquo;s not all that interesting&amp;hellip;&lt;/p&gt;

&lt;h3 id=&#34;can-we-prove-you-should-always-keep-your-pull-requests-small&#34;&gt;Can we prove you should always keep your pull requests small?&lt;/h3&gt;

&lt;p&gt;We &lt;em&gt;know&lt;/em&gt; it is always better to keep a pull request small if you want it merged. Let&amp;rsquo;s see if we can prove that with data!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  repo.name,
  COUNT(*) c,
  COUNT(DISTINCT actor.id) authors,
  ROUND(100*SUM(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN 1 ELSE 0 END)/COUNT(*),2) AS merge_ratio,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.additions&#39;) END) AS merged_avg_additions,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.deletions&#39;) END) AS merged_avg_deletions,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;true&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.changed_files&#39;) END) AS merged_avg_changed_files,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.additions&#39;) END) AS closed_avg_additions,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.deletions&#39;) END) AS closed_avg_deletions,
  AVG(CASE WHEN JSON_EXTRACT(payload, &#39;$.pull_request.merged&#39;) IN (&#39;false&#39;) THEN JSON_EXTRACT(payload, &#39;$.pull_request.changed_files&#39;) END) AS closed_avg_changed_files,
FROM
  [githubarchive:year.2015]
WHERE
  type IN ( &#39;PullRequestEvent&#39;)
  AND JSON_EXTRACT(payload, &#39;$.action&#39;) IN (&#39;&amp;quot;closed&amp;quot;&#39;)
GROUP BY
  repo.name
HAVING
  merge_ratio &amp;gt; 5
ORDER BY
  authors DESC
LIMIT
  15
&lt;/code&gt;&lt;/pre&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;repo_name&lt;/th&gt;
&lt;th&gt;c&lt;/th&gt;
&lt;th&gt;authors&lt;/th&gt;
&lt;th&gt;merge_ratio&lt;/th&gt;
&lt;th&gt;merged_avg_additions&lt;/th&gt;
&lt;th&gt;merged_avg_deletions&lt;/th&gt;
&lt;th&gt;merged_avg_changed_files&lt;/th&gt;
&lt;th&gt;closed_avg_additions&lt;/th&gt;
&lt;th&gt;closed_avg_deletions&lt;/th&gt;
&lt;th&gt;closed_avg_changed_files&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;jlord/patchwork&lt;/td&gt;
&lt;td&gt;6595&lt;/td&gt;
&lt;td&gt;705&lt;/td&gt;
&lt;td&gt;74.37&lt;/td&gt;
&lt;td&gt;9.3565749235474&lt;/td&gt;
&lt;td&gt;0.033231396534148826&lt;/td&gt;
&lt;td&gt;1.0014271151885832&lt;/td&gt;
&lt;td&gt;158.03431952662723&lt;/td&gt;
&lt;td&gt;671.6674556213018&lt;/td&gt;
&lt;td&gt;680.798224852071&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;udacity/create-your-own-adventure&lt;/td&gt;
&lt;td&gt;2765&lt;/td&gt;
&lt;td&gt;301&lt;/td&gt;
&lt;td&gt;70.38&lt;/td&gt;
&lt;td&gt;7.863309352517986&lt;/td&gt;
&lt;td&gt;0.6747173689619733&lt;/td&gt;
&lt;td&gt;1.8144912641315518&lt;/td&gt;
&lt;td&gt;83.94017094017094&lt;/td&gt;
&lt;td&gt;42.67887667887668&lt;/td&gt;
&lt;td&gt;18.45054945054945&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;docker/docker&lt;/td&gt;
&lt;td&gt;5250&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;td&gt;75.79&lt;/td&gt;
&lt;td&gt;176.0874591605931&lt;/td&gt;
&lt;td&gt;90.80949987434029&lt;/td&gt;
&lt;td&gt;5.965317919075145&lt;/td&gt;
&lt;td&gt;334.2045633359559&lt;/td&gt;
&lt;td&gt;194.38001573564122&lt;/td&gt;
&lt;td&gt;14.946498819826909&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;NixOS/nixpkgs&lt;/td&gt;
&lt;td&gt;4707&lt;/td&gt;
&lt;td&gt;249&lt;/td&gt;
&lt;td&gt;73.04&lt;/td&gt;
&lt;td&gt;137.50581733566025&lt;/td&gt;
&lt;td&gt;31.841768470040723&lt;/td&gt;
&lt;td&gt;2.91564863292612&lt;/td&gt;
&lt;td&gt;888.5090622537431&lt;/td&gt;
&lt;td&gt;64.79826635145784&lt;/td&gt;
&lt;td&gt;12.056737588652481&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;odoo/odoo&lt;/td&gt;
&lt;td&gt;3412&lt;/td&gt;
&lt;td&gt;233&lt;/td&gt;
&lt;td&gt;20.87&lt;/td&gt;
&lt;td&gt;200.9129213483146&lt;/td&gt;
&lt;td&gt;195.0870786516854&lt;/td&gt;
&lt;td&gt;7.095505617977528&lt;/td&gt;
&lt;td&gt;2001.8944444444444&lt;/td&gt;
&lt;td&gt;2358.966296296296&lt;/td&gt;
&lt;td&gt;159.90814814814814&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;borisyankov/DefinitelyTyped&lt;/td&gt;
&lt;td&gt;2529&lt;/td&gt;
&lt;td&gt;221&lt;/td&gt;
&lt;td&gt;85.92&lt;/td&gt;
&lt;td&gt;390.8085595950299&lt;/td&gt;
&lt;td&gt;482.45467096180397&lt;/td&gt;
&lt;td&gt;2.1339162448228257&lt;/td&gt;
&lt;td&gt;3916.13202247191&lt;/td&gt;
&lt;td&gt;3196.3567415730336&lt;/td&gt;
&lt;td&gt;7.384831460674158&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;mozilla-b2g/gaia&lt;/td&gt;
&lt;td&gt;7197&lt;/td&gt;
&lt;td&gt;215&lt;/td&gt;
&lt;td&gt;72.96&lt;/td&gt;
&lt;td&gt;398.6246429251571&lt;/td&gt;
&lt;td&gt;86.15311369262997&lt;/td&gt;
&lt;td&gt;6.51628261283565&lt;/td&gt;
&lt;td&gt;462.3448098663926&lt;/td&gt;
&lt;td&gt;280.09198355601234&lt;/td&gt;
&lt;td&gt;21.45580678314491&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;rails/rails&lt;/td&gt;
&lt;td&gt;3254&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;64.23&lt;/td&gt;
&lt;td&gt;23.657416267942583&lt;/td&gt;
&lt;td&gt;11.615789473684211&lt;/td&gt;
&lt;td&gt;2.6382775119617223&lt;/td&gt;
&lt;td&gt;110.95274914089347&lt;/td&gt;
&lt;td&gt;60.732817869415804&lt;/td&gt;
&lt;td&gt;14.49828178694158&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;caskroom/homebrew-cask&lt;/td&gt;
&lt;td&gt;6928&lt;/td&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;43.94&lt;/td&gt;
&lt;td&gt;8.201708278580815&lt;/td&gt;
&lt;td&gt;5.042706964520368&lt;/td&gt;
&lt;td&gt;3.9244415243101183&lt;/td&gt;
&lt;td&gt;8.641864057672503&lt;/td&gt;
&lt;td&gt;3.241503604531411&lt;/td&gt;
&lt;td&gt;2.8385684860968072&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;cms-sw/cmssw&lt;/td&gt;
&lt;td&gt;5475&lt;/td&gt;
&lt;td&gt;205&lt;/td&gt;
&lt;td&gt;78.76&lt;/td&gt;
&lt;td&gt;994.2810760667903&lt;/td&gt;
&lt;td&gt;619.9148886827459&lt;/td&gt;
&lt;td&gt;8.133812615955472&lt;/td&gt;
&lt;td&gt;6485.7067927773005&lt;/td&gt;
&lt;td&gt;1058.9337919174548&lt;/td&gt;
&lt;td&gt;146.43078245915734&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;symfony/symfony&lt;/td&gt;
&lt;td&gt;2587&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;53.61&lt;/td&gt;
&lt;td&gt;63.3914924297044&lt;/td&gt;
&lt;td&gt;72.16582552271089&lt;/td&gt;
&lt;td&gt;8.235760634462869&lt;/td&gt;
&lt;td&gt;233.65&lt;/td&gt;
&lt;td&gt;280.84583333333336&lt;/td&gt;
&lt;td&gt;51.534166666666664&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;facebook/react-native&lt;/td&gt;
&lt;td&gt;1563&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;31.61&lt;/td&gt;
&lt;td&gt;204.29757085020242&lt;/td&gt;
&lt;td&gt;88.43522267206478&lt;/td&gt;
&lt;td&gt;8.024291497975709&lt;/td&gt;
&lt;td&gt;183.19925163704397&lt;/td&gt;
&lt;td&gt;85.67352666043031&lt;/td&gt;
&lt;td&gt;11.783910196445277&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;robbyrussell/oh-my-zsh&lt;/td&gt;
&lt;td&gt;731&lt;/td&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;42.0&lt;/td&gt;
&lt;td&gt;49.74267100977199&lt;/td&gt;
&lt;td&gt;10.824104234527688&lt;/td&gt;
&lt;td&gt;1.6612377850162867&lt;/td&gt;
&lt;td&gt;57.139150943396224&lt;/td&gt;
&lt;td&gt;11.620283018867925&lt;/td&gt;
&lt;td&gt;2.224056603773585&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;githubteacher/github-for-developers-sept-2015&lt;/td&gt;
&lt;td&gt;404&lt;/td&gt;
&lt;td&gt;181&lt;/td&gt;
&lt;td&gt;74.5&lt;/td&gt;
&lt;td&gt;4.700996677740863&lt;/td&gt;
&lt;td&gt;0.4186046511627907&lt;/td&gt;
&lt;td&gt;1.1727574750830565&lt;/td&gt;
&lt;td&gt;57.71844660194175&lt;/td&gt;
&lt;td&gt;1.7475728155339805&lt;/td&gt;
&lt;td&gt;6.446601941747573&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;nightscout/cgm-remote-monitor&lt;/td&gt;
&lt;td&gt;1096&lt;/td&gt;
&lt;td&gt;178&lt;/td&gt;
&lt;td&gt;38.23&lt;/td&gt;
&lt;td&gt;173.24582338902147&lt;/td&gt;
&lt;td&gt;58.885441527446304&lt;/td&gt;
&lt;td&gt;4.985680190930788&lt;/td&gt;
&lt;td&gt;734.1299852289512&lt;/td&gt;
&lt;td&gt;363.3338257016248&lt;/td&gt;
&lt;td&gt;11.124076809453472&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id=&#34;it-is-proven&#34;&gt;IT IS PROVEN!!!&lt;/h3&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/science.gif&#34; alt=&#34;science&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Spontaneous Combustion</title>
                <link>https://blog.jessfraz.com/post/spontaneous-combustion/</link>
                <pubDate>Wed, 03 Aug 2016 11:28:47 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/spontaneous-combustion/</guid>
                    <description>&lt;p&gt;This blog post is going to be a bit different. After watching Stranger Things,
my friend and I started discussing scary movies from our childhood. I couldn&amp;rsquo;t
help but remember a very specific strange thing that happened to me growing up.
I thought, hey, this would be a kinda weird blog post. So here it is.
The following events are factual.&lt;/p&gt;

&lt;p&gt;It was a hot, dry summer in July of 1995 in Phoenix, Arizona. We were getting
our house repainted. For those of you unfamiliar with the dry summers in Arizona,
it gets to be around 120°F, which is around 49°C. The painters had left their
varnish rags on top of the trash cans outside our house, which lined up directly
with the side of the house.&lt;/p&gt;

&lt;p&gt;A diagram is below and will come in handy later in the story.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/house-layout.jpg&#34; alt=&#34;house-layout&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Varnish rags are flammable, and in combination with the Arizona heat they
spontaneously combusted. I kid you not.&lt;/p&gt;

&lt;p&gt;The fire started at the trash cans and reached all the way to my parents&amp;rsquo;
bedroom, which at the time had this horrendous green carpet.&lt;/p&gt;

&lt;p&gt;Fortunately no one was home at the time. My sister and I were spending the night
at our friend&amp;rsquo;s house. My mom was busy elsewhere. My dad had the strange
experience of driving home while fire truck after fire truck passed him, heading
in the same direction. He was the first to find out.&lt;/p&gt;

&lt;p&gt;The next morning, he came to our sleepover. The very sight of him at our
friend&amp;rsquo;s breakfast table was unusual to say the least. My sister and I knew
something was wrong. He explained what happened and that we needed to go
shopping for new clothes. It was surreal. And the only thing I could think
about was how I had left my favorite teddy bears behind with an &amp;ldquo;I owe
you&amp;rdquo; note promising they could come to the next sleepover, which of course
would never happen. I had abandoned them in my room, which was so unfortunately
placed right next to the exterior wall where the trash cans were.&lt;/p&gt;

&lt;p&gt;This story is interesting for, of course, the moral lesson I learned at a young
age: material objects don&amp;rsquo;t matter; the relationships with people do. But
there was also something rather creepy that happened with regard to the fire.&lt;/p&gt;

&lt;p&gt;The diagram is important here.&lt;/p&gt;

&lt;p&gt;Growing up, my parents always made us clean out our closets over the summer. My
sister and I had just done that. In this massive purge of items that had fallen
into the closet abyss throughout the year, we also threw away our Sunday school
books.&lt;/p&gt;

&lt;p&gt;Now I&amp;rsquo;m not a
religious person; on my Facebook page my religion is literally &amp;ldquo;hugs&amp;rdquo;,
so take this as you want. Those books were in the trash cans that spontaneously
combusted. The pages were left burned all over the house. But one page made its
way all the way to the front door of the house: the Ten Commandments.
My parents framed the perfectly fire-scorched page, and it&amp;rsquo;s in their house today.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>10 LDFLAGS I Love</title>
                <link>https://blog.jessfraz.com/post/top-10-favorite-ldflags/</link>
                <pubDate>Mon, 18 Jul 2016 13:00:14 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/top-10-favorite-ldflags/</guid>
                    <description>&lt;p&gt;Hello and welcome to what will become the most sarcastic post on my blog.
This is going to be a series of &amp;ldquo;buzzfeed&amp;rdquo; style programming articles and after
this post I very happily pass the baton to &lt;a href=&#34;https://twitter.com/FiloSottile/status/754774945847209988&#34;&gt;Filippo Valsorda&lt;/a&gt; to continue. And I urge you to write your own as well.&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;&lt;a href=&#34;https://twitter.com/jessfraz&#34;&gt;@jessfraz&lt;/a&gt; &amp;quot;We asked Jess for her top 10 ldflags; you won&amp;#39;t believe what happened next&amp;quot;&lt;/p&gt;&amp;mdash; adg (@enneff) &lt;a href=&#34;https://twitter.com/enneff/status/754737186960838656&#34;&gt;July 17, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;So here they are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-static&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I would be an embarrassment to myself if I didn&amp;rsquo;t start with the flag that
tells the linker not to link against shared libraries. This is the best flag.
STATIC BINARIES FTW.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--export-dynamic&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag tells the linker to add all the symbols to the dynamic symbol
table. This is especially important if you want to do &lt;a href=&#34;https://github.com/jessfraz/macgyver&#34;&gt;&amp;ldquo;The Macgyver of Dlopening&amp;rdquo;&lt;/a&gt; and &lt;code&gt;dlopen&lt;/code&gt; yourself.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--whole-archive&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is another flag that comes in handy when you want to &lt;code&gt;dlopen&lt;/code&gt;
yourself. See, most linkers will only pull in the things they know they need.
But with this flag, you tell the linker &amp;ldquo;YOLO, I want it all&amp;rdquo; so that
later you can &lt;code&gt;dlopen&lt;/code&gt; yourself with a symbol that was never actually
referenced until runtime. FUN!&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--no-whole-archive&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag unsets the &lt;code&gt;--whole-archive&lt;/code&gt; flag, which is nice when you
only want the whole archive of one library but not of all the others you are
linking against.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--print-map&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag is just dope. It prints a link map to stdout. This gives you
information about object files, common symbols, and the values assigned to
symbols.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--strip-all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag strips all the symbol information from the artifact produced. If,
say, you are a few KB/MB away from your binary fitting on a floppy disk, this
flag is your friend.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--strip-debug&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag is very similar to &lt;code&gt;--strip-all&lt;/code&gt; except it only strips the debug
symbol information. This all really depends on how much you need to shave
off to fit that binary on a floppy disk.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--trace&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag is great for debugging. It prints the names of the input files as
&lt;code&gt;ld&lt;/code&gt; processes them.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;-nostdlib&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag makes the linker search only the library directories you explicitly
specify with &lt;code&gt;--library-path&lt;/code&gt; or &lt;code&gt;-L&lt;/code&gt;. This is nice when
&lt;em&gt;someone&lt;/em&gt; completely messes with your library path and the world is
burning and you just want to link against those things you put in some random
directory somewhere.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;--unresolved-symbols=ignore-all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This flag is helpful for telling the linker you DGAF about unresolved
symbols and that it should stop yelling at you.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
</description>
                </item>
                    
            <item>
                <title>The Art of Closing</title>
                <link>https://blog.jessfraz.com/post/the-art-of-closing/</link>
                <pubDate>Sat, 04 Jun 2016 08:09:26 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-art-of-closing/</guid>
                    <description>&lt;p&gt;Being an open source software maintainer is hard. The following post is geared
towards maintainers and not contributors. If you are a new contributor to
open source I would stop reading now because I don&amp;rsquo;t want you to get the wrong
idea or discourage you. Tons of patch requests get merged per day, but this is
going to focus on the ones that don&amp;rsquo;t.&lt;/p&gt;

&lt;p&gt;I&amp;rsquo;ve talked to maintainers from several different open source projects
(Mesos, Kubernetes, Chromium), and they all agree that one of the hardest parts of
being a maintainer is saying &amp;ldquo;No&amp;rdquo; to patches you don&amp;rsquo;t want.&lt;/p&gt;

&lt;p&gt;To quote some very smart people I&amp;rsquo;ve worked with in the past:&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;One of the numerous examples of information asymmetry in open source: contributors put effort in a pet PR, but maintainers manage cattle. 🕒&lt;/p&gt;&amp;mdash; Arnaud Porterie (@icecrime) &lt;a href=&#34;https://twitter.com/icecrime/status/733682351943733249&#34;&gt;May 20, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;&lt;blockquote class=&#34;twitter-tweet&#34; data-lang=&#34;en&#34;&gt;&lt;p lang=&#34;en&#34; dir=&#34;ltr&#34;&gt;Rule #1 of open-source: no is temporary, yes is forever.&lt;/p&gt;&amp;mdash; Solomon Hykes (@solomonstre) &lt;a href=&#34;https://twitter.com/solomonstre/status/715277134978113536&#34;&gt;March 30, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src=&#34;//platform.twitter.com/widgets.js&#34; charset=&#34;utf-8&#34;&gt;&lt;/script&gt;&lt;/p&gt;

&lt;p&gt;To make this rather unpleasant experience of closing someone&amp;rsquo;s patch request
easier, I have a few ways of going about it. Now, of course, I am no expert in this
area, but on the Docker project we have
&lt;a href=&#34;https://github.com/icecrime/vossibility-stack&#34;&gt;stats for just about everything&lt;/a&gt;.
I &lt;em&gt;might&lt;/em&gt; have used this data to make an &amp;ldquo;Ultimate
Dream Killers&amp;rdquo; chart of the maintainers who closed (without merging) the most
pull requests, AND I &lt;em&gt;might&lt;/em&gt; have been #1 on this chart for some time.&lt;/p&gt;

&lt;p&gt;None of the suggestions below are going to save you from that person
hate-mailing you because you didn&amp;rsquo;t merge their patch. But hey, anything helps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The ego stroke and close.&lt;/p&gt;

&lt;p&gt;People love hearing how awesome they are. They also love hearing how
awesome their code is. With this option, you use that to your advantage.
Here&amp;rsquo;s an example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Thanks so much for spending time on this amazing patch. We really
appreciate it. However I do not think this is something we want to add
right now, because of yadda yadda but in the future this can change. Thanks so much!&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AAAANNNDD close.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Close early.&lt;/p&gt;

&lt;p&gt;No one wants to do 300 rebases before learning that the design of their
patch won&amp;rsquo;t be approved. If you know there is no way you will ever accept
their patch, close it right then. Making someone wait and/or do more work
while waiting will just make the situation worse when you &lt;em&gt;do&lt;/em&gt; close it.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The &amp;ldquo;I kinda like this but it&amp;rsquo;s just not right&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;If someone creates a new feature that you might like if it were done differently,
but the current implementation has no way of being merged (maybe because of design
flaws, etc.), I believe it&amp;rsquo;s best to close it while leaving the door open for the person
to open another patch with the desired design. Here&amp;rsquo;s an example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi X,
We really appreciate you taking the time to make this patch. However the
design was not discussed prior to writing it. We do see potential in what
you are trying to build, but we think it would be more effective as
blah, blah, and blah.
We are going to close this but would love to see you open a patch that
takes the above direction. Thanks, this could really be an awesome feature!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;See how the ego stroke comes in handy here too :). AAAAAAND close.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The carry.&lt;/p&gt;

&lt;p&gt;Carrying a patch is when a maintainer takes a user&amp;rsquo;s patch and adds
edits on top of it so it is mergeable. On the Docker project we do this
every so often, and for various reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The contributor disappeared, but the patch is viable and just needs some edits.&lt;/li&gt;
&lt;li&gt;The patch is like #3 above, but it would be easier if we just did the
implementation ourselves.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It&amp;rsquo;s important to note that if you are going to carry a patch, DO NOT close the
original patch request until you have opened your carry patch. You
obviously need to let the contributor know beforehand that you will be carrying
it so they don&amp;rsquo;t waste their time. Also be sure to keep their
original commits and add yours on top so the right people get
credit :)&lt;/p&gt;

&lt;p&gt;Here&amp;rsquo;s an example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi X, we really like your patch, but since there hasn&amp;rsquo;t been a response in
Y days we are going to carry this patch and make the edits ourselves. We
will link to the new pull request here when it&amp;rsquo;s ready.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The maintainer works on the patch&amp;hellip; opens the new patch&amp;hellip; and then you can close the
original patch request. You see, if you close it before opening the new one, the
contributor will assume you are lying and are never going to do it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These are just a few of the techniques we&amp;rsquo;ve used in the past. If you
are a maintainer of a project, I hope they are helpful for you, but I would love to
hear your tips as well.&lt;/p&gt;

&lt;p&gt;Happy Maintaining and always be closing!&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/always-be-closing.gif&#34; alt=&#34;always-be-closing&#34; /&gt;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Getting Towards Real Sandbox Containers</title>
                <link>https://blog.jessfraz.com/post/getting-towards-real-sandbox-containers/</link>
                <pubDate>Sun, 01 May 2016 12:17:58 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/getting-towards-real-sandbox-containers/</guid>
                    <description>

&lt;p&gt;Containers are all the rage right now.&lt;/p&gt;

&lt;p&gt;At the very core of containers are the same Linux primitives that are also used to create application sandboxes.
The most common sandbox you may be familiar with is the Chrome sandbox. You can read in detail about the Chrome sandbox
here: &lt;a href=&#34;https://chromium.googlesource.com/chromium/src/+/master/docs/linux_sandboxing.md&#34;&gt;chromium.googlesource.com/chromium/src/+/master/docs/linux_sandboxing.md&lt;/a&gt;.
The relevant aspect for this article is the fact that it uses user namespaces and seccomp. Other deprecated features include AppArmor
and SELinux. Sound familiar? That&amp;rsquo;s because containers, as you&amp;rsquo;ve come to know them today, share the same features.&lt;/p&gt;

&lt;h2 id=&#34;why-are-containers-not-currently-being-considered-a-sandbox&#34;&gt;Why are containers not currently being considered a &amp;ldquo;sandbox&amp;rdquo;?&lt;/h2&gt;

&lt;p&gt;One of the key differences between how you run Chrome
and how you run a container is the privileges used. Chrome runs as your own unprivileged user. Most containers (be it docker, runc, or rkt) run as
root.&lt;/p&gt;

&lt;p&gt;Yes, we all know that containers run unprivileged processes; but creating and running the containers themselves requires root privileges at some point.&lt;/p&gt;

&lt;h2 id=&#34;how-can-we-run-containers-as-an-unprivileged-user&#34;&gt;How can we run containers as an unprivileged user?&lt;/h2&gt;

&lt;p&gt;Easy! With user namespaces, you might say. But it&amp;rsquo;s not exactly that simple. One of the main differences between the Chrome
sandbox and containers is cgroups. Cgroups control what a process can use, whereas namespaces
control what a process can see. Containers have cgroup resource management built in. Creating cgroups as an unprivileged
user is a bit difficult, especially device control groups.&lt;/p&gt;

&lt;p&gt;If we ignore, for the time being, this huge tire fire that is creating cgroups as an unprivileged user, then
unprivileged containers are easy. User namespaces allow us to create all the namespaces without any further privileges.
The one key caveat being that the &lt;code&gt;{uid,gid}_map&lt;/code&gt; must have the current host user mapped to the container uid that the process
will be run as. The size of the &lt;code&gt;{uid,gid}_map&lt;/code&gt; can also only be 1. For example if you are running as uid 1000 to spawn the container, your
&lt;code&gt;{uid,gid}_map&lt;/code&gt; for the process would be &lt;code&gt;0 1000 1&lt;/code&gt; for uid 0 in the container. The 1 there refers to the size.&lt;/p&gt;
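&lt;p&gt;You can poke at this without any container runtime at all. Assuming your host
allows unprivileged user namespaces and your uid is 1000, util-linux&amp;rsquo;s
&lt;code&gt;unshare&lt;/code&gt; will write exactly that kind of single-entry map for you:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# map your current uid to uid 0 inside a fresh user namespace
$ unshare --user --map-root-user id -u
0

# and the map itself: container uid 0, host uid 1000, size 1
$ unshare --user --map-root-user cat /proc/self/uid_map
         0       1000          1
&lt;/code&gt;&lt;/pre&gt;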

&lt;h2 id=&#34;how-is-this-different-than-the-user-namespace-support-currently-in-docker&#34;&gt;How is this different than the user namespace support currently in Docker?&lt;/h2&gt;

&lt;p&gt;This is quite different, but for very good reason. In Docker, by default, when the remapped user is created,
the &lt;code&gt;/etc/subuid&lt;/code&gt; and &lt;code&gt;/etc/subgid&lt;/code&gt; files are populated with a contiguous 65536 length range of subordinate user and group
IDs, starting at an offset based on prior entries in those files. Docker&amp;rsquo;s implementation has a larger range of users that can
exist in the container as well as having a more &amp;ldquo;anonymous&amp;rdquo; mapped host user.
If you want to read more about the user namespace implementation
in Docker I would check out &lt;a href=&#34;https://integratedcode.us/2015/10/13/user-namespaces-have-arrived-in-docker/&#34;&gt;@estesp&amp;rsquo;s blog&lt;/a&gt; or
the &lt;a href=&#34;https://docs.docker.com/engine/reference/commandline/daemon/#daemon-user-namespace-options&#34;&gt;docker docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;poc-or-gtfo&#34;&gt;POC or GTFO&lt;/h2&gt;

&lt;p&gt;As a proof of concept of unprivileged containers without cgroups I made &lt;a href=&#34;https://github.com/jessfraz/binctr&#34;&gt;binctr&lt;/a&gt;, which
spawned a
&lt;a href=&#34;https://groups.google.com/a/opencontainers.org/forum/#!topic/dev/yutVaSLcqWI&#34;&gt;mailing list thread for implementing this in runc/libcontainer&lt;/a&gt;.
&lt;a href=&#34;https://github.com/cyphar&#34;&gt;Aleksa Sarai&lt;/a&gt; has started on a few patches, and this might actually become a reality pretty soon!&lt;/p&gt;

&lt;p&gt;Update: it took almost a year, but this was &lt;a href=&#34;https://github.com/opencontainers/runc/pull/774&#34;&gt;added to runc&lt;/a&gt; in Mar 2017.&lt;/p&gt;

&lt;h2 id=&#34;where-does-this-put-us-in-the-sandbox-landscape&#34;&gt;Where does this put us in the &amp;ldquo;sandbox&amp;rdquo; landscape?&lt;/h2&gt;

&lt;p&gt;With this implementation we get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;namespaces&lt;/li&gt;
&lt;li&gt;apparmor&lt;/li&gt;
&lt;li&gt;selinux&lt;/li&gt;
&lt;li&gt;seccomp&lt;/li&gt;
&lt;li&gt;capabilities limiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;all created by an unprivileged user!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sandboxes should be very application-specific, using custom
AppArmor profiles, Seccomp profiles and the like. A generic container will
never be equivalent to a sandbox because it&amp;rsquo;s too universal to really lock down
the application.&lt;/p&gt;

&lt;p&gt;Containers are not going to be the answer to preventing your application from
being compromised, but they &lt;em&gt;can&lt;/em&gt; limit the damage from a compromise. The world
an attacker might see from inside a very strict container with custom
AppArmor/Seccomp profiles greatly differs from the one they would see without
containers. With namespaces we limit what the application can see: network,
mounts, processes, etc. And with cgroups we can further limit
what the attacker can use, be it a large amount of memory, cpu, or even a fork
bomb.&lt;/p&gt;

&lt;h2 id=&#34;but-what-about-cgroups&#34;&gt;But what about cgroups?&lt;/h2&gt;

&lt;p&gt;We &lt;em&gt;can&lt;/em&gt; set up cgroups for memory, blkio, cpu, and
pids with an unprivileged user as long as the cgroup subsystem has been chowned to the
correct user. Devices are a different story, though. Considering the fact that you
cannot mknod in a user namespace anyway, it is not the worst thing in the world.&lt;/p&gt;
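&lt;p&gt;The chowning itself is nothing magical, it&amp;rsquo;s just filesystem permissions.
A sketch, assuming cgroup v1 paths and uid 1000 as the stand-in for your
unprivileged user:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# carve out a subtree of the memory controller and hand it to uid 1000
$ sudo mkdir /sys/fs/cgroup/memory/unpriv
$ sudo chown -R 1000:1000 /sys/fs/cgroup/memory/unpriv

# now that user can create child cgroups and set limits without root
$ echo 268435456 &gt; /sys/fs/cgroup/memory/unpriv/memory.limit_in_bytes
&lt;/code&gt;&lt;/pre&gt;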

&lt;p&gt;Let&amp;rsquo;s not completely rule out the devices cgroup, though. In the future this might be entirely possible. In kernels 4.6+, there is a new
cgroup namespace. For now all it does is mask the cgroups path inside the container, so it is not really useful
for unprivileged containers yet. But in the future maybe it &lt;em&gt;could&lt;/em&gt; be (if we ask nice enough?).&lt;/p&gt;

&lt;h2 id=&#34;what-is-the-awesome-sauce-we-all-gain-from-this&#34;&gt;What is the awesome sauce we all gain from this?&lt;/h2&gt;

&lt;p&gt;Well, judging by the original GitHub issue about unprivileged runc containers, the largest group of commenters is from
the scientific community, who are restricted from running certain programs as root.&lt;/p&gt;

&lt;p&gt;But there is so much more that this can be used for. One of my most anticipated use cases is the work being done by
&lt;a href=&#34;https://blogs.gnome.org/alexl/&#34;&gt;Alex Larsson&lt;/a&gt; on &lt;a href=&#34;https://wiki.gnome.org/Projects/SandboxedApps&#34;&gt;xdg-app&lt;/a&gt; to run applications in sandboxes.
Definitely check out &lt;a href=&#34;https://github.com/projectatomic/bubblewrap&#34;&gt;bubblewrap&lt;/a&gt; if you are interested in this.&lt;/p&gt;

&lt;p&gt;Also &lt;a href=&#34;https://subgraph.com/&#34;&gt;subgraph&lt;/a&gt;, the container-based OS which specializes in security and privacy, has this same idea in mind.&lt;/p&gt;

&lt;p&gt;I am a huge fan of running desktop applications in containers as well as solving multi-tenancy for running containers.
I definitely hope to help evolve containers into real sandboxes in the future.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>The Brutally Honest Guide to Docker Graphdrivers</title>
                <link>https://blog.jessfraz.com/post/the-brutally-honest-guide-to-docker-graphdrivers/</link>
                <pubDate>Sat, 02 Apr 2016 11:47:47 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/the-brutally-honest-guide-to-docker-graphdrivers/</guid>
                    <description>

&lt;p&gt;Sup, let me give you fair warning here. Everything contained in this post is
&lt;em&gt;my&lt;/em&gt; opinion so don&amp;rsquo;t go getting your panties all in a knot on Hacker News
because you don&amp;rsquo;t agree with me. Honestly, I couldn&amp;rsquo;t care less, because that&amp;rsquo;s the
thing about &lt;em&gt;my opinion&lt;/em&gt;, it&amp;rsquo;s mine.&lt;/p&gt;

&lt;p&gt;I am going to give you my honest and, dare I say it, &amp;ldquo;blunt&amp;rdquo; opinion about each
of the Docker graphdrivers so you can decide for yourself which one is the best
one for you. None of them is perfect; each has its flaws, and I will be laying those
out. Let&amp;rsquo;s begin.&lt;/p&gt;
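&lt;p&gt;Before we start, you can check which graphdriver you are currently running
(the output below is just what &lt;em&gt;my&lt;/em&gt; box says; yours will vary):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker info | grep Storage
Storage Driver: overlay
&lt;/code&gt;&lt;/pre&gt;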

&lt;h3 id=&#34;overlay&#34;&gt;Overlay&lt;/h3&gt;

&lt;p&gt;Overlayfs was added in the 3.18 kernel. This is important to note because if
you are running overlay on a kernel older than 3.18 you are either:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Not running the same overlay.&lt;/li&gt;
&lt;li&gt;Running a kernel with overlayfs backported onto it, which is what we call
a &amp;ldquo;frankenkernel&amp;rdquo;. Frankenkernels are not to be trusted. This is not to say
it &lt;em&gt;won&amp;rsquo;t&lt;/em&gt; work, hey it &lt;em&gt;might&lt;/em&gt; work great, but it&amp;rsquo;s not to be trusted.&lt;/li&gt;
&lt;/ol&gt;
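&lt;p&gt;A quick way to check which situation you are in (the grep only finds something
if overlayfs is built in or the module is loaded, hence the modprobe):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ uname -r                         # 3.18 or newer and you are golden
$ sudo modprobe overlay
$ grep overlay /proc/filesystems   # prints a line if the kernel knows overlayfs
&lt;/code&gt;&lt;/pre&gt;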

&lt;p&gt;Overlay is great but you need a recent kernel. There are also some super
obscure kernel bugs with regard to sockets or certain python packages
&lt;a href=&#34;https://github.com/docker/docker/issues/12080&#34;&gt;docker/docker#12080&lt;/a&gt;. But
I will say personally I use overlay, I have not hit these bugs recently and
I have all my &lt;a href=&#34;https://github.com/jessfraz/dockerfiles&#34;&gt;100+ dockerfiles&lt;/a&gt;
running as continuous builds on my server with overlay and they all work.&lt;/p&gt;

&lt;h3 id=&#34;aufs&#34;&gt;Aufs&lt;/h3&gt;

&lt;p&gt;Aufs is another great one. But it is not in the kernel by default, which blows.
On Ubuntu/Debian distros this is as easy as installing the kernel extras
package, but on other distros it might not be as simple.&lt;/p&gt;

&lt;h3 id=&#34;btrfs&#34;&gt;Btrfs&lt;/h3&gt;

&lt;p&gt;Btrfs is great too, but you need to partition the disk you will use for
&lt;code&gt;/var/lib/docker&lt;/code&gt; as btrfs first. This is kind of a hurdle that I don&amp;rsquo;t
think a lot of people are willing to jump.&lt;/p&gt;

&lt;h3 id=&#34;zfs&#34;&gt;Zfs&lt;/h3&gt;

&lt;p&gt;Zfs is another good one. Of course, like btrfs, it takes some setup and installing
&lt;code&gt;zfs.ko&lt;/code&gt; on your system. But this driver might become a whole lot more
popular if Ubuntu 16.04 ships with zfs support.&lt;/p&gt;

&lt;h3 id=&#34;devicemapper&#34;&gt;Devicemapper&lt;/h3&gt;

&lt;p&gt;Honestly it makes me super disappointed to say this, but buyer beware. Hey
on the plus side&amp;hellip;. it&amp;rsquo;s in the kernel. You must must must have all the
&lt;a href=&#34;https://github.com/docker/docker/blob/master/daemon/graphdriver/devmapper/README.md&#34;&gt;devicemapper options&lt;/a&gt;
set up perfectly or you will find yourself only being able to launch ~2
containers.&lt;/p&gt;

&lt;p&gt;Let me tell you a story.&lt;/p&gt;

&lt;p&gt;My mom once asked her friend for her famous chicken enchilada recipe so she
could make it herself. The friend gave the recipe but left out one key
ingredient so that my mom&amp;rsquo;s never tasted just right. There was always something
off about it.&lt;/p&gt;

&lt;p&gt;This is how I think of devicemapper.&lt;/p&gt;

&lt;p&gt;It works on RedHat.&lt;/p&gt;

&lt;h3 id=&#34;vfs&#34;&gt;Vfs&lt;/h3&gt;

&lt;p&gt;I sure hope to hell you are just testing something or clinically insane.&lt;/p&gt;

&lt;p&gt;That&amp;rsquo;s about all. Thanks for reading my opinion if you even made it this far.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>IPs for all the Things</title>
                <link>https://blog.jessfraz.com/post/ips-for-all-the-things/</link>
                <pubDate>Thu, 28 Jan 2016 13:00:14 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/ips-for-all-the-things/</guid>
                    <description>&lt;p&gt;This is so cool I can hardly stand it.&lt;/p&gt;

&lt;p&gt;In Docker 1.10, the awesome libnetwork team added the ability to specify
a specific IP for a container. If you want to see the pull request it&amp;rsquo;s here:
&lt;a href=&#34;https://github.com/docker/docker/pull/19001&#34;&gt;docker/docker#19001&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have an IP block on OVH for my server with 16 extra public IPs. I totally use
these for good and not for &lt;a href=&#34;https://github.com/jessfraz/tupperwarewithspears&#34;&gt;evil&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But using these with Docker containers previously meant hackery with the
awesome &lt;a href=&#34;https://github.com/jpetazzo/pipework&#34;&gt;pipework&lt;/a&gt;. Or, even worse, some
homegrown Jess bash scripts.&lt;/p&gt;

&lt;p&gt;But now MY LIFE JUST GOT SO MUCH EASIER. Let me show you how:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic

# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx

# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2

# BOOM golden
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It&amp;rsquo;s so amazing I can rewrite
&lt;a href=&#34;https://github.com/jessfraz/tupperwarewithspears&#34;&gt;tupperwarewithspears&lt;/a&gt; to
use this :D&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Runc Containers on the Desktop</title>
                <link>https://blog.jessfraz.com/post/runc-containers-on-the-desktop/</link>
                <pubDate>Tue, 19 Jan 2016 02:17:14 +0000</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/runc-containers-on-the-desktop/</guid>
                    <description>

&lt;p&gt;Almost exactly a year ago, I wrote a post about running
&lt;a href=&#34;https://blog.jessfraz.com/post/docker-containers-on-the-desktop/&#34;&gt;Docker Containers on the Desktop&lt;/a&gt;.
Well it is a new year, and I have ended up converting all my docker containers to
&lt;a href=&#34;https://github.com/opencontainers/runc&#34;&gt;runc&lt;/a&gt; configs, so it&amp;rsquo;s the perfect time
for a new blog post.&lt;/p&gt;

&lt;p&gt;For those of you unfamiliar with the Open Container Initiative you should check
out &lt;a href=&#34;https://www.opencontainers.org/&#34;&gt;opencontainers.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why the switch?&lt;/em&gt; you ask&amp;hellip; well let me explain.&lt;/p&gt;

&lt;p&gt;Our fellow Docker maintainer and pal &lt;a href=&#34;https://twitter.com/estesp&#34;&gt;Phil Estes&lt;/a&gt;
made an awesome patch to add user namespaces to Docker.&lt;/p&gt;

&lt;p&gt;Now me, being the completely insane containerizer that I am, desperately wanted to
run all my crazy sound/video device mounting containers in user namespaces.&lt;/p&gt;

&lt;p&gt;Well the way this could work is by having a custom &lt;code&gt;gid_map&lt;/code&gt; for the &lt;code&gt;audio&lt;/code&gt; and
&lt;code&gt;video&lt;/code&gt; groups to map to the host groups so we can have permission to access
these devices in the container. In layman&amp;rsquo;s terms, I basically wanted to poke a
teeny tiny map in the user namespace to be able to have permission to use my sound
and video devices.&lt;/p&gt;

&lt;p&gt;Obviously this was not the design of the feature, but since &lt;code&gt;runc&lt;/code&gt; exposes the
&lt;code&gt;uidMappings&lt;/code&gt; and &lt;code&gt;gidMappings&lt;/code&gt;, I knew I could have the power to do as I please.
This is the awesome thing about &lt;code&gt;runc&lt;/code&gt;. You, the user, have all the control.&lt;/p&gt;

&lt;p&gt;So for chrome, this is what you get for mappings:
&lt;a href=&#34;https://github.com/jessfraz/containers/blob/master/chrome/config.json#L223&#34;&gt;github.com/jessfraz/containers:chrome/config.json#L223&lt;/a&gt;.
If you look closely, or know what you are looking at, you can see group &lt;code&gt;29&lt;/code&gt; and &lt;code&gt;44&lt;/code&gt;
are mapped to the same group ids as the host.&lt;/p&gt;
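&lt;p&gt;If you don&amp;rsquo;t feel like clicking through, here is a trimmed sketch of what those
mappings look like in the config (uid/gid 1000 here is an assumption; use your own
host user):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;&#34;gidMappings&#34;: [
    { &#34;hostID&#34;: 1000, &#34;containerID&#34;: 0, &#34;size&#34;: 1 },
    { &#34;hostID&#34;: 29, &#34;containerID&#34;: 29, &#34;size&#34;: 1 },
    { &#34;hostID&#34;: 44, &#34;containerID&#34;: 44, &#34;size&#34;: 1 }
]
&lt;/code&gt;&lt;/pre&gt;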

&lt;p&gt;Then you can do cool things like listen to Taylor Swift in a container with a
user namespace.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/chrome-userns.png&#34; alt=&#34;chrome-userns&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Pretty cool, right? So I went all OCD on this, like most things I encounter, and I
converted &lt;em&gt;all&lt;/em&gt; my containers. Obviously I found a way to generate them.&lt;/p&gt;

&lt;h3 id=&#34;riddler&#34;&gt;Riddler&lt;/h3&gt;

&lt;p&gt;Introducing &lt;a href=&#34;https://github.com/jessfraz/riddler&#34;&gt;github.com/jessfraz/riddler&lt;/a&gt;!
&lt;code&gt;riddler&lt;/code&gt; will take a running/stopped docker container and convert the inspect information
into the &lt;a href=&#34;https://github.com/opencontainers/specs&#34;&gt;oci spec&lt;/a&gt;
(which can be run by &lt;code&gt;runc&lt;/code&gt;, or any other oci compatible tool).&lt;/p&gt;

&lt;p&gt;It has some opinionated features, in that it will always try to set up a &lt;code&gt;gid_map&lt;/code&gt;
that works with your devices. You can also pass custom hooks to automatically add
to the runc config as &lt;code&gt;prestart&lt;/code&gt;, &lt;code&gt;poststart&lt;/code&gt;, or &lt;code&gt;poststop&lt;/code&gt; hooks, which leads
me to the next tool I built.&lt;/p&gt;

&lt;h3 id=&#34;netns&#34;&gt;Netns&lt;/h3&gt;

&lt;p&gt;Say hello to &lt;a href=&#34;https://github.com/jessfraz/netns&#34;&gt;github.com/jessfraz/netns&lt;/a&gt;!
So you want your runc containers to have networking, eh? How about something super
simple like a bridge? &lt;code&gt;netns&lt;/code&gt; does just that. It sets up a bridge network
for all your runc containers when added via the &lt;code&gt;prestart&lt;/code&gt; hook.&lt;/p&gt;
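&lt;p&gt;Hooking it in is one stanza in the runc config (the binary path is just
wherever you installed &lt;code&gt;netns&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;&#34;hooks&#34;: {
    &#34;prestart&#34;: [
        { &#34;path&#34;: &#34;/usr/local/bin/netns&#34; }
    ]
}
&lt;/code&gt;&lt;/pre&gt;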

&lt;p&gt;It&amp;rsquo;s actually super simple code as well thanks to the awesome
&lt;a href=&#34;https://github.com/vishvananda/netlink&#34;&gt;&lt;code&gt;netlink&lt;/code&gt; pkg&lt;/a&gt; from
&lt;a href=&#34;https://github.com/vishvananda&#34;&gt;vishvananda&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;netns&lt;/code&gt; even saves the ip for the container in a &lt;code&gt;.ip&lt;/code&gt; file in the directory with
your config. Then other hooks can use this to do other things. For instance I use
the &lt;a href=&#34;https://github.com/cbednarski/hostess&#34;&gt;&lt;code&gt;hostess&lt;/code&gt; cli&lt;/a&gt; to then add an entry to
my host&amp;rsquo;s &lt;code&gt;/etc/hosts&lt;/code&gt; file, so I don&amp;rsquo;t have to remember the ip for the container
when I want to reach it.&lt;/p&gt;

&lt;p&gt;You can find all my hook scripts in
&lt;a href=&#34;https://github.com/jessfraz/containers/tree/master/hack/scripts&#34;&gt;github.com/jessfraz/containers:hack/scripts&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;magneto&#34;&gt;Magneto&lt;/h3&gt;

&lt;p&gt;The last tool I made was a copy of &lt;code&gt;docker stats&lt;/code&gt; for &lt;code&gt;runc&lt;/code&gt;. But what I really wanted
was the new &lt;code&gt;pids&lt;/code&gt; cgroup stats that &lt;a href=&#34;https://github.com/cyphar&#34;&gt;Aleksa Sarai&lt;/a&gt; added
to the kernel and runc (and soon docker ;).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;runc&lt;/code&gt; has a command &lt;code&gt;runc events&lt;/code&gt; which outputs json stats in an interval. All you have
to do is pipe that to &lt;a href=&#34;https://github.com/jessfraz/magneto&#34;&gt;magneto&lt;/a&gt; to get the awesome ux.&lt;/p&gt;

&lt;p&gt;The following is for my chrome container:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ sudo runc events | magneto
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/magneto.png&#34; alt=&#34;magneto&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;all-the-configs&#34;&gt;All the configs&lt;/h3&gt;

&lt;p&gt;If you are interested in all the configs for my containers, checkout
&lt;a href=&#34;https://github.com/jessfraz/containers&#34;&gt;github.com/jessfraz/containers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I even included a &lt;a href=&#34;https://github.com/jessfraz/containers/blob/master/runc%40.service&#34;&gt;&lt;code&gt;systemd&lt;/code&gt; service file&lt;/a&gt;
that can easily run any container (without a tty) in this directory via:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ sudo systemctl start runc@foldername

# for example:
$ sudo systemctl start runc@chrome
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Keep in mind that since these are generated, a lot of the filepaths are hardcoded for things on
&lt;em&gt;my&lt;/em&gt; host. So if you try to run these and you aren&amp;rsquo;t me, I don&amp;rsquo;t want to hear any whining.&lt;/p&gt;

&lt;p&gt;Happy namespacing!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Docker run all the things with user namespaces</title>
                <link>https://blog.jessfraz.com/post/docker-run-all-the-things-with-userns/</link>
                <pubDate>Fri, 08 Jan 2016 17:33:46 +0000</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/docker-run-all-the-things-with-userns/</guid>
                    <description>

&lt;p&gt;If you weren&amp;rsquo;t aware, user namespace support was added to Docker a while back in
the &amp;ldquo;Experimental&amp;rdquo; builds. But with the upcoming release of Docker Engine
1.10.0, &lt;a href=&#34;https://twitter.com/estesp&#34;&gt;Phil Estes&lt;/a&gt; is working on
&lt;a href=&#34;https://github.com/docker/docker/pull/19187&#34;&gt;moving it into stable&lt;/a&gt;. Now this
is all super exciting and blah blah blah, but what I am going to talk about
today is how I started running all the containers from my
&lt;a href=&#34;https://blog.jessfraz.com/post/docker-containers-on-the-desktop&#34;&gt;Docker Containers on the Desktop&lt;/a&gt; with
the new user namespace support. The containers/images in that post were already
doing some linux-y magic, but with a little more, they are perfect. I&amp;rsquo;m not
going to go through them all but I will go through some interesting ones,
including even how to run Docker-in-Docker.&lt;/p&gt;

&lt;h3 id=&#34;chrome&#34;&gt;Chrome&lt;/h3&gt;

&lt;p&gt;This one was shockingly easy. The only things I needed to add to my original
command were &lt;code&gt;--group-add video&lt;/code&gt; and &lt;code&gt;--group-add audio&lt;/code&gt;. Makes sense, right?
We obviously want to be a member of those groups to watch Taylor Swift music
videos.&lt;/p&gt;

&lt;p&gt;The full command is below. I even made a custom seccomp whitelist for chrome,
you can view it in my dotfiles repo: &lt;a href=&#34;https://github.com/jessfraz/dotfiles/blob/master/etc/docker/seccomp/chrome.json&#34;&gt;github.com/jessfraz/dotfiles&lt;/a&gt;. Seccomp will be shipped in 1.10 as well, along with a default whitelist! (But I digress; that is not the point of this blog post.)&lt;/p&gt;
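&lt;p&gt;If you have never looked inside one of these whitelists, the shape is dead
simple (this is a heavily trimmed sketch of the profile format, not chrome&amp;rsquo;s
actual syscall list):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;{
    &#34;defaultAction&#34;: &#34;SCMP_ACT_ERRNO&#34;,
    &#34;syscalls&#34;: [
        { &#34;name&#34;: &#34;read&#34;, &#34;action&#34;: &#34;SCMP_ACT_ALLOW&#34; },
        { &#34;name&#34;: &#34;write&#34;, &#34;action&#34;: &#34;SCMP_ACT_ALLOW&#34; },
        { &#34;name&#34;: &#34;open&#34;, &#34;action&#34;: &#34;SCMP_ACT_ALLOW&#34; }
    ]
}
&lt;/code&gt;&lt;/pre&gt;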

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/chrome/stable/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ docker run -d \
    --memory 3gb \
    -v /etc/localtime:/etc/localtime:ro \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v $HOME/Downloads:/root/Downloads \
    -v $HOME/.chrome:/data \
    -v /dev/shm:/dev/shm \
    --security-opt seccomp:/etc/docker/seccomp/chrome.json \
    --device /dev/snd \
    --device /dev/dri \
    --device /dev/video0 \
    --group-add audio \
    --group-add video \
    --name chrome \
    jess/chrome --user-data-dir=/data
&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id=&#34;notify-osd-and-irssi&#34;&gt;Notify-osd and Irssi&lt;/h3&gt;

&lt;p&gt;Now I have always run my notifications daemon in a container, because that
stuff is nasty to install, so many dependencies, ewwww. This one was a bit
trickier because it involves dbus, but it is a way cleaner solution than the way
I was originally running it.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/notify-osd/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ docker run -d \
    -v /etc/localtime:/etc/localtime:ro \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /etc \
    -v /home/user/.dbus \
    -v /home/user/.cache/dconf \
    -e DISPLAY=unix$DISPLAY \
    --name notify_osd \
    jess/notify-osd

# you can test with
$ docker exec -it notify_osd notify-send hello
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Not too bad, right? I am creating those volumes on run so that we can then share
them with our irssi container. This way when someone pings me I get
a notification, duh!&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/irssi/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-console&#34;&gt;$ docker run --rm -it \
    -v /etc/localtime:/etc/localtime:ro \
    -v $HOME/.irssi:/home/user/.irssi \
    --volumes-from notify_osd \
    -e DBUS_SESSION_BUS_ADDRESS=&amp;quot;unix:abstract=/home/user/.dbus/session-bus/$(docker exec notify_osd ls /home/user/.dbus/session-bus/)&amp;quot; \
    --name irssi \
    jess/irssi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So this is pretty simple as well, even considering that the real gross part is trying
to get the &lt;code&gt;DBUS_SESSION_BUS_ADDRESS&lt;/code&gt; from our &lt;code&gt;notify_osd&lt;/code&gt; container.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s get into the fun part.&lt;/p&gt;

&lt;h3 id=&#34;docker-in-docker&#34;&gt;Docker-in-Docker&lt;/h3&gt;

&lt;p&gt;When running the docker daemon with user namespace support, you cannot use
&lt;code&gt;docker run&lt;/code&gt; flags like &lt;code&gt;--privileged&lt;/code&gt;, &lt;code&gt;--net host&lt;/code&gt;, &lt;code&gt;--pid host&lt;/code&gt;, etc. The
reasons are pretty obvious, so I&amp;rsquo;m not going to get into it; if you want to
know more, RTFM.&lt;/p&gt;

&lt;p&gt;Okay so we can&amp;rsquo;t use &lt;code&gt;--privileged&lt;/code&gt;, but but but that&amp;rsquo;s how I run
docker-in-docker&amp;hellip; ok let&amp;rsquo;s think about it. What is &lt;code&gt;--privileged&lt;/code&gt; actually
doing? Well for starters it&amp;rsquo;s allowing all capabilities, but&amp;hellip; do we really
need them all? The answer is no: all we really need is &lt;code&gt;CAP_SYS_ADMIN&lt;/code&gt; and
&lt;code&gt;CAP_NET_ADMIN&lt;/code&gt;. That gets us pretty far, but we also need to disable the
default seccomp profile (because it blocks certain &lt;code&gt;clone&lt;/code&gt; args, &lt;code&gt;mount&lt;/code&gt;, and a bunch of
others). Lastly we need to run with a different apparmor profile so we can have
more capabilities as well.&lt;/p&gt;

&lt;p&gt;This leaves us with:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# flags reconstructed from the discussion above; the image name is
# illustrative, and the custom apparmor profile name is elided
$ docker run --rm -it \
    --cap-add SYS_ADMIN \
    --cap-add NET_ADMIN \
    --security-opt seccomp=unconfined \
    --security-opt apparmor=&amp;lt;custom-profile&amp;gt; \
    jess/docker
&lt;/code&gt;&lt;/pre&gt;
</description>
                </item>
                    
            <item>
                <title>How to use the new Docker Seccomp profiles</title>
                <link>https://blog.jessfraz.com/post/how-to-use-new-docker-seccomp-profiles/</link>
                <pubDate>Mon, 04 Jan 2016 23:21:07 +0000</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/how-to-use-new-docker-seccomp-profiles/</guid>
                    <description>&lt;p&gt;In case you missed it, we recently merged a &lt;a href=&#34;https://github.com/moby/moby/pull/18979&#34;&gt;default seccomp profile&lt;/a&gt; for Docker
containers. I urge you to try out the default seccomp profile, mostly so we can
rest easy knowing the defaults are sane and your containers work as before.
You can download the master version of Docker Engine from
&lt;a href=&#34;https://master.dockerproject.org&#34;&gt;master.dockerproject.org&lt;/a&gt; or
&lt;a href=&#34;https://experimental.docker.com&#34;&gt;experimental.docker.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We even have a doc describing the &lt;a href=&#34;https://github.com/jessfraz/docker/blob/52f32818df8bad647e4c331878fa44317e724939/docs/security/seccomp.md&#34;&gt;syscalls we purposely block&lt;/a&gt; and &lt;a href=&#34;https://github.com/jessfraz/docker/blob/6837cfc13cba842186a7261aa9bbd3a8755fd11e/docs/security/non-events.md&#34;&gt;security vulnerabilities the profile blocked&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But that&amp;rsquo;s not what this blog post is about. This post is about how you can
create your own custom seccomp profiles for your containers. And how to debug when
your profile is missing a syscall.&lt;/p&gt;

&lt;p&gt;So this is not the most sane thing in the world. I even tried, in the process,
to create a bash script that takes the output from strace, collects the
syscalls, and generates a profile. But like all tools of this sort (e.g.
&lt;code&gt;aa-genprof&lt;/code&gt;) it missed some; well, to be exact, it missed &lt;strong&gt;6&lt;/strong&gt;. That is no
small thing to debug, so this post is in the format: learn by example. I am
going to take you step by step through what I did.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wake up go to starbucks&amp;hellip; just kidding&amp;hellip; not that specific.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wanted to make a custom profile for my &lt;code&gt;chrome&lt;/code&gt; container.
I decided to get the syscalls it used by changing the entrypoint for my
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/chrome/stable/Dockerfile&#34;&gt;&lt;code&gt;chrome/Dockerfile&lt;/code&gt;&lt;/a&gt;
to &lt;code&gt;ENTRYPOINT [ &amp;quot;strace&amp;quot;, &amp;quot;-ff&amp;quot;, &amp;quot;google-chrome&amp;quot; ]&lt;/code&gt;. So the only things that
changed were wrapping the command in &lt;code&gt;strace&lt;/code&gt; and of course installing &lt;code&gt;strace&lt;/code&gt;
in the container. The &lt;code&gt;-ff&lt;/code&gt; option makes sure &lt;code&gt;strace&lt;/code&gt; follows forks, which is
essential for chrome because it forks a bunch of processes (&lt;em&gt;fun fact&lt;/em&gt;: each tab
is a process with its own PID namespace).&lt;/p&gt;

&lt;p&gt;Cool beans, moving on.&lt;/p&gt;

&lt;p&gt;So I used chrome the &lt;strong&gt;entire day&lt;/strong&gt; like this to create the most verbose
&lt;code&gt;strace&lt;/code&gt; output so I wouldn&amp;rsquo;t miss any syscalls.&lt;/p&gt;

&lt;p&gt;At the end of the day I saved this output into a file by running
&lt;code&gt;docker logs chrome &amp;gt; $HOME/chrome-strace.log 2&amp;gt;&amp;amp;1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then I used the world&amp;rsquo;s most janky bash script to generate a profile:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;#!/bin/bash
set -e
set -o pipefail

main(){
	local file=$1
	local name=$(basename &amp;quot;$0&amp;quot;)

	if [[ -z &amp;quot;$file&amp;quot; ]]; then
		cat &amp;gt;&amp;amp;2 &amp;lt;&amp;lt;-EOF
		${name} [strace-output-filename]

		You must pass a filename that has the strace output.
		EOF
		exit 1
	fi

	# get just the syscalls
	local IFS=$&#39;\n&#39;
	raw=( $(perl -lne &#39;print $1 if /([a-zA-Z_]+\()/&#39; &amp;quot;$file&amp;quot; | sort -u) )
	unset IFS


	syscalls=( )

	tmpfile=$(mktemp /tmp/seccomp-strace.XXXXXX)

	curl -sSL -o &amp;quot;$tmpfile&amp;quot; https://raw.githubusercontent.com/torvalds/linux/master/arch/x86/entry/syscalls/syscall_64.tbl

	for syscall in &amp;quot;${raw[@]}&amp;quot;; do
		# clean the trailing (
		syscall=${syscall%(}

		if grep -q -w &amp;quot;$syscall&amp;quot; &amp;quot;$tmpfile&amp;quot;; then
			syscalls+=( &amp;quot;$syscall&amp;quot; )
		fi
	done

	# start the seccomp profile
	cat &amp;lt;&amp;lt;-EOF &amp;gt; &amp;quot;$tmpfile&amp;quot;
	{
		&amp;quot;defaultAction&amp;quot;: &amp;quot;SCMP_ACT_ERRNO&amp;quot;,
		&amp;quot;syscalls&amp;quot;: [
		EOF

		for syscall in &amp;quot;${syscalls[@]}&amp;quot;; do
			cat &amp;lt;&amp;lt;-EOF
			{
				&amp;quot;name&amp;quot;: &amp;quot;${syscall}&amp;quot;,
				&amp;quot;action&amp;quot;: &amp;quot;SCMP_ACT_ALLOW&amp;quot;,
				&amp;quot;args&amp;quot;: null
			},
			EOF
		done &amp;gt;&amp;gt; &amp;quot;$tmpfile&amp;quot;

		# remove trailing comma
		sed -i &#39;$s/,$//&#39; &amp;quot;$tmpfile&amp;quot;

		cat &amp;lt;&amp;lt;-EOF &amp;gt;&amp;gt; &amp;quot;$tmpfile&amp;quot;
		]
	}
	EOF

	cat &amp;quot;$tmpfile&amp;quot;
	rm &amp;quot;$tmpfile&amp;quot;
}

main &amp;quot;$@&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You use this script like so:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ ./shitty-seccomp-profile-generator.sh chrome-strace.log
&lt;/code&gt;&lt;/pre&gt;
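&lt;p&gt;One piece of the script worth calling out: the &lt;code&gt;sed&lt;/code&gt; one-liner that strips the
trailing comma from the last entry so the JSON stays valid. In isolation, on toy input:&lt;/p&gt;

```shell
# '$' addresses only the last line; 's/,$//' strips its trailing comma
printf '{ "a": 1 },\n{ "b": 2 },\n' | sed '$s/,$//'
# prints:
# { "a": 1 },
# { "b": 2 }
```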

&lt;p&gt;Now you have a whitelist generated from your strace output. But it&amp;rsquo;s not
quite complete, and when you try to run your container with it you get a vague
error and &lt;code&gt;Operation not permitted&lt;/code&gt;.&lt;/p&gt;
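&lt;p&gt;(For reference, a custom profile is applied with the seccomp security option;
the profile path and image name below are illustrative:)&lt;/p&gt;

```shell
# run the container under the generated whitelist
# (profile path and image name are illustrative)
docker run --rm -it --security-opt seccomp=chrome.json jess/chrome
```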

&lt;p&gt;Just for this example the error was:
&lt;code&gt;[1:1:0104/214046:ERROR:nacl_fork_delegate_linux.cc(314)] Bad NaCl helper startup ack (0 bytes)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So now we have to use our brains. WHAT!? NOOOOO!&lt;/p&gt;

&lt;p&gt;So I opened the generated profile and took a look at what it was allowing.&lt;/p&gt;

&lt;p&gt;Now I know a little bit about how chrome uses namespaces/seccomp to create a
sandbox, so my first thought was let&amp;rsquo;s make sure we allow &lt;code&gt;unshare&lt;/code&gt;, &lt;code&gt;clone&lt;/code&gt;,
&lt;code&gt;seccomp&lt;/code&gt; and &lt;code&gt;setns&lt;/code&gt;. Sure enough, &lt;code&gt;unshare&lt;/code&gt; and &lt;code&gt;setns&lt;/code&gt; were missing&amp;hellip; thanks &lt;code&gt;strace&lt;/code&gt;
you really sucked that one up, even &lt;em&gt;I&lt;/em&gt; know chrome calls those.&lt;/p&gt;

&lt;p&gt;After further thought I realized it was also missing &lt;code&gt;setgid&lt;/code&gt; and
&lt;code&gt;exit&lt;/code&gt;/&lt;code&gt;exit_group&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This all took a super long time of guessing and checking but I ended up with
this &lt;a href=&#34;https://github.com/jessfraz/dotfiles/blob/master/etc/docker/seccomp/chrome.json&#34;&gt;profile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Obviously no one else is going to do this: debug for hours the syscalls that are
missing. This is why the default profile is so important, we wanted to create
sane defaults that would protect people but also not cause all this pain.&lt;/p&gt;

&lt;p&gt;So please, please, please try it out and open an issue if you find your
container that used to run perfectly is now giving &lt;code&gt;Operation not permitted&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you are curious about syscalls or are trying to track down what you are
missing, this is a great syscall table: &lt;a href=&#34;https://filippo.io/linux-syscall-table/&#34;&gt;filippo.io/linux-syscall-table&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, things are going to get better. We are working on sane security profiles
for containers that don&amp;rsquo;t make you want to pull your hair out. You can read up
on the proposal at
&lt;a href=&#34;https://github.com/docker/docker/issues/17142#issuecomment-148974642&#34;&gt;docker/docker#17142&lt;/a&gt;.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Cgroups all the way down</title>
                <link>https://blog.jessfraz.com/post/cgroups-all-the-way-down/</link>
                <pubDate>Fri, 02 Oct 2015 11:47:47 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/cgroups-all-the-way-down/</guid>
                    <description>&lt;p&gt;I went to a meetup recently where a talk was given by Cara Marie of the NCC
Group. She talked about decompression bombs and the various compression
algorithms that can create these malicious artifacts. You might be familiar
with Russ Cox&amp;rsquo;s post &lt;a href=&#34;http://research.swtch.com/zip&#34;&gt;Zip Files All The Way Down&lt;/a&gt;,
which goes over self-reproducing zip files. However, most programs will not
decompress the files from his blog post recursively, which just leaves us with
the problem of the &lt;em&gt;more sophisticated&lt;/em&gt; decompression bomb.&lt;/p&gt;

&lt;p&gt;During the talk, I couldn&amp;rsquo;t help but think about how we recently got a pull
request to &lt;a href=&#34;https://github.com/docker/docker/pull/14466/files&#34;&gt;Add support for blkio read/write bps device&lt;/a&gt;.
Granted, this does not control disk space utilization, &lt;strong&gt;BUT&lt;/strong&gt; it does allow
for throttling the upper limit on write/read to the device.&lt;/p&gt;

&lt;p&gt;Let me give an example of how this works.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# lets set read-bps-device to 1MB/second
# this will set a limit on the bandwidth rate of that device
# to 1MB/second
$ docker run --rm -it --read-bps-device /dev/zero:1mb debian:jessie bash

# now we are in the container, lets test that the cgroup is working correctly
$ dd if=/dev/zero of=/dev/null bs=4K count=1024 iflag=direct
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

# pretty cool right?
&lt;/code&gt;&lt;/pre&gt;
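&lt;p&gt;For completeness, that pull request also covers the write side. In current
Docker releases the pair of flags is spelled &lt;code&gt;--device-read-bps&lt;/code&gt; and
&lt;code&gt;--device-write-bps&lt;/code&gt;; a sketch of capping writes instead:&lt;/p&gt;

```shell
# cap write bandwidth to the device at 1MB/second
# (flag spelling per current Docker releases; image matches the example above)
docker run --rm -it --device-write-bps /dev/zero:1mb debian:jessie bash
```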
</description>
                </item>
                    
            <item>
                <title>Reverse VPN All The Things</title>
                <link>https://blog.jessfraz.com/post/reverse-vpn-for-all-the-things/</link>
                <pubDate>Fri, 02 Oct 2015 11:47:47 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/reverse-vpn-for-all-the-things/</guid>
                    <description>

&lt;p&gt;Usually when you think of a VPN, you think of accessing an office network from
somewhere &lt;em&gt;outside the office&lt;/em&gt;. A reverse VPN is for exposing things from your
home network to the public. Why? Well for one, you shouldn&amp;rsquo;t expose
your home network itself to the world; there are a lot of risks in doing that.
A reverse VPN allows you to securely control exactly what you are exposing.&lt;/p&gt;

&lt;p&gt;Personally I use this for the hooks that Amazon Lambda hits to interact with my
Alexa. I use &lt;a href=&#34;https://github.com/jishi/node-sonos-http-api&#34;&gt;this awesome project&lt;/a&gt;
to get Alexa to talk to my Sonos speakers. This runs in a docker container on
my Synology NAS in my apartment. I also use a reverse VPN for my Plex, blah
blah, blah, it&amp;rsquo;s super handy okay.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s set one up.&lt;/p&gt;

&lt;h2 id=&#34;on-the-remote-machine&#34;&gt;On the Remote Machine&lt;/h2&gt;

&lt;p&gt;I like to use &lt;a href=&#34;https://github.com/kylemanna/docker-openvpn&#34;&gt;kylemanna&amp;rsquo;s openvpn docker image&lt;/a&gt;.
I have this on my private docker registry at &lt;code&gt;r.j3ss.co/openvpn-server&lt;/code&gt;.
This image is publicly accessible and signed, etc.&lt;/p&gt;

&lt;p&gt;First, we are going to run commands on our remote machine. I just spun up
a micro instance on Google Cloud. You can host it wherever you like, though;
just make sure you have docker installed there.&lt;/p&gt;
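&lt;p&gt;(If the instance doesn&amp;rsquo;t have docker yet, the official convenience script is
the quickest way to get it; whether you are comfortable piping a script to your
shell is up to you:)&lt;/p&gt;

```shell
# install docker via the official convenience script
# (inspect the script first if you are paranoid)
curl -sSL https://get.docker.com/ | sh
```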

&lt;p&gt;1) Generate the config. We are saving all the state into &lt;code&gt;/volumes/openvpn&lt;/code&gt;, but
you can store it wherever.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# substitute your own domain for rvpn.j3ss.co below
$ docker run --rm -it \
    -v /volumes/openvpn:/etc/openvpn \
    r.j3ss.co/openvpn-server \
    ovpn_genconfig -u udp://rvpn.j3ss.co:1194
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;2) Generate certificates:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# This will prompt you for a passphrase and the information for your
# certificate request.
$ docker run --rm -it \
    -v /volumes/openvpn:/etc/openvpn \
    r.j3ss.co/openvpn-server \
    ovpn_initpki
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If you need help with entropy you can download something over and
over again: &lt;code&gt;while true; do sleep 1; curl &#39;https://misc.j3ss.co/gifs/iptables.gif&#39; &amp;gt; /dev/null; done&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;3) Start the openvpn server:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name openvpn \
    -v /volumes/openvpn:/etc/openvpn \
    --net host \
    --cap-add=NET_ADMIN \
    r.j3ss.co/openvpn-server
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;4) Create a client certificate. My client is &lt;code&gt;acidburn&lt;/code&gt;; you can name yours
whatever you would like.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# This will prompt you to enter the certificate&#39;s password you set in step 2
$ docker run --rm -it \
    -v /volumes/openvpn:/etc/openvpn \
    r.j3ss.co/openvpn-server \
    easyrsa build-client-full acidburn nopass
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;5) Get the client certificate (replace acidburn with your name from above):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --rm -it \
    -v /volumes/openvpn:/etc/openvpn \
    r.j3ss.co/openvpn-server \
    ovpn_getclient acidburn &amp;gt; ~/acidburn.ovpn
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;on-the-device&#34;&gt;On the Device&lt;/h2&gt;

&lt;p&gt;Take the &lt;code&gt;acidburn.ovpn&lt;/code&gt; (yours may be named differently) and copy it to your
device. Of course on my NAS, I have docker installed, but you can install
openvpn on your host too (but I will judge you).&lt;/p&gt;

&lt;p&gt;While ssh-ed into the machine I will start my openvpn client daemon:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
    --name openvpn \
    -v /path/to/config/acidburn.ovpn:/etc/openvpn/acidburn.ovpn:ro \
    --cap-add NET_ADMIN \
    --device /dev/net/tun \
    r.j3ss.co/openvpn /etc/openvpn/acidburn.ovpn
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now all you have to do is run all your other containers with &lt;code&gt;--net
container:openvpn&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;For example let&amp;rsquo;s run an nginx container:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run --restart always -d \
	--name nginx \
	--net=container:openvpn \
	nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now go to port 80 of your reverse vpn server. You should see the default nginx
page.&lt;/p&gt;

&lt;p&gt;It&amp;rsquo;s so easy! Alternatively if you would rather run everything on your server
over the vpn you can run the &lt;code&gt;openvpn&lt;/code&gt; container with &lt;code&gt;--net host&lt;/code&gt;.
I personally like the control though.&lt;/p&gt;

&lt;p&gt;Happy reverse vpn-ing!&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Tor Socks Proxy and Privoxy Containers</title>
                <link>https://blog.jessfraz.com/post/tor-socks-proxy-and-privoxy-containers/</link>
                <pubDate>Sat, 12 Sep 2015 11:47:47 -0700</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/tor-socks-proxy-and-privoxy-containers/</guid>
                    <description>

&lt;p&gt;Okay so this is part 2.5 in my series of posts combining my two
favorite things, Docker &amp;amp; Tor. If you are just starting here, to catch you up,
the first post was
&lt;a href=&#34;https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/&#34;&gt;&amp;ldquo;How to Route all Traffic through a Tor Docker container&amp;rdquo;&lt;/a&gt;.
The second was on &lt;a href=&#34;https://blog.jessfraz.com/post/running-a-tor-relay-with-docker/&#34;&gt;&amp;ldquo;Running a Tor relay with Docker&amp;rdquo;&lt;/a&gt;.
I thought it only made sense to show how to set up a Tor socks5 proxy in
a container, for routing &lt;em&gt;some&lt;/em&gt; traffic through Tor; in contrast to the first
post, where I explained how to route &lt;em&gt;all&lt;/em&gt; your traffic.&lt;/p&gt;

&lt;h2 id=&#34;tor-socks5-proxy&#34;&gt;Tor Socks5 Proxy&lt;/h2&gt;

&lt;p&gt;I have made a Docker image for this which lives at
&lt;a href=&#34;https://hub.docker.com/r/jess/tor-proxy/&#34;&gt;jess/tor-proxy&lt;/a&gt;
on the Docker hub. But I will go over the details so you can build one
yourself.&lt;/p&gt;

&lt;p&gt;The Dockerfile looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;FROM alpine:latest

# Note: Tor is only in testing repo -&amp;gt; http://pkgs.alpinelinux.org/packages?package=tor&amp;amp;repo=all&amp;amp;arch=x86_64
RUN apk update &amp;amp;&amp;amp; apk add \
    tor \
    --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ \
    &amp;amp;&amp;amp; rm -rf /var/cache/apk/*

# expose socks port
EXPOSE 9050

# copy in our torrc file
COPY torrc.default /etc/tor/torrc.default

# make sure files are owned by tor user
RUN chown -R tor /etc/tor

USER tor

ENTRYPOINT [ &amp;quot;tor&amp;quot; ]
CMD [ &amp;quot;-f&amp;quot;, &amp;quot;/etc/tor/torrc.default&amp;quot; ]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This looks a lot like the Dockerfile for a relay, if you recall. But the key
difference is the &lt;code&gt;torrc&lt;/code&gt;. The only thing I have changed from the default
&lt;code&gt;torrc&lt;/code&gt; is the following line:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SocksPort 0.0.0.0:9050
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is so that it can bind correctly to the network namespace the container
is using.&lt;/p&gt;

&lt;p&gt;This image weighs in at only 11.51 MB!&lt;/p&gt;

&lt;p&gt;To run the image:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# the localtime mount is optional, but i like it for all my containers;
# -p publishes the socks port
$ docker run -d \
    --restart always \
    -v /etc/localtime:/etc/localtime:ro \
    -p 9050:9050 \
    --name torproxy \
    jess/tor-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Okay, awesome, now you have the socks5 proxy running on port &lt;code&gt;9050&lt;/code&gt;. Let&amp;rsquo;s test
it:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# get your current ip
$ curl -L http://ifconfig.me

# get your ip through the tor socks proxy
$ curl --socks5 localhost:9050 -L http://ifconfig.me
# obviously they should be different ;)

# you can even curl the check.torproject.org api
$ curl --socks5 localhost:9050 -L https://check.torproject.org/api/ip
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you are like me and use
&lt;a href=&#34;https://github.com/ioerror/duraconf/blob/master/configs/gnupg/gpg.conf&#34;&gt;@ioerror&amp;rsquo;s gpg.conf&lt;/a&gt;
you can uncomment the line:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;keyserver-options http-proxy=socks5-hostname://127.0.0.1:9050
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now you can import and search for keys on a key server with
improved anonymity. Obviously there are a bunch of other things you can use the
socks proxy for, but I wanted to give this as an example.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&#34;https://github.com/jessfraz/dotfiles/blob/master/.dockerfunc#L140&#34;&gt;You could even run chrome in a container through the proxy&amp;hellip;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Can we take this even further? Yes.&lt;/p&gt;

&lt;h2 id=&#34;privoxy-http-proxy&#34;&gt;Privoxy HTTP Proxy&lt;/h2&gt;

&lt;p&gt;The socks proxy is awesome, but if you want to additionally have an http proxy
it is super easy!&lt;/p&gt;

&lt;p&gt;What we can do is link a Privoxy container to our Tor proxy container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; I have seen people run a Tor socks proxy &lt;em&gt;and&lt;/em&gt; Privoxy in the same container.
But I prefer my approach of 2 different containers, because it is cleaner,
sometimes you do not need both, &lt;em&gt;and&lt;/em&gt; you completely eliminate the need for
an init system starting 2 processes in one container. Not that there is
anything wrong with that, but it is not my personal preference.&lt;/p&gt;

&lt;p&gt;So on to the Dockerfile, which also lives at &lt;a href=&#34;https://hub.docker.com/r/jess/privoxy/&#34;&gt;jess/privoxy&lt;/a&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;FROM alpine:latest

RUN apk update &amp;amp;&amp;amp; apk add \
    privoxy \
    &amp;amp;&amp;amp; rm -rf /var/cache/apk/*

# expose http port
EXPOSE 8118

# copy in our privoxy config file
COPY privoxy.conf /etc/privoxy/config

# make sure files are owned by privoxy user
RUN chown -R privoxy /etc/privoxy

USER privoxy

ENTRYPOINT [ &amp;quot;privoxy&amp;quot;, &amp;quot;--no-daemon&amp;quot; ]
CMD [ &amp;quot;/etc/privoxy/config&amp;quot; ]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This image is a whopping 6.473 MB :D&lt;/p&gt;

&lt;p&gt;The only change I made to the default privoxy config was the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;forward-socks5 / torproxy:9050 .
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is so that when we link our torproxy container to the privoxy container,
privoxy can communicate with the socks proxy.&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s run it:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# the localtime mount is again a personal preference;
# --link wires this container to our torproxy container,
# and -p publishes the http port
$ docker run -d \
    --restart always \
    -v /etc/localtime:/etc/localtime:ro \
    --link torproxy:torproxy \
    -p 8118:8118 \
    --name privoxy \
    jess/privoxy
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Awesome, now to test the proxy:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# get your current ip
$ curl -L http://ifconfig.me

# get your ip through the http proxy
$ curl -x http://localhost:8118 -L http://ifconfig.me
# obviously again, they should be different ;)

# curl the check.torproject.org api
$ curl -x http://localhost:8118  -L https://check.torproject.org/api/ip
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That&amp;rsquo;s all for now! Stay anonymous on the interwebs :p&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Running a Tor relay with Docker</title>
                <link>https://blog.jessfraz.com/post/running-a-tor-relay-with-docker/</link>
                <pubDate>Sun, 23 Aug 2015 12:02:01 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/running-a-tor-relay-with-docker/</guid>
                    <description>

&lt;p&gt;This post is part two of what will be a three part series. If you missed it
part one was &lt;a href=&#34;https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/&#34;&gt;How to Route Traffic through a Tor Docker container&lt;/a&gt;.
I figured it was important, if you are going to be a tor user, to document how
you can help the Tor community by hosting a Tor relay. And guess what? You can
use Docker to do this!&lt;/p&gt;

&lt;p&gt;There are three types of relays you can host: a bridge relay, a middle relay,
and an exit relay. Exit relays tend to be the ones receiving take down notices,
because theirs are the IPs the public sees Tor traffic coming from. A great reference
for hosting an exit node can be found at
&lt;a href=&#34;https://blog.torproject.org/blog/tips-running-exit-node-minimal-harassment&#34;&gt;blog.torproject.org/blog/tips-running-exit-node-minimal-harassment&lt;/a&gt;.
But I will go over how to host each from a Docker container.
My example will have a reduced exit policy and limit which ports you are willing
to route traffic through.&lt;/p&gt;

&lt;p&gt;If you don&amp;rsquo;t want to host an exit node, host a middle relay instead! And if you
want your relay not publicly listed in the network then host a bridge.&lt;/p&gt;

&lt;h3 id=&#34;creating-the-base-image&#34;&gt;Creating the base image&lt;/h3&gt;

&lt;p&gt;I have created a Docker image
&lt;a href=&#34;https://hub.docker.com/r/jess/tor-relay/&#34;&gt;jess/tor-relay&lt;/a&gt; from this
&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/tor-relay/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;.
Feel free to create your own image with the following Dockerfile:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;FROM alpine:latest

# Note: Tor is only in testing repo
RUN apk update &amp;amp;&amp;amp; apk add \
    tor \
    --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ \
    &amp;amp;&amp;amp; rm -rf /var/cache/apk/*

# default port used for incoming Tor connections
# can be changed by changing &#39;ORPort&#39; in torrc
EXPOSE 9001

# copy in our torrc files
COPY torrc.bridge /etc/tor/torrc.bridge
COPY torrc.middle /etc/tor/torrc.middle
COPY torrc.exit /etc/tor/torrc.exit

# make sure files are owned by tor user
RUN chown -R tor /etc/tor

USER tor

ENTRYPOINT [ &amp;quot;tor&amp;quot; ]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see we are copying 3 different &lt;code&gt;torrc&lt;/code&gt;&amp;rsquo;s into the container. One for
each a bridge, middle, and exit relay.&lt;/p&gt;

&lt;p&gt;I used alpine linux because it is super minimal. The size of the image is
11.52MB! Crazyyyyyyy!&lt;/p&gt;

&lt;h3 id=&#34;running-a-bridge-relay&#34;&gt;Running a bridge relay&lt;/h3&gt;

&lt;p&gt;A bridge relay is not publicly listed as part of the Tor network. This is
helpful in places that block all the IPs of publicly listed Tor relays.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;torrc.bridge&lt;/code&gt; file for the bridge relay looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ORPort 9001
## A handle for your relay, so people don&#39;t have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}
BridgeRelay 1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To run the image for a bridge relay:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# mount localtime so time is synced, restart always because why not,
# and publish the ORPort
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And now you are helping the tor network by running a bridge relay! Yayyy \o/&lt;/p&gt;

&lt;h3 id=&#34;running-a-middle-relay&#34;&gt;Running a middle relay&lt;/h3&gt;

&lt;p&gt;A middle relay is one of the first few relays traffic flows through. Traffic
will always pass through at least 3 relays: the last relay is an exit node,
and all relays before it are middle relays.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;torrc.middle&lt;/code&gt; file for the middle relay looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ORPort 9001
## A handle for your relay, so people don&#39;t have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}
ExitPolicy reject *:*
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To run the image for a middle relay:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bsh&#34;&gt;# mount localtime so time is synced, restart always because why not,
# and publish the ORPort
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.middle
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And now you are helping the tor network by running a middle relay!&lt;/p&gt;

&lt;h3 id=&#34;running-an-exit-relay&#34;&gt;Running an exit relay&lt;/h3&gt;

&lt;p&gt;The exit relay is the last relay traffic is filtered through.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;torrc.exit&lt;/code&gt; file for the exit node looks like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ORPort 9001
## A handle for your relay, so people don&#39;t have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}

# Reduced exit policy from
# https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy
ExitPolicy accept *:20-23     # FTP, SSH, telnet
ExitPolicy accept *:43        # WHOIS
ExitPolicy accept *:53        # DNS
ExitPolicy accept *:79-81     # finger, HTTP
ExitPolicy accept *:88        # kerberos
ExitPolicy accept *:110       # POP3
ExitPolicy accept *:143       # IMAP
ExitPolicy accept *:194       # IRC
ExitPolicy accept *:220       # IMAP3
ExitPolicy accept *:389       # LDAP
ExitPolicy accept *:443       # HTTPS
ExitPolicy accept *:464       # kpasswd
ExitPolicy accept *:465       # URD for SSM (more often: an alternative SUBMISSION port, see 587)
ExitPolicy accept *:531       # IRC/AIM
ExitPolicy accept *:543-544   # Kerberos
ExitPolicy accept *:554       # RTSP
ExitPolicy accept *:563       # NNTP over SSL
ExitPolicy accept *:587       # SUBMISSION (authenticated clients [MUA&#39;s like Thunderbird] send mail over STARTTLS SMTP here)
ExitPolicy accept *:636       # LDAP over SSL
ExitPolicy accept *:706       # SILC
ExitPolicy accept *:749       # kerberos
ExitPolicy accept *:873       # rsync
ExitPolicy accept *:902-904   # VMware
ExitPolicy accept *:981       # Remote HTTPS management for firewall
ExitPolicy accept *:989-995   # FTP over SSL, Netnews Administration System, telnets, IMAP over SSL, ircs, POP3 over SSL
ExitPolicy accept *:1194      # OpenVPN
ExitPolicy accept *:1220      # QT Server Admin
ExitPolicy accept *:1293      # PKT-KRB-IPSec
ExitPolicy accept *:1500      # VLSI License Manager
ExitPolicy accept *:1533      # Sametime
ExitPolicy accept *:1677      # GroupWise
ExitPolicy accept *:1723      # PPTP
ExitPolicy accept *:1755      # RTSP
ExitPolicy accept *:1863      # MSNP
ExitPolicy accept *:2082      # Infowave Mobility Server
ExitPolicy accept *:2083      # Secure Radius Service (radsec)
ExitPolicy accept *:2086-2087 # GNUnet, ELI
ExitPolicy accept *:2095-2096 # NBX
ExitPolicy accept *:2102-2104 # Zephyr
ExitPolicy accept *:3128      # SQUID
ExitPolicy accept *:3389      # MS WBT
ExitPolicy accept *:3690      # SVN
ExitPolicy accept *:4321      # RWHOIS
ExitPolicy accept *:4643      # Virtuozzo
ExitPolicy accept *:5050      # MMCC
ExitPolicy accept *:5190      # ICQ
ExitPolicy accept *:5222-5223 # XMPP, XMPP over SSL
ExitPolicy accept *:5228      # Android Market
ExitPolicy accept *:5900      # VNC
ExitPolicy accept *:6660-6669 # IRC
ExitPolicy accept *:6679      # IRC SSL
ExitPolicy accept *:6697      # IRC SSL
ExitPolicy accept *:8000      # iRDMI
ExitPolicy accept *:8008      # HTTP alternate
ExitPolicy accept *:8074      # Gadu-Gadu
ExitPolicy accept *:8080      # HTTP Proxies
ExitPolicy accept *:8082      # HTTPS Electrum Bitcoin port
ExitPolicy accept *:8087-8088 # Simplify Media SPP Protocol, Radan HTTP
ExitPolicy accept *:8332-8333 # Bitcoin
ExitPolicy accept *:8443      # PCsync HTTPS
ExitPolicy accept *:8888      # HTTP Proxies, NewsEDGE
ExitPolicy accept *:9418      # git
ExitPolicy accept *:9999      # distinct
ExitPolicy accept *:10000     # Network Data Management Protocol
ExitPolicy accept *:11371     # OpenPGP hkp (http keyserver protocol)
ExitPolicy accept *:19294     # Google Voice TCP
ExitPolicy accept *:19638     # Ensim control panel
ExitPolicy accept *:50002     # Electrum Bitcoin SSL
ExitPolicy accept *:64738     # Mumble
ExitPolicy reject *:*
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To run the image for an exit node:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount localtime so time is synced, restart always (why not?),
# and expose/publish the relay port
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.exit
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And now you are helping the Tor network by running an exit relay!&lt;/p&gt;

&lt;p&gt;After running for a couple of hours, giving the relay time to
propagate, you can check &lt;a href=&#34;https://atlas.torproject.org&#34;&gt;atlas.torproject.org&lt;/a&gt;
to see whether your node has successfully registered in the network.&lt;/p&gt;
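
&lt;p&gt;If you would rather check from the command line, Atlas is backed by the Tor Project&amp;rsquo;s Onionoo API, which you can query directly. A minimal sketch; the nickname here is a hypothetical placeholder, and the exact response shape is Onionoo&amp;rsquo;s to change:&lt;/p&gt;

```sh
# hypothetical placeholder: use the Nickname set in your torrc
nickname="mytorrelay"

# Onionoo is the API behind atlas.torproject.org; its summary
# endpoint returns an empty "relays" list until the network
# has picked up your node
url="https://onionoo.torproject.org/summary?search=${nickname}"
echo "querying: ${url}"

# once the relay has had time to propagate, fetch it:
#   curl -s "${url}"
```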

&lt;p&gt;Stay tuned for part three of the series, where I go over how to run Docker
containers with a Tor networking plugin I am building on Docker&amp;rsquo;s new
networking plugins. But of course, if you are going to use
the plugin or route all your traffic through a Tor Docker container (from my first
post), you should really consider hosting a relay. The more people who run
relays, the faster the Tor network will be.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>This Industry is Fucked</title>
                <link>https://blog.jessfraz.com/post/this-industry-is-fucked/</link>
                <pubDate>Sun, 05 Jul 2015 15:14:46 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/this-industry-is-fucked/</guid>
                    <description>&lt;p&gt;My least favorite topic in the world is &amp;lsquo;Women in Tech&amp;rsquo;, so I am going to make this short but I think it&amp;rsquo;s something that needs to be said.&lt;/p&gt;

&lt;p&gt;This industry is fucked.&lt;/p&gt;

&lt;p&gt;Ever since I started speaking at conferences and contributing to open source projects I have been endlessly harassed. I&amp;rsquo;ve gotten hundreds of private messages on IRC and emails about sex, rape, and death threats. People emailing me saying they jerked off to my conference talk video (you&amp;rsquo;re welcome btw) is mild in comparison to sending photoshopped pictures of me covered in blood.&lt;/p&gt;

&lt;p&gt;I wish I could do my job, something I very obviously love doing, without any of this bullshit. However that seems impossible at this point.&lt;/p&gt;

&lt;p&gt;But I&amp;rsquo;m not leaving and I&amp;rsquo;m not going to stop being me. So this is me saying &amp;lsquo;Fuck You.&amp;rsquo;&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Using an R Container for Analytical Models</title>
                <link>https://blog.jessfraz.com/post/r-containers-for-data-science/</link>
                <pubDate>Tue, 30 Jun 2015 11:25:24 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/r-containers-for-data-science/</guid>
                    <description>&lt;p&gt;So it turns out I&amp;rsquo;m pretty bad at vacation. I had this idea for a blog post and
one thing led to another and here we are&amp;hellip;&lt;/p&gt;

&lt;p&gt;You probably know by now I hate installing things on my host. At my previous
job we did a lot of work using Python and R for data science. I still love
plotting data with ggplot and my favorite R package, the &lt;a href=&#34;https://github.com/karthik/wesanderson&#34;&gt;wes anderson color
palette&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here&amp;rsquo;s a fast intro into how to do this with an R Docker image.&lt;/p&gt;

&lt;p&gt;Now everyone loves their share of different packages, and without a doubt I bet
most of them are written by Hadley Wickham ;). Can you imagine if the
percentage of packages contributed by Hadley to CRAN was mirrored by someone on
NPM or pip? It would be crazy.&lt;/p&gt;

&lt;p&gt;We are going to start from the &lt;code&gt;r-base&lt;/code&gt; image and build our ideal (aka you can make
yours different, chill&amp;hellip;) R data science
container, with the following Dockerfile:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# our R base image
FROM r-base

# install packages
# these are ones I like
RUN echo &#39;install.packages(c(&amp;quot;ggplot2&amp;quot;, &amp;quot;plyr&amp;quot;, &amp;quot;reshape2&amp;quot;, &amp;quot;RColorBrewer&amp;quot;, &amp;quot;scales&amp;quot;,&amp;quot;grid&amp;quot;, &amp;quot;wesanderson&amp;quot;), repos=&amp;quot;http://cran.us.r-project.org&amp;quot;, dependencies=TRUE)&#39; &amp;gt; /tmp/packages.R \
    &amp;amp;&amp;amp; Rscript /tmp/packages.R

# create an R user
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
    &amp;amp;&amp;amp; chown -R user:user $HOME

WORKDIR $HOME
USER user

# set the command
CMD [&amp;quot;R&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Build the image:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker build --rm --force-rm -t jess/r-custom .
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run and use the image:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# we need X11 for the graph to display, alternatively
# you can save to a file that is in a bind-mounted dir
# or you can docker cp the file to the host :)
$ docker run -it --name analytics \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    jess/r-custom

# bind mount your data
$ docker run -v $(pwd)/data:/home/user/data \
    -it --name analytics \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    jess/r-custom
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now plot something:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-R&#34;&gt;library(wesanderson)

library(ggplot2)
ggplot(iris, aes(Sepal.Length, Sepal.Width, color = Species)) +
  geom_point(size = 3) +
  scale_color_manual(values = wes_palette(&amp;quot;Royal2&amp;quot;)) +
  theme_gray()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/R.png&#34; alt=&#34;R&#34; /&gt;&lt;/p&gt;

&lt;p&gt;See that was super easy, now I can go back to being on vacation and reading the
latest Vogue.&lt;/p&gt;

&lt;p&gt;Other resources for such things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/rocker-org/rocker/wiki&#34;&gt;Rocker Wiki&lt;/a&gt;: for R in Docker examples&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://blog.yhathq.com&#34;&gt;yhat Blog&lt;/a&gt;: for all things fun and data sciencey;
I might be biased though, since I used to work there&lt;/li&gt;
&lt;/ul&gt;
</description>
                </item>
                    
            <item>
                <title>How to Route Traffic through a Tor Docker container</title>
                <link>https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/</link>
                <pubDate>Sat, 20 Jun 2015 19:40:01 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/</guid>
                    <description>

&lt;p&gt;This blog post is going to explain how to route traffic on your host through
a Tor Docker container.&lt;/p&gt;

&lt;p&gt;It&amp;rsquo;s actually a lot simpler than you would think. But it involves dealing with
some unsavory things such as iptables.&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://misc.j3ss.co/gifs/iptables.gif&#34; alt=&#34;iptables&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;run-the-image&#34;&gt;Run the Image&lt;/h3&gt;

&lt;p&gt;I have a fork of the tor source code and a branch with a Dockerfile. I have
submitted upstream&amp;hellip; we will see if they take it. The final result is the
image &lt;a href=&#34;https://hub.docker.com/r/jess/tor&#34;&gt;jess/tor&lt;/a&gt;, but you can
easily build locally from my repo
&lt;a href=&#34;https://github.com/jessfraz/tor/tree/add-dockerfile&#34;&gt;jessfraz/tor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So let&amp;rsquo;s run the image:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run -d \
    --net host \
    --restart always \
    --name tor \
    jess/tor
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Easy right? I can already hear the haters, &amp;ldquo;blah blah blah net host&amp;rdquo;. Chill
out, the point is to route all our traffic, duhhhh, so we may as well; otherwise
we would need to change or overwrite some of Docker&amp;rsquo;s iptables rules, and really
who has time for that shit&amp;hellip;&lt;/p&gt;

&lt;p&gt;You do? Ok make a PR to &lt;a href=&#34;https://github.com/jessfraz/blog&#34;&gt;this blog post&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;routing-traffic&#34;&gt;Routing Traffic&lt;/h3&gt;

&lt;p&gt;Contain yourselves, I am about to throw down some sick iptables rules.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;#!/bin/bash
# Most of this is credited to
# https://trac.torproject.org/projects/tor/wiki/doc/TransparentProxy
# With a few minor edits

# to run iptables commands you need to be root
if [ &amp;quot;$EUID&amp;quot; -ne 0 ]; then
    echo &amp;quot;Please run as root.&amp;quot;
    exit 1
fi

### set variables
# destinations you don&#39;t want routed through Tor
_non_tor=&amp;quot;192.168.1.0/24 192.168.0.0/24&amp;quot;

# get the UID that Tor runs as
_tor_uid=$(docker exec -u tor tor id -u)

# Tor&#39;s TransPort
_trans_port=&amp;quot;9040&amp;quot;
_dns_port=&amp;quot;5353&amp;quot;

### set iptables *nat
iptables -t nat -A OUTPUT -m owner --uid-owner $_tor_uid -j RETURN
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports $_dns_port

# allow clearnet access for hosts in $_non_tor
for _clearnet in $_non_tor 127.0.0.0/9 127.128.0.0/10; do
   iptables -t nat -A OUTPUT -d $_clearnet -j RETURN
done

# redirect all other output to Tor&#39;s TransPort
iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports $_trans_port

### set iptables *filter
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow clearnet access for hosts in $_non_tor
for _clearnet in $_non_tor 127.0.0.0/8; do
   iptables -A OUTPUT -d $_clearnet -j ACCEPT
done

# allow only Tor output
iptables -A OUTPUT -m owner --uid-owner $_tor_uid -j ACCEPT
iptables -A OUTPUT -j REJECT
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Check that we are routing via &lt;a href=&#34;https://check.torproject.org&#34;&gt;check.torproject.org&lt;/a&gt;.&lt;/p&gt;
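
&lt;p&gt;You can also verify from a terminal. check.torproject.org exposes a small JSON endpoint at /api/ip (an assumption worth double-checking, since the endpoint and its IsTor field may change); here is a minimal sketch that parses the response with grep, demonstrated against canned samples so it runs offline:&lt;/p&gt;

```sh
# succeeds when the JSON from check.torproject.org/api/ip
# reports the connection came from a Tor exit
is_tor() {
    grep -q '"IsTor":true'
}

# real use (over the network):
#   curl -s https://check.torproject.org/api/ip | is_tor
# canned sample responses so the sketch runs offline:
if echo '{"IsTor":true,"IP":"198.51.100.1"}' | is_tor; then
    echo "routing through Tor"
fi
if echo '{"IsTor":false,"IP":"203.0.113.7"}' | is_tor; then :; else
    echo "NOT routing through Tor"
fi
```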

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/tor.png&#34; alt=&#34;tor&#34; /&gt;&lt;/p&gt;

&lt;p&gt;Woooohoooo! Success.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Tales of a Part-time Sysadmin: Dogfooding Docker to test Docker</title>
                <link>https://blog.jessfraz.com/post/dogfooding-docker-to-test-docker/</link>
                <pubDate>Sat, 06 Jun 2015 21:10:30 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/dogfooding-docker-to-test-docker/</guid>
                    <description>

&lt;p&gt;This is a tale about how we use Docker to test Docker. Yes, I am familiar with
the meme. Puhlease.&lt;/p&gt;

&lt;p&gt;Many of you are familiar with the fact I work on the Docker core team. Which
consists of fixing bugs, doing releases, reviewing PRs, hanging out on IRC,
mailing lists etc etc etc. But what you may not know is that in addition to all
these things I also manage our testing infrastructure. Now really, this in
itself could be a full-time job. However, it is not &lt;em&gt;my&lt;/em&gt; full-time job, nor would
I &lt;em&gt;ever&lt;/em&gt; want it to be. [insert gif about yak shaving here]&lt;/p&gt;

&lt;p&gt;This blog post is going to be about how I manage ~50 servers but don&amp;rsquo;t do
anything at all. Of course, I have my angry sysadmin moments when everything
breaks and I could punch a hole in a wall&amp;hellip; but who doesn&amp;rsquo;t?&lt;/p&gt;

&lt;h2 id=&#34;our-ci&#34;&gt;Our CI&lt;/h2&gt;

&lt;p&gt;First let me take a chance to familiarize you with how we test Docker. Docker&amp;rsquo;s
tests run in a Docker container. We use Jenkins as our CI mostly because we
needed a lot of flexibility and control.&lt;/p&gt;

&lt;p&gt;Obviously everything in our infrastructure runs in Docker, so that even goes
for Jenkins. We use the &lt;a href=&#34;https://registry.hub.docker.com/u/library/jenkins/&#34;&gt;official
image&lt;/a&gt; for our Jenkins container.&lt;/p&gt;

&lt;p&gt;Docker itself has 6 different storage driver options. These are &lt;code&gt;aufs&lt;/code&gt;,
&lt;code&gt;btrfs&lt;/code&gt;, &lt;code&gt;devmapper&lt;/code&gt;, &lt;code&gt;overlay&lt;/code&gt;, &lt;code&gt;vfs&lt;/code&gt;, and &lt;code&gt;zfs&lt;/code&gt;. We have servers that use
each of these hooked up to our Jenkins instance for testing.&lt;/p&gt;
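
&lt;p&gt;Which driver a given host ended up with shows up in &lt;code&gt;docker info&lt;/code&gt;. A tiny sketch that pulls the &lt;code&gt;Storage Driver:&lt;/code&gt; line out of that output, run here against a canned sample so it works anywhere:&lt;/p&gt;

```sh
# extract the storage driver from `docker info`-style output
storage_driver() {
    awk -F': ' '/^Storage Driver:/ {print $2}'
}

# on a real host you would run: docker info | storage_driver
sample="Containers: 12
Storage Driver: aufs
Kernel Version: 3.19.0"

printf '%s\n' "$sample" | storage_driver   # prints: aufs
```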

&lt;p&gt;Along with all the storage driver options, Docker also runs on any linux
distro and a world of different linux kernel versions. In order to test
all these combinations, each server runs a different kernel
and all major linux distros are accounted for.&lt;/p&gt;

&lt;p&gt;With every push to master on the docker/docker repo, we run tests on the entire
storage driver matrix. We also trigger builds to test the unsupported &lt;code&gt;lxc&lt;/code&gt;
execdriver for Docker. &lt;em&gt;And&lt;/em&gt; we trigger builds to test the Docker Windows
client. Right there are three different jobs running on 8 different servers just
for 1 push to master.&lt;/p&gt;

&lt;p&gt;Did I mention we have 9 Windows servers and 9 linux remote hosts paired with
those servers for testing Docker on Windows?&lt;/p&gt;

&lt;p&gt;With every pull request to Docker we kick off 3 builds on 3 different servers.
We have 8 linux Docker nodes reserved exclusively for testing PRs. These run
the Docker tests and the new &amp;ldquo;Experimental Tests&amp;rdquo;. The last of the 3 is the
Windows client test.&lt;/p&gt;

&lt;p&gt;Considering the Docker project gets over 100 pull requests a week, with
multiple revision cycles you can only imagine the number of builds we process
in a day.&lt;/p&gt;

&lt;p&gt;The manager for the PR builds is a small service called
&lt;a href=&#34;https://github.com/jessfraz/leeroy&#34;&gt;leeroy&lt;/a&gt; which also makes
sure every PR has been signed with the Docker DCO before it even triggers
a build. This of course also runs in a container.&lt;/p&gt;

&lt;p&gt;Now of course not every build is perfect, sometimes you have to rebuild. To
make this easy for all maintainers of the project we have an IRC bot, named
lovingly after Docker&amp;rsquo;s turtle Gordon. The
&lt;a href=&#34;https://github.com/jessfraz/gordon-bot&#34;&gt;gordonbot&lt;/a&gt; runs in a container &lt;em&gt;duh&lt;/em&gt;, and can
kick off a rebuild on any of our bajillion servers.&lt;/p&gt;

&lt;p&gt;Now I know what you are thinking: that&amp;rsquo;s a lot of servers, so how do you manage
to know when, &lt;em&gt;heaven forbid&lt;/em&gt;, one of them goes down?&lt;/p&gt;

&lt;h2 id=&#34;consul&#34;&gt;Consul&lt;/h2&gt;

&lt;p&gt;We have consul running &lt;strong&gt;in a container&lt;/strong&gt; on all 50 servers in our
infrastructure. This is AMAZING. We use a sweet project, &lt;a href=&#34;https://github.com/AcalephStorage/consul-alerts&#34;&gt;consul
alerts&lt;/a&gt;, also running in a
container, to let us know when a node or service on a node goes down.&lt;/p&gt;

&lt;p&gt;I would honestly be lost without consul. It keeps track, via tags, of the kernel
version, storage driver, linux distro, etc. of each server. When a server goes
down I can decipher whether it is a bug with any of those things.&lt;/p&gt;
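
&lt;p&gt;To make the tagging concrete, here is a sketch of what one node&amp;rsquo;s consul service definition could look like; the service name, tags, and check below are illustrative guesses, not our actual production config:&lt;/p&gt;

```json
{
  "service": {
    "name": "docker-daemon",
    "tags": ["kernel-3.19", "storage-aufs", "debian-jessie"],
    "port": 2375,
    "check": {
      "script": "docker version",
      "interval": "30s"
    }
  }
}
```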

&lt;p&gt;A great example of this: we recently merged the awesome changes to the
container network stack via &lt;a href=&#34;https://github.com/docker/libnetwork&#34;&gt;libnetwork&lt;/a&gt;.
However, I noticed after the merge that the servers with kernels 3.19.x and 3.18.x
were acting funny. We were able to fix networking-related kernel bugs
specific to those versions before an RC was even cut.&lt;/p&gt;

&lt;h2 id=&#34;github-hooks-for-the-github-hooks-throne&#34;&gt;Github Hooks for the Github Hooks Throne&lt;/h2&gt;

&lt;p&gt;We trigger a lot of cool things with every push to master. We use nsq to
collect the hooks and then pass the messages to all the consumers. Oh and
obviously nsq runs in a container, as well as the &lt;a href=&#34;https://github.com/crosbymichael/hooks&#34;&gt;hooks
service&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&#34;master-binaries&#34;&gt;Master Binaries&lt;/h3&gt;

&lt;p&gt;With every push to master we push new binaries to
&lt;a href=&#34;https://master.dockerproject.org&#34;&gt;master.dockerproject.org&lt;/a&gt;. This way people
can easily try out new features.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&#34;https://github.com/jessfraz/docker-bb&#34;&gt;docker-bb service&lt;/a&gt; is run in
a container ;). Hopefully you are catching on to a theme here&amp;hellip;&lt;/p&gt;

&lt;h3 id=&#34;master-docs&#34;&gt;Master Docs&lt;/h3&gt;

&lt;p&gt;What good would being able to try new features be, if you didn&amp;rsquo;t have docs for
how to use them?&lt;/p&gt;

&lt;p&gt;With every push to master, we deploy new docs to
&lt;a href=&#34;http://docs.master.dockerproject.org&#34;&gt;docs.master.dockerproject.org&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is done with a &lt;a href=&#34;https://github.com/jessfraz/nsqexec&#34;&gt;nsqexec service&lt;/a&gt;,
wait for it&amp;hellip;. RUNNING IN A CONTAINER.&lt;/p&gt;

&lt;h2 id=&#34;always-testing&#34;&gt;Always Testing&lt;/h2&gt;

&lt;p&gt;The greatest thing about all these services running in containers, which I so
subtly mentioned, is that we can always be dogfooding and testing Docker.
Right now we are getting ready to release Docker v1.7.0, and with every RC that
is built I upgrade the servers so we can catch bugs.&lt;/p&gt;

&lt;p&gt;On the off season, I will randomly upgrade all the servers to the Docker master
binaries mentioned previously. This way we can catch things long before they
even hit an RC.&lt;/p&gt;

&lt;p&gt;All this is so seamless and runs so well that I have time to do my actual job
of being a Docker core maintainer, occasionally fix some servers, spin up new
servers if we add storage drivers, upgrade servers&amp;rsquo; kernels, and write this blog
post.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed this post; also, help us test RCs and Docker master! Thanks to
DigitalOcean and Azure for hosting our infrastructure.&lt;/p&gt;
</description>
                </item>
                    
            <item>
                <title>Docker Containers on the Desktop</title>
                <link>https://blog.jessfraz.com/post/docker-containers-on-the-desktop/</link>
                <pubDate>Sat, 21 Feb 2015 13:16:52 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/docker-containers-on-the-desktop/</guid>
                    <description>&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;If you are not familiar with &lt;a href=&#34;https://github.com/docker/docker&#34;&gt;Docker&lt;/a&gt;, it is the popular open source container engine.&lt;/p&gt;

&lt;p&gt;Most people use Docker for containing applications to deploy into production or for building their applications in a contained environment. This is all fine &amp;amp; dandy, and saves developers &amp;amp; ops engineers huge headaches, but I like to use Docker in a not-so-typical way.&lt;/p&gt;

&lt;p&gt;I use Docker to run all the desktop apps on my computers.&lt;/p&gt;

&lt;p&gt;But why would I even want to run all these apps in containers? Well let me explain. I used to be an OS X user, and the great thing about OS X is the OS X App Sandbox.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;App Sandbox is an access control technology provided in OS X, enforced at the kernel level. Its strategy is twofold:&lt;/p&gt;

&lt;p&gt;App Sandbox enables you to describe how your app interacts with the system. The system then grants your app the access it needs to get its job done, and no more.&lt;/p&gt;

&lt;p&gt;App Sandbox provides a last line of defense against the theft, corruption, or deletion of user data if an attacker successfully exploits security holes in your app or the frameworks it is linked against.&lt;/p&gt;

&lt;p&gt;&lt;small&gt;&lt;a href=&#34;https://developer.apple.com/library/mac/documentation/Security/Conceptual/AppSandboxDesignGuide/AboutAppSandbox/AboutAppSandbox.html&#34;&gt;Apple About App Sandbox&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I am using the Apple App Sandbox as an example so people can grasp the concept easily. I am &lt;strong&gt;not&lt;/strong&gt; saying this is exactly like that and has all the features. This is not a sandbox. It is more like a cool hack.&lt;/p&gt;

&lt;p&gt;I hate installing things on my host and the files getting everywhere. I wanted the ability to delete an app and know it is gone fully, without some random file hanging around. This gave me that. Not only that, I can control how much CPU and memory the app uses. Yes, the CPU/memory-hungry Chrome is now perfectly contained!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;ldquo;What?!?!&amp;rdquo;&lt;/strong&gt;, you say. Let me show you.&lt;/p&gt;

&lt;p&gt;The following covers a few of my favorite applications I run in containers. Each of the commands written below is pulled directly from my bash aliases, so you can have the same one-command user experience today.&lt;/p&gt;

&lt;h2 id=&#34;tuis-text-user-interface-pronounced-too-eee&#34;&gt;TUIs (Text User Interface, pronounced &lt;em&gt;too-eee&lt;/em&gt;)&lt;/h2&gt;

&lt;p&gt;Let&amp;rsquo;s start with some easy text-based applications:&lt;/p&gt;

&lt;h3 id=&#34;1-irssi&#34;&gt;1. Irssi&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/irssi/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Best IRC client.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount your irssi config into the container;
# --read-only is a cool new feature in 1.5
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -v $HOME/.irssi:/home/user/.irssi \
    --read-only \
    --name irssi \
    jess/irssi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/irssi.png&#34; alt=&#34;irssi&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;2-mutt&#34;&gt;2. Mutt&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/mutt/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The text based email client that rules!&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# pass env variables to the config, and mount
# .gnupg so you can encrypt ;)
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -e GMAIL -e GMAIL_NAME \
    -e GMAIL_PASS -e GMAIL_FROM \
    -v $HOME/.gnupg:/home/user/.gnupg \
    --name mutt \
    jess/mutt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/mutt.png&#34; alt=&#34;mutt&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;3-rainbowstream&#34;&gt;3. Rainbowstream&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/rainbowstream/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome text based twitter client.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount the config files
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -v $HOME/.rainbow_oauth:/root/.rainbow_oauth \
    -v $HOME/.rainbow_config.json:/root/.rainbow_config.json \
    --name rainbowstream \
    jess/rainbowstream
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/rainbowstream.png&#34; alt=&#34;rainbowstream&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;4-lynx&#34;&gt;4. Lynx&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/lynx/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The browser everyone loves (to hate). &lt;em&gt;But secretly I love it.&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ docker run -it \
    --name lynx \
    jess/lynx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/lynx2.png&#34; alt=&#34;lynx&#34; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yes I know my blog looks GREAT in lynx&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Okay, those text based apps are fun and all but how about we spice things up a bit.&lt;/p&gt;

&lt;h2 id=&#34;guis&#34;&gt;GUIs&lt;/h2&gt;

&lt;p&gt;None of the images below use &lt;code&gt;X11-Forwarding&lt;/code&gt; with ssh. Because why should you ever have to install &lt;code&gt;ssh&lt;/code&gt; into a container? EWWW UNNECESSARY BLOAT!&lt;/p&gt;

&lt;p&gt;The images work by mounting the &lt;code&gt;X11&lt;/code&gt; socket into the container! Yippeeeee!&lt;/p&gt;

&lt;p&gt;The commands listed below are run on a linux machine. But Mac users, I have a special surprise for you. You can also do fun hacks with X11. Details are described &lt;a href=&#34;https://github.com/docker/docker/issues/8710&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note: my patch for &lt;code&gt;--device /dev/snd&lt;/code&gt; was added in Docker 1.8; before that you needed &lt;code&gt;-v /dev/snd:/dev/snd --privileged&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&#34;5-chrome&#34;&gt;5. Chrome&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/chrome/stable/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pretty sure everyone knows what chrome is, but my image comes with flash and the google talk plugin so you can do hangouts.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# --net host: may as well YOLO
# --cpuset-cpus / --memory: control the cpu and cap memory
# mount the X11 socket and pass the display
# mount Downloads (optional, but nice) and the chrome config
# dir if you want to save state
# --device /dev/snd: so we have sound
$ docker run -it \
    --net host \
    --cpuset-cpus 0 \
    --memory 512mb \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    -v $HOME/Downloads:/root/Downloads \
    -v $HOME/.config/google-chrome/:/data \
    --device /dev/snd \
    --name chrome \
    jess/chrome
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/chrome.png&#34; alt=&#34;chrome&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;6-spotify&#34;&gt;6. Spotify&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/spotify/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the 90s hits you ever wanted and more.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount the X11 socket, pass the display, and add
# the sound device
$ docker run -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/snd \
    --name spotify \
    jess/spotify
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/spotify.png&#34; alt=&#34;spotify&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;7-gparted&#34;&gt;7. Gparted&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/docker/docker/blob/master/contrib/desktop-integration/gparted/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Partition your device in a container.&lt;/p&gt;

&lt;p&gt;MIND BLOWN.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount the X11 socket, pass the display, and add
# the device to partition
$ docker run -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/sda:/dev/sda \
    --name gparted \
    jess/gparted
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/gparted.png&#34; alt=&#34;gparted&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;8-skype&#34;&gt;8. Skype&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/skype/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other video conferencer. This relies on running pulseaudio also in
a container.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# start pulseaudio: expose/publish the port and add
# the sound device
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    -p 4713:4713 \
    --device /dev/snd \
    --name pulseaudio \
    jess/pulseaudio
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# start skype: mount the X11 socket, pass the display,
# add the sound and video devices, and link the
# pulseaudio container
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/snd \
    --link pulseaudio:pulseaudio \
    -e PULSE_SERVER=pulseaudio \
    --device /dev/video0 \
    --name skype \
    jess/skype
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/skype1.png&#34; alt=&#34;skype&#34; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/skype2.png&#34; alt=&#34;skype2&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;9-tor-browser&#34;&gt;9. Tor Browser&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/tor-browser/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Tor, duh!&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount the X11 socket, pass the display, and add
# the sound device
$ docker run -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/snd \
    --name tor-browser \
    jess/tor-browser
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/tor-browser.png&#34; alt=&#34;tor-browser&#34; /&gt;&lt;/p&gt;

&lt;h3 id=&#34;10-cathode&#34;&gt;10. Cathode&lt;/h3&gt;

&lt;p&gt;&lt;a href=&#34;https://github.com/jessfraz/dockerfiles/blob/master/cathode/Dockerfile&#34;&gt;Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That super old school terminal.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# mount the X11 socket and pass the display
$ docker run -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --name cathode \
    jess/1995
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/cathode.png&#34; alt=&#34;cathode&#34; /&gt;&lt;/p&gt;

&lt;p&gt;So that&amp;rsquo;s enough examples for now. But of course I have more. All my Dockerfiles live here: &lt;a href=&#34;https://github.com/jessfraz/dockerfiles&#34;&gt;github.com/jessfraz/dockerfiles&lt;/a&gt; and all my docker images are on the hub: &lt;a href=&#34;https://hub.docker.com/u/jess/&#34;&gt;hub.docker.com/u/jess&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I gave a talk on this at &lt;a href=&#34;https://www.youtube.com/watch?v=cYsVvV1aVss&#34;&gt;Dockercon 2015&lt;/a&gt;,
check out the &lt;a href=&#34;https://www.youtube.com/watch?v=cYsVvV1aVss&#34;&gt;video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy Dockerizing!!!&lt;/p&gt;</description>
                </item>
                    
            <item>
                <title>Linux or Death (aka How to install Linux on a Mac)</title>
                <link>https://blog.jessfraz.com/post/linux-on-mac/</link>
                <pubDate>Thu, 27 Nov 2014 13:16:52 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/linux-on-mac/</guid>
                    <description>&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;This blog post is going to go over how to create a Linux partition on your mac and have everything working successfully.&lt;/p&gt;

&lt;p&gt;Okay so lets begin with: &lt;code&gt;sudo rm -rf / &amp;amp;&amp;amp; sudo kill -9 1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Hold the phone.&lt;/p&gt;

&lt;p&gt;That was a test. I really hope you didn&amp;rsquo;t just copy, paste, and run a command on your host without knowing anything about the author. A bit about me&amp;hellip; I have run this install about a dozen times on my mac, with various changes along the way. I can finally say I found the perfect way to install Linux, specifically Debian Jessie, on a mac.&lt;/p&gt;

&lt;p&gt;So now let&amp;rsquo;s actually get started.&lt;/p&gt;

&lt;h2 id=&#34;hardware&#34;&gt;Hardware&lt;/h2&gt;

&lt;p&gt;The below installation was done on my MacBook Pro Retina (15-inch, Late 2013).&lt;/p&gt;

&lt;p&gt;You will also need one of these &lt;a href=&#34;http://www.apple.com/shop/product/MD463LL/A/thunderbolt-to-gigabit-ethernet-adapter&#34;&gt;nifty ethernet to thunderbolt adapters&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&#34;refind-boot-manager&#34;&gt;rEFInd Boot Manager&lt;/h2&gt;

&lt;p&gt;The majority of times I installed Linux I ran &lt;code&gt;rEFInd&lt;/code&gt; on my mac, so I could keep my mac partition and have a separate Linux partition. This last time, however, I was so fed up with OSX and the fact that I never used it that I nuked it entirely. I boot purely into the Debian bootloader now. But I will save that doozy for another blog post if I think people are really as crazy as I am. &lt;code&gt;rEFInd&lt;/code&gt; is the lesser of two evils compared to the other popular option, &lt;code&gt;rEFIt&lt;/code&gt;; you will probably see some pain points and reasons for my &lt;em&gt;fuck it, nuke it&lt;/em&gt; attitude towards OSX.&lt;/p&gt;

&lt;p&gt;Instructions for installing &lt;code&gt;rEFInd&lt;/code&gt; can be found &lt;a href=&#34;http://www.rodsbooks.com/refind/installing.html#installsh&#34;&gt;here&lt;/a&gt;, but I will go into detail about how I install since you can tell those are a bit hard to read.&lt;/p&gt;

&lt;p&gt;If you don&amp;rsquo;t know how to open terminal just stop now, sorry this isn&amp;rsquo;t going to be one of those blog posts.&lt;/p&gt;

&lt;p&gt;The following works for OSX Mountain Lion.
If you are running Yosemite you are SOL
(not really but read &lt;a href=&#34;http://www.rodsbooks.com/refind/yosemite.html&#34;&gt;this&lt;/a&gt;
and I wish you luck on your journey):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ curl -O http://downloads.sourceforge.net/project/refind/0.8.3/refind-bin-0.8.3.zip
$ unzip refind-bin-0.8.3.zip
$ cd refind-bin-0.8.3/

# we are going to install with all drivers
# because you honestly never know what you
# will need, better be safe vs. sorry
$ sudo ./install.sh --alldrivers
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Okay now you need to edit &lt;code&gt;/EFI/refind/refind.conf&lt;/code&gt;.
The key differences you should make to the default config are as follows:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# Enable the scan for file system drivers
scan_driver_dirs EFI/tools/drivers,drivers

# Choose which drives to scan. This will only scan the internal hard drive.
scanfor internal

# Load the Linux file system driver
fs0: load ext4_x64.efi
# I used ext4 (duh)
# if you want to use btrfs
# comment out ext4 line
# and uncomment the next line
# fs0: load btrfs_x64.efi
fs0: map -r
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let&amp;rsquo;s check that it&amp;rsquo;s working. Restart your computer and you should see a super 90&amp;rsquo;s looking screen like:
&lt;img src=&#34;https://blog.jessfraz.com/img/refind.png&#34; alt=&#34;refind-boot-menu&#34; /&gt;&lt;/p&gt;

&lt;p&gt;If not, there are various debugging tips per version of Mac OSX &lt;a href=&#34;http://www.rodsbooks.com/refind/installing.html#sluggish&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;High five! Hard part&amp;rsquo;s done. Really. That is the hardest part.&lt;/p&gt;

&lt;h2 id=&#34;choose-your-linux-distro&#34;&gt;Choose your Linux Distro&lt;/h2&gt;

&lt;p&gt;Obviously my favorite is Debian Jessie, so I will
go into detail how to make a USB boot drive for that,
but you can substitute out whatever sub-par distro you choose.&lt;/p&gt;

&lt;p&gt;As of the writing of this article, Debian Jessie is on its Beta 2 release.
You can download the netinst image from &lt;a href=&#34;https://www.debian.org/devel/debian-installer/&#34;&gt;here&lt;/a&gt;.
Detailed instructions follow:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# download the iso
$ curl -O http://cdimage.debian.org/cdimage/jessie_di_beta_2/amd64/iso-cd/debian-jessie-DI-b2-amd64-netinst.iso

# convert the .iso file to .img
$ hdiutil convert -format UDRW -o debian-jessie.img debian-jessie-DI-b2-amd64-netinst.iso

# osx will most likely add the .dmg extension, rename it
$ mv debian-jessie.img.dmg debian-jessie.img

# view your mounted drives to find the usb device
$ diskutil list
# /dev/disk0
#    #:                       TYPE NAME                    SIZE       IDENTIFIER
#    0:      GUID_partition_scheme                        *500.3 GB   disk0
# /dev/disk1
#    #:                       TYPE NAME                    SIZE       IDENTIFIER
#    0:      USB_DEVICE                                   *100.1 GB   disk1

# unmount the usb device
$ diskutil unmountDisk /dev/disk1

# create the boot drive
$ sudo dd if=debian-jessie.img of=/dev/disk1

# eject the usb device
# mac osx will probably yell at you before you
# can even do this with a popup asking if you want
# to eject the unsupported device, you can click the
# eject button there, it&#39;s the same thing
$ diskutil eject /dev/disk1
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;partition-your-hd&#34;&gt;Partition Your HD&lt;/h2&gt;

&lt;p&gt;Next you need to partition your hard drive so
there is enough space for your linux distro. Here are the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Disk Utility&lt;/li&gt;
&lt;li&gt;Select the disk on the left panel (for example &amp;ldquo;500GB APPLE SSD&amp;rdquo;)&lt;/li&gt;
&lt;li&gt;On the partition scheme, resize the &amp;ldquo;Macintosh HD&amp;rdquo; partition: drag the bottom right edge of the partition scheme up until you have enough space for Debian. Apply the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Honestly the smaller you make the &amp;ldquo;Macintosh HD&amp;rdquo; partition the better, but maybe I am biased.&lt;/p&gt;
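
&lt;p&gt;If you would rather do the resize from the terminal, it can be sketched with &lt;code&gt;diskutil&lt;/code&gt;. The identifier and size below are examples, not values from my machine; check &lt;code&gt;diskutil list&lt;/code&gt; for yours first:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# shrink the OSX volume to 250GB, leaving the rest of
# the disk as free space for the Linux install
# (disk0s2 is an example identifier, verify with: diskutil list)
$ sudo diskutil resizeVolume /dev/disk0s2 250G
&lt;/code&gt;&lt;/pre&gt;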

&lt;h2 id=&#34;installing-your-linux-distro&#34;&gt;Installing your Linux Distro&lt;/h2&gt;

&lt;p&gt;Make sure your computer is off. Connect your Ethernet adapter and your USB drive we made earlier.&lt;/p&gt;

&lt;p&gt;Turn on your computer and hold down the option/alt key.&lt;/p&gt;

&lt;p&gt;Select the EFI Boot entry for your USB drive (it&amp;rsquo;s going to be the bright orange drive-looking thing) and continue to the installer screen.&lt;/p&gt;

&lt;p&gt;If your linux distro has Advanced Options like Debian for installing a certain Desktop Environment
(and it&amp;rsquo;s not Ubuntu or XUbuntu), don&amp;rsquo;t even bother setting those; we will handle that after the nvidia drivers.&lt;/p&gt;

&lt;p&gt;Continue through your install.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If you get a CD-ROM error, you need to mount the USB device to &lt;code&gt;/cdrom&lt;/code&gt;, super annoying.
The process will fail and you will be given some options,
choose the shell and run &lt;code&gt;mount /dev/sdc1 /cdrom&lt;/code&gt;. It might also be &lt;code&gt;/dev/sda1&lt;/code&gt; or &lt;code&gt;/dev/sdb1&lt;/code&gt;.
You will know it when you hit it because you &lt;em&gt;won&amp;rsquo;t&lt;/em&gt; get a mount error,
then return to the menu and continue where you left off on the &amp;ldquo;CD-ROM install&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;When the installer arrives at the partitioning step,
you can use the auto partitioning with all free space,
which is what I did; then in the review
screen I used &lt;code&gt;ext4&lt;/code&gt;.
If you are going to be running Docker on your system I highly recommend &lt;code&gt;ext4&lt;/code&gt; with the &lt;code&gt;overlay&lt;/code&gt; storage driver, and you should trust me.&lt;/p&gt;

&lt;p&gt;Complete the install and reboot.&lt;/p&gt;

&lt;h2 id=&#34;you-are-in-a-term-it-feels-bleek&#34;&gt;You are in a term, it feels bleak&lt;/h2&gt;

&lt;p&gt;Do not fret. I repeat do not fret.&lt;/p&gt;

&lt;p&gt;Log in as root; yes, I know you just created an actual user in the
installation steps, but ROOT ACCESS OR DEATH. Really though, we need to install &lt;code&gt;sudo&lt;/code&gt; and build a new kernel.
After all that is done, you can continue on your way as your user.&lt;/p&gt;

&lt;p&gt;Ok so at this point I know you are not copy and pasting this
shit into your terminal so I&amp;rsquo;ll try to keep it concise.
Remember, I&amp;rsquo;ve been here. We will get through this.&lt;/p&gt;

&lt;p&gt;Take a look at your &lt;code&gt;/etc/apt/sources.list&lt;/code&gt;; it is probably messed up and pointing to a CD-ROM.&lt;/p&gt;

&lt;p&gt;Change it to the following (or whatever your distro wants):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;deb http://ftp.us.debian.org/debian jessie main contrib non-free
deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free

deb http://ftp.debian.org/debian/ jessie-updates main contrib non-free

deb http://security.debian.org/ jessie/updates main contrib non-free
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we can:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ apt-get update
$ apt-get upgrade

# install sudo and add our other user to it
$ apt-get install sudo
$ adduser your_username sudo
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;let-s-build-a-kernel-from-source-wooooo&#34;&gt;Let&amp;rsquo;s build a kernel from source wooooo&lt;/h2&gt;

&lt;p&gt;Now here&amp;rsquo;s the thing. Debian Jessie comes with a &lt;code&gt;3.16.x&lt;/code&gt; kernel.
&lt;code&gt;3.17.x&lt;/code&gt; is really where the awesome is at for Mac OS X,
because it has hotplugging for thunderbolt. WHAAAAA? YES!!!&lt;/p&gt;

&lt;p&gt;So if you are going to ride with me on the awesome thunderbolt train
we need to build ourselves a kernel from source. Or if you reallllllyyy
trust me you can download my &lt;code&gt;.deb&lt;/code&gt; for kernel &lt;code&gt;3.17.3&lt;/code&gt;
&lt;a href=&#34;https://misc.j3ss.co/kernels/3.17.3/linux-image-3.17.3_3.17.3_amd64.deb&#34;&gt;here&lt;/a&gt;,
but honestly I build my own every time so take that as you will.&lt;/p&gt;

&lt;p&gt;Usually, I do these builds in a container.
But for the sake of this we can just do it on our host &lt;em&gt;cringe&lt;/em&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# install deps to build kernel
$ apt-get install curl kernel-package fakeroot

# download the source
# which at the time of writing this the latest is 3.17.4
$ cd /usr/src
$ curl -O https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.17.4.tar.xz
$ tar -xvf linux-3.17.4.tar.xz
$ cd linux-3.17.4/

# Options:
# you can either use my kernel .config
# which has thunderbolt and all modules enabled
$ curl -O https://misc.j3ss.co/kernels/3.17.3/.config

# OR
# you can use the menu to configure yourself
# be sure to turn on thunderbolt, that&#39;s the whole point
$ apt-get install libncurses5-dev # install menu dependency
$ make menuconfig

# clean the source tree
$ make-kpkg clean

# compile the kernel
# this will take about 30 min
$ fakeroot make-kpkg --initrd --revision=3.17.4 kernel_image

# install the new kernel
$ dpkg -i ../linux-image-3.17.4_3.17.4_amd64.deb

# reboot the system
$ reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After restarting, depending on your &lt;code&gt;refind.conf&lt;/code&gt;
file you may see a new option in your &lt;code&gt;rEFInd&lt;/code&gt; menu for the new kernel.
DO NOT select that, select the option that corresponds to the linux GRUB (or whichever)
bootloader you use. If you do not see one for GRUB or your flavor
bootloader you may need to bless the bootloader file on the Mac OSX side.
See &lt;a href=&#34;http://www.rodsbooks.com/refind/installing.html#osx&#34;&gt;these instructions on blessing&lt;/a&gt;.
Do you understand now why &lt;code&gt;rEFInd&lt;/code&gt; is the hardest part? It&amp;rsquo;s like iptables,
change one thing and everything comes crashing down.&lt;/p&gt;
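
&lt;p&gt;For reference, blessing boils down to a single command run from the OSX side. This is only a sketch: the &lt;code&gt;/Volumes/ESP&lt;/code&gt; mount point and the path to the &lt;code&gt;.efi&lt;/code&gt; file are assumptions about your setup, so follow the instructions linked above for the real paths:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# tell the firmware which EFI file to boot from
# (the mount point and .efi path below are examples;
#  substitute wherever your EFI System Partition is mounted
#  and the bootloader file you actually want blessed)
$ sudo bless --mount /Volumes/ESP --setBoot --file /Volumes/ESP/EFI/refind/refind_x64.efi
&lt;/code&gt;&lt;/pre&gt;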

&lt;p&gt;So I am going to assume you figured your shit out and
were able to enter your linux distro through &lt;code&gt;rEFInd&lt;/code&gt;
then through the distro bootloader (ex. GRUB).&lt;/p&gt;

&lt;p&gt;Let&amp;rsquo;s clean things up.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# Make sure we have the right kernel
$ uname -a
# Linux debian 3.17.4 #1 SMP Wed Nov 12 01:11:57 PST 2014 x86_64 GNU/Linux

# uninstall the shit we don&#39;t need now
$ apt-get purge --auto-remove kernel-package fakeroot

# you can even uninstall the kernel that came with
$ apt-get purge --auto-remove linux-image-3.16.*
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To avoid random controller freeze you need to set a particular kernel boot option.
Edit &lt;code&gt;/etc/default/grub&lt;/code&gt; and add the option &lt;code&gt;libata.force=noncq&lt;/code&gt;
(e.g. &lt;code&gt;GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;quiet libata.force=noncq&amp;quot;&lt;/code&gt;)
then run &lt;code&gt;update-grub&lt;/code&gt; and reboot your system.
If you are going to be installing Docker you may as well add
&lt;code&gt;GRUB_CMDLINE_LINUX=&amp;quot;cgroup_enable=memory swapaccount=1&amp;quot;&lt;/code&gt; while
you are there as well.&lt;/p&gt;
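
&lt;p&gt;Concretely, after those edits the relevant lines of &lt;code&gt;/etc/default/grub&lt;/code&gt; end up looking like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# avoid random controller freezes
GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;quiet libata.force=noncq&amp;quot;
# only needed if you plan on running Docker
GRUB_CMDLINE_LINUX=&amp;quot;cgroup_enable=memory swapaccount=1&amp;quot;

# then regenerate the grub config and reboot to apply
$ update-grub
$ reboot
&lt;/code&gt;&lt;/pre&gt;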

&lt;h2 id=&#34;drivers&#34;&gt;Drivers&lt;/h2&gt;

&lt;p&gt;Okay now we are to the important part, let&amp;rsquo;s get shit to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wifi&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ apt-get install firmware-linux-nonfree broadcom-sta-dkms
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Graphics&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ apt-get install nvidia-driver xorg xserver-xorg-video-intel

# probably want to restart after
$ reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Reverse Scroll (like Mac) Touchpad&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ clickpad_settings=&amp;quot;Section \&amp;quot;InputClass\&amp;quot;
    Identifier \&amp;quot;touchpad catchall\&amp;quot;
    Driver \&amp;quot;synaptics\&amp;quot;
    MatchIsTouchpad \&amp;quot;on\&amp;quot;
    Option \&amp;quot;VertScrollDelta\&amp;quot; \&amp;quot;-111\&amp;quot;
    Option \&amp;quot;HorizScrollDelta\&amp;quot; \&amp;quot;-111\&amp;quot;
EndSection&amp;quot;

$ mkdir -p /etc/X11/xorg.conf.d/
$ printf %s &amp;quot;$clickpad_settings&amp;quot; &amp;gt; /etc/X11/xorg.conf.d/50-synaptics-clickpad.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Font Anti-Aliasing&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ config=&amp;quot;&amp;lt;?xml version=&amp;#39;1.0&amp;#39;?&amp;gt;
&amp;lt;!DOCTYPE fontconfig SYSTEM &amp;#39;fonts.dtd&amp;#39;&amp;gt;
&amp;lt;fontconfig&amp;gt;
&amp;lt;match target=\&amp;quot;font\&amp;quot;&amp;gt;
&amp;lt;edit mode=\&amp;quot;assign\&amp;quot; name=\&amp;quot;rgba\&amp;quot;&amp;gt;
&amp;lt;const&amp;gt;rgb&amp;lt;/const&amp;gt;
&amp;lt;/edit&amp;gt;
&amp;lt;/match&amp;gt;
&amp;lt;match target=\&amp;quot;font\&amp;quot;&amp;gt;
&amp;lt;edit mode=\&amp;quot;assign\&amp;quot; name=\&amp;quot;hinting\&amp;quot;&amp;gt;
&amp;lt;bool&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;/edit&amp;gt;
&amp;lt;/match&amp;gt;
&amp;lt;match target=\&amp;quot;font\&amp;quot;&amp;gt;
&amp;lt;edit mode=\&amp;quot;assign\&amp;quot; name=\&amp;quot;hintstyle\&amp;quot;&amp;gt;
&amp;lt;const&amp;gt;hintslight&amp;lt;/const&amp;gt;
&amp;lt;/edit&amp;gt;
&amp;lt;/match&amp;gt;
&amp;lt;match target=\&amp;quot;font\&amp;quot;&amp;gt;
&amp;lt;edit mode=\&amp;quot;assign\&amp;quot; name=\&amp;quot;antialias\&amp;quot;&amp;gt;
&amp;lt;bool&amp;gt;true&amp;lt;/bool&amp;gt;
&amp;lt;/edit&amp;gt;
&amp;lt;/match&amp;gt;
&amp;lt;match target=\&amp;quot;font\&amp;quot;&amp;gt;
&amp;lt;edit mode=\&amp;quot;assign\&amp;quot; name=\&amp;quot;lcdfilter\&amp;quot;&amp;gt;
&amp;lt;const&amp;gt;lcddefault&amp;lt;/const&amp;gt;
&amp;lt;/edit&amp;gt;
&amp;lt;/match&amp;gt;
&amp;lt;/fontconfig&amp;gt;
&amp;quot;

$ printf %s &amp;quot;$config&amp;quot; &amp;gt; /etc/fonts/local.conf

$ dpkg-reconfigure fontconfig-config
# Choose:
#    Autohinter
#    Automatic
#    No
$ dpkg-reconfigure fontconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Desktop Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now is the time to install whatever desktop environment you love. &lt;code&gt;i3&lt;/code&gt; is my personal flavor:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;$ apt-get install dunst feh i3 i3lock i3status scrot suckless-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Screen Backlight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have a bash script, &lt;a href=&#34;https://raw.githubusercontent.com/jessfraz/dotfiles/master/bin/screen-backlight&#34;&gt;screen-backlight&lt;/a&gt;, made for the sole purpose of adjusting the screen backlight.&lt;/p&gt;

&lt;p&gt;You will want to add to your sudoers file the following line, so password is not required for the script to run:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# where your user is called user
# and your host is called host
user host = (root) NOPASSWD: /usr/local/bin/screen-backlight
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, for the example of &lt;code&gt;i3&lt;/code&gt;, you can add the following to your config:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;bindsym XF86MonBrightnessUp exec sudo screen-backlight up
bindsym XF86MonBrightnessDown exec sudo screen-backlight down
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Keyboard Backlight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The same goes for the keyboard backlight. I have a bash script, &lt;a href=&#34;https://raw.githubusercontent.com/jessfraz/dotfiles/master/bin/keyboard-backlight&#34;&gt;keyboard-backlight&lt;/a&gt;, made for the sole purpose of adjusting the keyboard backlight.&lt;/p&gt;

&lt;p&gt;You will want to add to your sudoers file the following line, so password is not required for the script to run:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;# where your user is called user
# and your host is called host
user host = (root) NOPASSWD: /usr/local/bin/keyboard-backlight
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, for the example of &lt;code&gt;i3&lt;/code&gt;, you can add the following to your config:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-sh&#34;&gt;bindsym XF86KbdBrightnessUp exec sudo keyboard-backlight up
bindsym XF86KbdBrightnessDown exec sudo keyboard-backlight down
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Things that won&amp;rsquo;t work in Debian&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have not gotten the iSight camera or Screen Brightness to work.
Other than that, everything is perfect, and thunderbolt hotplugging is a dream.
The retina resolution is absolutely stunning; it&amp;rsquo;s seriously hard for me to switch to my Thinkpad,
which has 32GB of memory (so I should want to switch).&lt;/p&gt;

&lt;p&gt;Feel free to reach out to me via twitter &lt;a href=&#34;https://twitter.com/jessfraz&#34;&gt;@jessfraz&lt;/a&gt; with any updates or how much you love your linux partition.&lt;/p&gt;</description>
                </item>
                    
            <item>
                <title>How to Make Foursquare your Bitch</title>
                <link>https://blog.jessfraz.com/post/how-to-make-foursquare-your-bitch/</link>
                <pubDate>Thu, 01 Dec 2011 13:16:52 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/how-to-make-foursquare-your-bitch/</guid>
                    <description>&lt;p&gt;I would just like to preface this by saying I do not condone cheating but I thought of this as a &amp;ldquo;challenge&amp;rdquo; and not so much as &amp;ldquo;cheating&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;A project I am working on required me to check in to places on foursquare that I was not currently near (or even close to). Now the answer to this was pretty simple: check in through the API using the lat and long of the venue I was &amp;ldquo;supposedly&amp;rdquo; at. Boom. Worked without a flaw. Ok, I will admit it, I am kind of a competitive person and, well, the foursquare badges are so pretty I immediately started thinking about how I could check in remotely and collect them all. But surely, surely foursquare must have some sort of catches in place that do not allow this. Because I was ever so curious to find out what they may be (&amp;#8230;and how to get around them), I decided to try.&lt;/p&gt;

&lt;h2 id=&#34;authentication&#34;&gt;Authentication&lt;/h2&gt;

&lt;p&gt;Let&amp;rsquo;s start with the auth. If a user has not authed your application or is not currently logged into foursquare (assuming you created an app in the &lt;a href=&#34;https://developer.foursquare.com/&#34; target=&#34;_blank&#34;&gt;foursquare for developers dashboard&lt;/a&gt;) redirect them as follows.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;$clientId = &amp;quot;YOUR-CLIENT-ID&amp;quot;;
$redirectUri = &amp;quot;YOUR-REDIRECT-URI&amp;quot;;
header(&amp;quot;Location:https://foursquare.com/oauth2/authenticate?client_id=&amp;quot; . $clientId .&amp;quot;&amp;amp;response_type=code&amp;amp;redirect_uri=&amp;quot; . $redirectUri);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After authenticating, grab the authentication code foursquare redirected the user with.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;$code = $_REQUEST[&#39;code&#39;];
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now start a session and save your access token to it. This way we can easily see if the user is an authenticated app user by checking the session variable. You could also save it as a cookie if you want it to last longer.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;session_start();

if (!isset($_SESSION[&#39;access_token&#39;])) {
   $app_token_url = &amp;quot;https://foursquare.com/oauth2/access_token?client_id=&amp;quot; . $clientId . &amp;quot;&amp;amp;client_secret=&amp;quot; . $clientSecret . &amp;quot;&amp;amp;grant_type=authorization_code&amp;amp;redirect_uri=&amp;quot; . $redirectUri . &amp;quot;&amp;amp;code=&amp;quot; . $code;

   $ch = curl_init();
   curl_setopt($ch, CURLOPT_URL, $app_token_url);
   curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
   $foursquare_token = curl_exec($ch);
   curl_close($ch);

   $array_token              = json_decode($foursquare_token, true);
   $token                    = $array_token[&#39;access_token&#39;];
   $_SESSION[&#39;access_token&#39;] = $token;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;get-location-data&#34;&gt;Get Location Data&lt;/h2&gt;

&lt;p&gt;Ok now you have your token and we can get into the fun part, winning at foursquare! To check into a venue you need to post the following parameters to foursquare: &lt;code&gt;venueId&lt;/code&gt;, &lt;code&gt;ll&lt;/code&gt; (latitude, longitude), &lt;code&gt;llAcc&lt;/code&gt; (accuracy of the previous points), &lt;code&gt;oauth_token&lt;/code&gt;, and &lt;code&gt;v&lt;/code&gt; (version, which foursquare takes in as today&amp;rsquo;s date in the form &amp;ldquo;Ymd&amp;rdquo;).&lt;/p&gt;

&lt;p&gt;So to make checking in to various venues easier, I decided the only things I want to pass to this function are the venueId, v, and oauth_token. This requires making a function to return the lat and long of the venue from the foursquare API.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;function getLatLong($venue_id, $v, $oauth_token) {
   $venue_url = &#39;https://api.foursquare.com/v2/venues/&#39; . $venue_id . &#39;?oauth_token=&#39; . $oauth_token . &#39;&amp;amp;v=&#39; . $v;

   $response       = file_get_contents($venue_url);
   $venue          = json_decode($response, true);
   $venue_response = $venue[&#39;response&#39;];
   $location       = $venue_response[&#39;venue&#39;][&#39;location&#39;];
   $lat            = $location[&#39;lat&#39;];
   $long           = $location[&#39;lng&#39;];

   return $lat . &#39;, &#39; . $long;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;checkin&#34;&gt;Checkin&lt;/h2&gt;

&lt;p&gt;Now we can send this value into the checkin function.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;function checkin($venue_id, $v, $oauth_token, $latlong) {
   $checkin_url = &amp;quot;https://api.foursquare.com/v2/checkins/add&amp;quot;;

   $parameters = array(
       &#39;venueId&#39; =&amp;gt; $venue_id,
       &#39;broadcast&#39; =&amp;gt; &#39;private&#39;, // I set this to private, but it can be public
       &#39;ll&#39; =&amp;gt; $latlong,
       &#39;llAcc&#39; =&amp;gt; &#39;1&#39;,
       &#39;oauth_token&#39; =&amp;gt; $oauth_token,
       &#39;v&#39; =&amp;gt; $v
   );

   $curl = curl_init($checkin_url);
   curl_setopt($curl, CURLOPT_POST, true);
   curl_setopt($curl, CURLOPT_POSTFIELDS, $parameters);
   curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
   $response = curl_exec($curl);

   return $response;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;response&#34;&gt;Response&lt;/h2&gt;

&lt;p&gt;The response from this will be in the following format.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;{
    &amp;quot;meta&amp;quot;: {
        &amp;quot;code&amp;quot;: 200
    },
    &amp;quot;notifications&amp;quot;: [
        {
            &amp;quot;type&amp;quot;: &amp;quot;notificationTray&amp;quot;,
            &amp;quot;item&amp;quot;: {
                &amp;quot;unreadCount&amp;quot;: 0
            }
        }
    ],
    &amp;quot;response&amp;quot;: {
        &amp;quot;checkin&amp;quot;: {
            &amp;quot;id&amp;quot;: &amp;quot;4d627f6814963704dc28ff94&amp;quot;,
            &amp;quot;createdAt&amp;quot;: 1298300776,
            &amp;quot;type&amp;quot;: &amp;quot;checkin&amp;quot;,
            &amp;quot;shout&amp;quot;: &amp;quot;Another one of these days. #snow&amp;quot;,
            &amp;quot;timeZoneOffset&amp;quot;: -300,
            &amp;quot;user&amp;quot;: {
                &amp;quot;id&amp;quot;: &amp;quot;32&amp;quot;,
                &amp;quot;firstName&amp;quot;: &amp;quot;Dens&amp;quot;,
                &amp;quot;photo&amp;quot;: {
                    &amp;quot;prefix&amp;quot;: &amp;quot;https://irs0.4sqi.net/img/user/&amp;quot;,
                    &amp;quot;suffix&amp;quot;: &amp;quot;/32_1239135232.jpg&amp;quot;
                }
            },
            &amp;quot;venue&amp;quot;: {
                &amp;quot;id&amp;quot;: &amp;quot;408c5100f964a520c6f21ee3&amp;quot;,
                &amp;quot;name&amp;quot;: &amp;quot;Tompkins Square Park&amp;quot;,
                &amp;quot;contact&amp;quot;: {
                    &amp;quot;phone&amp;quot;: &amp;quot;2123877685&amp;quot;,
                    &amp;quot;formattedPhone&amp;quot;: &amp;quot;(212) 387-7685&amp;quot;
                },
                &amp;quot;location&amp;quot;: {
                    &amp;quot;address&amp;quot;: &amp;quot;E 7th St. to E 10th St.&amp;quot;,
                    &amp;quot;crossStreet&amp;quot;: &amp;quot;btwn Ave. A &amp;amp; B&amp;quot;,
                    &amp;quot;lat&amp;quot;: 40.72651075083395,
                    &amp;quot;lng&amp;quot;: -73.98171901702881,
                    &amp;quot;postalCode&amp;quot;: &amp;quot;10009&amp;quot;,
                    &amp;quot;city&amp;quot;: &amp;quot;New York&amp;quot;,
                    &amp;quot;state&amp;quot;: &amp;quot;NY&amp;quot;,
                    &amp;quot;country&amp;quot;: &amp;quot;United States&amp;quot;,
                    &amp;quot;cc&amp;quot;: &amp;quot;US&amp;quot;
                },
                &amp;quot;categories&amp;quot;: [
                    {
                        &amp;quot;id&amp;quot;: &amp;quot;4bf58dd8d48988d163941735&amp;quot;,
                        &amp;quot;name&amp;quot;: &amp;quot;Park&amp;quot;,
                        &amp;quot;pluralName&amp;quot;: &amp;quot;Parks&amp;quot;,
                        &amp;quot;shortName&amp;quot;: &amp;quot;Park&amp;quot;,
                        &amp;quot;icon&amp;quot;: {
                            &amp;quot;prefix&amp;quot;: &amp;quot;https://foursquare.com/img/categories_v2/parks_outdoors/park_&amp;quot;,
                            &amp;quot;suffix&amp;quot;: &amp;quot;.png&amp;quot;
                        },
                        &amp;quot;primary&amp;quot;: true
                    }
                ],
                &amp;quot;verified&amp;quot;: true,
                &amp;quot;stats&amp;quot;: {
                    &amp;quot;checkinsCount&amp;quot;: 25523,
                    &amp;quot;usersCount&amp;quot;: 8932,
                    &amp;quot;tipCount&amp;quot;: 85
                },
                &amp;quot;url&amp;quot;: &amp;quot;http://www.nycgovparks.org/parks/tompkinssquarepark&amp;quot;,
                &amp;quot;likes&amp;quot;: {
                    &amp;quot;count&amp;quot;: 0,
                    &amp;quot;groups&amp;quot;: []
                },
                &amp;quot;specials&amp;quot;: {
                    &amp;quot;count&amp;quot;: 0
                }
            },
            &amp;quot;source&amp;quot;: {
                &amp;quot;name&amp;quot;: &amp;quot;foursquare for Web&amp;quot;,
                &amp;quot;url&amp;quot;: &amp;quot;https://foursquare.com/&amp;quot;
            },
            &amp;quot;photos&amp;quot;: {
                &amp;quot;count&amp;quot;: 1,
                &amp;quot;items&amp;quot;: [
                    {
                        &amp;quot;id&amp;quot;: &amp;quot;4d627f80d47328fd96bf3448&amp;quot;,
                        &amp;quot;createdAt&amp;quot;: 1298300800,
                        &amp;quot;prefix&amp;quot;: &amp;quot;https://irs3.4sqi.net/img/general/&amp;quot;,
                        &amp;quot;suffix&amp;quot;: &amp;quot;/UBTEFRRMLYOHHX4RWHFTGQKSDMY14A1JLHURUTG5VUJ02KQ0.jpg&amp;quot;,
                        &amp;quot;width&amp;quot;: 720,
                        &amp;quot;height&amp;quot;: 540,
                        &amp;quot;user&amp;quot;: {
                            &amp;quot;id&amp;quot;: &amp;quot;32&amp;quot;,
                            &amp;quot;firstName&amp;quot;: &amp;quot;Dens&amp;quot;,
                            &amp;quot;photo&amp;quot;: {
                                &amp;quot;prefix&amp;quot;: &amp;quot;https://irs0.4sqi.net/img/user/&amp;quot;,
                                &amp;quot;suffix&amp;quot;: &amp;quot;/32_1239135232.jpg&amp;quot;
                            }
                        },
                        &amp;quot;visibility&amp;quot;: &amp;quot;private&amp;quot;
                    }
                ]
            },
            &amp;quot;likes&amp;quot;: {
                &amp;quot;count&amp;quot;: 0,
                &amp;quot;groups&amp;quot;: []
            },
            &amp;quot;like&amp;quot;: false,
            &amp;quot;score&amp;quot;: {
                &amp;quot;total&amp;quot;: 1,
                &amp;quot;scores&amp;quot;: [
                    {
                        &amp;quot;points&amp;quot;: 1,
                        &amp;quot;icon&amp;quot;: &amp;quot;https://foursquare.com/img/points/defaultpointsicon2.png&amp;quot;,
                        &amp;quot;message&amp;quot;: &amp;quot;Have fun out there!&amp;quot;
                    }
                ]
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;summary&#34;&gt;Summary&lt;/h2&gt;

&lt;p&gt;So what I found was this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can check into 15-20 places in a loop without losing points or disabling the chance to win badges, but then you have to take a break for a few hours.&lt;/li&gt;
&lt;li&gt;When changing locations over a vast distance (ex. Los Angeles &amp;#8211;&amp;gt; San Francisco), you must wait the amount of time it takes to reasonably cover that distance before checking in or else you will not be able to earn badges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these in mind, this is how I approached earning as many badges in as little time as possible. Once I was &amp;ldquo;in&amp;rdquo; a location area, I looped through a set array of about 15 venues. I made these arrays based off the places most blogs said you needed to win a badge. The expertise badges are easy: check in to 3 different venues categorized as BBQ Joints and earn the badge. The city badges all have lists in foursquare that house the venues you need to go to; hit five and you get the badge.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-php&#34;&gt;// venue IDs for Chicago&#39;s Windy City badge
$windy_city_badge = array(
   &#39;4b876c65f964a520e2be31e3&#39;,
   &#39;4b4e0d9ff964a520c0df26e3&#39;,
   &#39;4e1e0e65aeb75f77be667547&#39;,
   &#39;4e70c1aa814dd2cb962265cb&#39;,
   &#39;49dce128f964a520b65f1fe3&#39;
);
&lt;/code&gt;&lt;/pre&gt;
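From there you just fire the check-ins off in a loop with a pause between each one. Something like this sketch (the `run_checkins` helper and curl plumbing are illustrative, the endpoint is foursquare&amp;rsquo;s v2 `checkins/add`, and you&amp;rsquo;d substitute your own OAuth token):

```php
// Sketch of the check-in loop: POST each venue ID in the badge array to
// foursquare's v2 checkins/add endpoint, then pause so the rate limiter
// stays happy. $oauth_token is a placeholder for your own token.
function checkin_url($venue_id, $oauth_token) {
    return 'https://api.foursquare.com/v2/checkins/add?' . http_build_query(array(
        'venueId'     => $venue_id,
        'oauth_token' => $oauth_token,
    ));
}

function run_checkins($venues, $oauth_token) {
    foreach ($venues as $venue_id) {
        $ch = curl_init(checkin_url($venue_id, $oauth_token));
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        curl_close($ch);
        sleep(rand(120, 300)); // space out the check-ins a bit
    }
}
```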

&lt;p&gt;I would recommend conquering the city badges first because you will probably earn all the expertise badges in the process.&lt;/p&gt;

&lt;p&gt;Go get &amp;#8216;em! Haters gonna hate, but you just made foursquare yo biotch.&lt;/p&gt;</description>
                </item>
                    
            <item>
                <title>What would Tupac do?</title>
                <link>https://blog.jessfraz.com/post/what-would-2pac-do/</link>
                <pubDate>Wed, 30 Nov 2011 13:16:52 -0400</pubDate>
                    
                    <guid>https://blog.jessfraz.com/post/what-would-2pac-do/</guid>
                    <description>&lt;p&gt;&lt;img src=&#34;https://blog.jessfraz.com/img/2pac.jpg&#34; alt=&#34;2pac&#34; /&gt;&lt;/p&gt;

&lt;p&gt;I saw this sign outside a coffee shop. Most people would just walk by and laugh, but it got me thinking. What would 2PAC do? Seeing as 2PAC is one of my favorite artists and I was already walking with earbuds on, I started playing an oldie but goodie on my iPhone, &amp;ldquo;Changes&amp;rdquo;.&lt;/p&gt;

&lt;p&gt;Now if you have never heard of &lt;a href=&#34;http://rapgenius.com&#34; target=&#34;_blank&#34;&gt;rapgenius.com&lt;/a&gt; before, you should definitely check it out. It has translations of basically every popular rap song, and artists can log in and say what the lyrics actually mean. Seeing as 2PAC is deceased (sadness), and I don&amp;rsquo;t think the holographic 2PAC will be logging into Rap Genius anytime soon&amp;#8230; I pondered the meaning of the lyrics myself.&lt;/p&gt;

&lt;p&gt;My conclusion was that if you are not happy with the way things are going, you should stand up and try to change them. &amp;ldquo;Some things will never change,&amp;rdquo; but what you do have control over is changing yourself. Just like Gandhi said, &amp;ldquo;If we could change ourselves, the tendencies in the world would also change. As a man changes his own nature, so does the attitude of the world change towards him… We need not wait to see what others do.&amp;rdquo;&lt;/p&gt;

&lt;p&gt;Presently, I have made some rather large changes in my life. I have decided to take a new job in New York. The hardest part of this decision was leaving my current job behind. Over the past year, my co-workers have gone from being friends to being family. But the world of the web is constantly changing, and I am excited to learn new things and expand my experience.&lt;/p&gt;

&lt;p&gt;2PAC&amp;rsquo;s interpretation of change (from what I gather) is that it should be sparked by something, but the truth is change can come about for a variety of reasons. I chose change to deepen my knowledge in the field I care so dearly about, not to mention my love for the city of New York.&lt;/p&gt;

&lt;p&gt;So, yes, the next time you are faced with a decision you can think &amp;lsquo;What would 2PAC do?&amp;rsquo;, but also trust yourself, because when it comes down to it you are your own best advocate.&lt;/p&gt;</description>
                </item>
                    
            </channel>
        </rss>
