This is my 6th DEF CON and I plan on coming back for more! There is a lot of life and energy at the con that I haven't been able to find at other conferences. A big appeal to me is that DEF CON itself is kind of a wrapper event where you find a number of mini-conferences (called Villages), so even if the main tracks don't interest you, odds are you'll find something at the 27-ish villages that run at the same time.
Registration & Badge
I got my badge through Black Hat this year, so there was zero wait time for me. Friends of mine who got in line early were able to get through pretty quickly; other people I spoke to at the con said they had to wait a while for their badges.
Some badges at Black Hat had their batteries pop, spilling electrolyte on people (reportedly due to batteries being inserted backwards?). There were other reports of badges that stopped working, but from some of the antics I've seen (people poking at the badge with the metal clip) I wonder how much of that was actual hardware problems.
This is the Human badge I received:
As in years past you get a paper book which includes information about the villages and main track events with maps in the back. Helpful if you don't want to risk navigating the WiFi at the convention. Also back this year is the Hacker Tracker app for Android and iOS.
The mobile app worked OK this year, but it's still pretty bare-bones and has some issues. It's FAST, which is a refreshing change from the Black Hat app that seems to call home every time you switch pages. My suggestion is to set calendar reminders for anything you care to attend, though, as I regularly got notifications from the app delayed 4 to 10 hours AFTER the events had ended. Finding village presentations was occasionally a challenge as well, since some villages have their own rooms while others are lumped into a 'Villages' area, and the app made it difficult to differentiate the two.
Sessions, Demo Labs and Workshops
This was a pretty good year for DEF CON - I found a number of good presentations that kept me interested. While I was not able to make it into any official DEF CON-sponsored workshops (preregistration slots fill up fast, well in advance of the conference), I was able to attend an AI Village workshop covering the foundations of machine learning.
DEF CON has 'demo labs' similar to the Arsenal at Black Hat, which is pretty cool. Not having to listen to demos in a vendor area was refreshing, and there were a number of interesting presentations:
- boofuzz - open source network protocol fuzzing framework
- EAPHammer - toolkit for performing targeted evil twin attacks against WPA2-Enterprise networks
- Halcyon IDE - An interesting IDE for developing Nmap scripts
- Lossy Hash Table - described an intriguing approach to brute force cracking passwords
Sessions From Black Hat 2018
Every year there is some overlap in the presentations given at both conferences and this year is no exception. Seeing them at DEF CON is much, much cheaper than ponying up the cash for Black Hat. Here are a few overlaps that I noticed (not a comprehensive list):
- An Attacker Looks at Docker: Approaching Multi-Container Applications
- Breaking Parser Logic: Take Your Path Normalization Off and Pop 0days Out!
- Finding Xori: Malware Analysis Triage with Automated Disassembly
- Automated Discovery of Deserialization Gadget Chains
- Compression Oracle Attacks on VPN Networks
- Fire & Ice: Making and Breaking macOS Firewalls
- Reverse Engineering Windows Defender's Emulator
- Edge Side Include Injection: Abusing Caching Servers into SSRF and Transparent Session Hijacking
Notes from Sessions and Villages I visited
Presenter: Kat Sweet [c/o Duo Security]
A prevailing notion is that security education is something that only smallish teams can successfully achieve and that the larger your organization the more diluted and ineffective security education becomes. A key question is: How can you do security education on a budget and make it effective?
Issues and challenges to effective security education include:
- Employee engagement / generating interest
- Perception of employees that security education is generic and does not apply to their roles
- Usability of education tools
- Retention of skills learned
- Compliance (requirements, tracking participation)
- Scale: how do you replicate success in a larger organization?
- Limited resources (money, tools)
- Limited resources (time)
Traditional 'role-based' approaches to security education tend to include only very coarse buckets:
- Senior Management and Executives
- Technical teams like IT or Software Engineering (with perhaps a mention of the OWASP top 10 and nothing about OpSec)
- Everyone else
When trying to go more fine-grained you quickly run up against your constraints (time/money/tooling/engagement). What approach does the presenter suggest we take?:
- Focus on the problem you are trying to solve. How does this align with business interests?
- Ask yourself: What are the high value assets that need protecting? Where is the greatest risk that needs mitigation? WHO deals with the highest valued assets or riskiest business processes?
- Prioritize a list of groups to educate
- Basically: effective threat modeling. We tell people to 'think like a hacker', but how do we help them get there? Have them consider how their level of access could negatively impact the company and what would happen if an attacker got that same level of access.
Once you know what assets need protecting and which business processes have the greatest risk associated with them, group employees by their interaction with these assets and processes. Ideally you would have a mix of technical and non-technical team members across seniority horizontals to build collaboration and help everyone understand their role more holistically.
Kat engaged with the audience and came up with a list of potentially high risk teams at companies in general:
- Executive Assistants (gatekeepers for highest profile people in the company)
- Sales/marketing (deal with high volumes of emails, attachments, URLs and phone calls. A membrane between your company and the outside world)
- Customer Service (again, high volumes of communication and another membrane between your business and the outside world)
- Internal IT support (exhausted, overworked and fielding lots of tedious requests)
- InfoSec (lots of data about what is 'wrong' at the company, high levels of access to systems, distracted during incidents)
- Facilities (can't have information security without physical security!)
Your team needs to learn and understand the attack surface of the entire company and target education that can be easily consumed and retained by employees. Remember: Team members likely know and understand more about their roles than the InfoSec team does. Use these training times as an opportunity to learn. Build a conversation and collaborate/listen to the people that make up your company (nobody likes a lecture, most people like context for security decisions).
From an education standpoint, Kat emphasizes the human touch. Automation has its uses, but education is more effective when you have person-to-person communication. It can sometimes be hard to keep a pulse on emerging/shifting business needs, and having regular contact with workers can help you build that picture.
- DNS foundations (including things like MDNS and LLMNR)
- Text based steganography
- Traditional dns exfiltration
- Combining dns queries and text based steganography
- PacketWhisper exfiltration toolset
The problem with traditional DNS exfiltration is that it lights up SIEM alerts like a Christmas tree and tips defenders off to what you are doing. Since you would have to control the destination DNS server, it also gives defenders a good way to point a finger at you for attribution. So he was looking for a better way. The presenter has had a long-time interest in text-based steganography and ended up applying it to the problem of DNS exfiltration.
He presents a tool that uses text-based steganography to enhance the reliability of exfiltrating small amounts of information via DNS. Using his tool you can accomplish the exfiltration without an attacker-controlled DNS server, and the sending and receiving systems don't have to connect directly. If the resolving DNS server is outside the organization and the attacker can see the message traffic, exfiltration is possible. I could not find a repo for his PacketWhisper tool that does this.
A good suggestion he had for leveraging this approach is to spoof CloudFront subdomains, since those look weird anyway (easier to hide meaning with steganography). Overall a very good talk, well worth attending! I just wish I could find a repo. :)
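To make the idea concrete, here is a toy sketch of text-based steganography over DNS lookups. This illustrates only the general concept from the talk, not PacketWhisper's actual cipher (which I couldn't inspect); the wordlist and label scheme are my own invention.

```python
# Toy sketch: each nibble (4 bits) of the payload selects one word from a
# fixed list of 16 benign-looking labels. A passive observer who knows the
# wordlist can reverse the mapping just by watching the DNS queries go by.
WORDLIST = [
    "cdn", "static", "img", "api", "assets", "media", "edge", "cache",
    "app", "data", "files", "web", "node", "host", "sync", "push",
]
INDEX = {w: i for i, w in enumerate(WORDLIST)}

def encode(payload: bytes, domain: str = "cloudfront.net") -> list:
    """Turn each byte into two labels (high nibble, low nibble)."""
    queries = []
    for b in payload:
        hi, lo = b >> 4, b & 0x0F
        queries.append(f"{WORDLIST[hi]}-{WORDLIST[lo]}.{domain}")
    return queries

def decode(queries: list) -> bytes:
    out = bytearray()
    for q in queries:
        hi_word, lo_word = q.split(".", 1)[0].split("-")
        out.append((INDEX[hi_word] << 4) | INDEX[lo_word])
    return bytes(out)

qs = encode(b"secret")
print(qs[0])                     # "cache-api.cloudfront.net"
assert decode(qs) == b"secret"
```

In a real attack the "queries" would simply be resolved by the victim host; the attacker never needs the lookups to succeed, only to observe them in transit.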
Presenter: William Suthers
The point of this presentation was to illustrate the value of layer 2 passive recon. He found some interesting results that he has communicated back to network device manufacturers.
In a pen test engagement you are often limited by a short timeline, a limited target scope, and the expectation that you'll find results. Networks are dynamic and change frequently at customer sites, and you know just about nothing before going in. There are also new products out there that try to inject false intelligence about target environments, which can slow you down. Moreover, a lot of compliance groups within a company are looking for a 'checkbox' instead of trying to secure and protect the environment.
Malicious actors know that a well-defined 'scope' of engagement is not a defense mechanism. Hackers won't care about words on a piece of paper if something is network accessible. Oftentimes a company's blue team is not kept in the loop on issues, so they can't plan effective defenses. Attackers know it is hard for pen testers to demonstrate evidence for why scope should increase in a way that non-technical decision makers can understand. These factors played into the development of the Prebellico tool.
If your layer 2 is not secure, it's incredibly challenging to build a secure network. Prebellico is a layer 2 passive recon tool that can help you:
- Enumerate and validate assumptions through a pre-engagement passive scan
- Fingerprint the environment prior to your pen testing (to give you a baseline on the environment)
- Lets you validate the engagement scope and intent (being passive it won't harm active production systems)
- Helps you find other exploitable issues
- Shows your customer's technical teams more of what their real attack surface looks like
Layer 2 is a great place to find out:
- Network topologies and trust relationships
- Network service info
- Authentication methods that are in use
- Network host configuration
- Network egress policies (if you want to get your traffic out, it should look as much like legit traffic as possible)
- Whether or not a managed service provider is being used
- Supported protocols and open or used TCP/UDP ports ('reverse port scanning')
- Other switch configuration meta-data
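The kind of inventory in the list above can be built without sending a single packet. Prebellico's internals aren't shown in this post, so the sketch below is a minimal illustration of the idea, assuming frames have already been captured and parsed (e.g. from a pcap); the field names are my own.

```python
# Minimal sketch of passive layer 2 inventory-building: tally hosts,
# protocols, and listening ports from frames observed on the wire.
from collections import defaultdict

def summarize(frames):
    hosts = defaultdict(set)        # MAC -> set of IPs seen
    protocols = set()               # protocols observed on the wire
    open_ports = defaultdict(set)   # IP -> ports it answered on ("reverse port scanning")
    for f in frames:
        hosts[f["src_mac"]].add(f["src_ip"])
        protocols.add(f["proto"])
        if f.get("flags") == "SYN-ACK":    # a SYN-ACK reveals a listening port
            open_ports[f["src_ip"]].add(f["src_port"])
    return hosts, protocols, open_ports

frames = [
    {"src_mac": "aa:bb:cc:00:00:01", "src_ip": "10.0.0.5", "proto": "ARP"},
    {"src_mac": "aa:bb:cc:00:00:02", "src_ip": "10.0.0.9", "proto": "TCP",
     "src_port": 445, "flags": "SYN-ACK"},
    {"src_mac": "aa:bb:cc:00:00:02", "src_ip": "10.0.0.9", "proto": "LLMNR"},
]
hosts, protocols, open_ports = summarize(frames)
print(sorted(protocols))    # ['ARP', 'LLMNR', 'TCP']
print(dict(open_ports))     # {'10.0.0.9': {445}}
```

Even this trivial tally surfaces chatty protocols (LLMNR) and listening services without ever touching a production system.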
Overall I found the presentation to be engaging and I appreciate the value in further examining passive reconnaissance. My thought is that a similar approach could be taken to analyzing things like AWS VPC flow logs.
Presenter: Fedor Sakharov
Fedor presented a way to build a Web Application Firewall (WAF) with neural networks based on HTTP request/response patterns (sql injection, xss, XXE, path traversal, command injection, object injection... anything via http parameters). The goal is to have Machine Learning detect previously unseen issues with a deep learning model that doesn't require additional training before deployment and that can yield interpretable results.
Their first try used a recurrent neural network, since RNNs work well with text. It was a 'somewhat good' approach, but had several key issues:
- The results were not interpretable
- They had to construct a sample 'malicious data' training set, which is hard to get right with all the permutations of exploits out there (manual labeling is a time killer)
On the second try they added an 'attention' layer, which aided the learning process. It improved their ability to interpret the model's decisions, yet did not solve the other classification issues they ran into (training set, manual classification).
The third try added 'anomaly detection', since attacks look more like anomalies than like regular requests. They could avoid manually labeling data and no longer needed to generate malicious samples. It used a recurrent neural network (a music-generation-style seq2seq model) with multi-layered LSTM encoder and decoder. They built a model whose outputs are the probabilities of each character in the sequence of an HTTP request.
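The core scoring idea is that a model trained only on benign traffic assigns low probability to the characters of an attack. As a stand-in for their LSTM seq2seq model, this sketch uses a simple character-bigram model; it is purely illustrative and not the presenters' code.

```python
# Train a character-bigram model on benign requests, then score new requests
# by mean negative log-probability per character (higher = more anomalous).
import math
from collections import defaultdict

def train(benign_requests):
    counts = defaultdict(lambda: defaultdict(int))
    for req in benign_requests:
        for a, b in zip(req, req[1:]):
            counts[a][b] += 1
    return counts

def anomaly_score(counts, req, alpha=1.0, vocab=128):
    nll = 0.0
    for a, b in zip(req, req[1:]):
        total = sum(counts[a].values())
        p = (counts[a][b] + alpha) / (total + alpha * vocab)  # Laplace smoothing
        nll -= math.log(p)
    return nll / max(len(req) - 1, 1)

benign = ["GET /index.html HTTP/1.1", "GET /img/logo.png HTTP/1.1"] * 50
model = train(benign)
normal = anomaly_score(model, "GET /about.html HTTP/1.1")
attack = anomaly_score(model, "GET /?q=' OR 1=1 -- HTTP/1.1")
print(normal < attack)  # True: the injection scores as more anomalous
```

The LSTM version generalizes this by conditioning each character's probability on the whole prefix instead of just the previous character, which is why it needs no hand-labeled attack samples.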
I had a hard time reading the projector screen to get their GitHub repo. I think this is it (since it ties back to 'Positive Technologies', which was in the presenter's tagline on the opening slide): Seq2Seq for Web Attack Detection
Presenter: Sebastian Garcia
This was a pretty thorough presentation on the foundations of ML as can be applied to analyzing network security logs. I'm pretty new to the topic so all I can say is that it was well done and that the google doc / jupyter notebook he shared with the class helped me see a step by step progression of how these concepts should be applied.
This is the process he presented for how to tackle it:
- Start with a goal for WHY you want to use Machine Learning (ML). Using it 'just because' is not an objective with staying power
- Select a good LABELED dataset (you need a reliable way to distinguish what data is in the dataset)
- Clean/Preprocess the dataset
- Select features from the dataset (I've learned that features are defining characteristics you can use in making decisions about datapoints)
- Pick a performance metric
- Select a classifier
- Evaluate the performance (I think this relates to how effective the model is, as opposed to how fast it operates)
- Interpret the results
One other note I think I can make here is the importance of CATEGORICAL data. If you are evaluating a bunch of strings, it is hard for an algorithm to compute variance/distance between them. You can use Python tools to convert string data to numbers so that the algorithm has something it can work with.
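For example, the string-to-number conversion can be done two common ways: label encoding gives each category an integer, while one-hot encoding avoids implying a false ordering between categories. A pure-Python sketch (in practice you'd likely reach for pandas or scikit-learn):

```python
# Two ways to turn categorical strings into numbers an algorithm can use.
def label_encode(values):
    """Map each distinct category to an integer (sorted for determinism)."""
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values], mapping

def one_hot(values):
    """One column per category; exactly one 1 per row."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

protocols = ["tcp", "udp", "icmp", "tcp"]
encoded, mapping = label_encode(protocols)
print(encoded)              # [1, 2, 0, 1]  (icmp=0, tcp=1, udp=2)
print(one_hot(protocols))   # [[0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
```

One-hot is usually the safer default for network log fields like protocol names, since "udp > tcp" is meaningless as an ordering.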
Presenters: Anshuman Bhartiya and Glenn ‘devalias’ Grant
- Relevant xkcd
- Which of the OWASP Top 10 Caused the World’s Biggest Data Breaches? [snyk.io]
These guys cover how they made software that automates the busywork of finding targets for bug bounty work. At the time of the presentation, they do not have any plans to open source their tool. While creating the tool they identified issues with existing tooling and favor people enhancing existing tools instead of writing their own.
They propose and have been collaborating to create a JSON based recon data standard to improve interoperability between tools (see the ReconJSON github)
bountyMachine is the name of their in-house tool. It is written in Go and runs on Docker/Kubernetes with Argo. It is queue based and allows chaining worker tasks to create an analysis pipeline while storing results in a database. They run it like a cron job to get up-to-the-minute notification of new targets that are in scope for bug bounties. Their interface is Slack slash commands, so they can interact with their tools wherever they have their phones.
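bountyMachine itself is closed source, so here is only a minimal sketch of the queue-based "chained workers" idea, assuming each stage pulls targets from a queue, does its analysis, and pushes results to the next stage. The stage functions are hypothetical stand-ins for real recon tools.

```python
# Chain worker stages through queues to form an analysis pipeline.
import queue

def run_pipeline(targets, stages):
    """Push every target through each stage in order; stages are plain functions."""
    q = queue.Queue()
    for t in targets:
        q.put(t)
    for stage in stages:
        next_q = queue.Queue()
        while not q.empty():
            next_q.put(stage(q.get()))   # a real system would run these workers concurrently
        q = next_q
    return [q.get() for _ in range(q.qsize())]

# Hypothetical stages standing in for real tools (resolver, port scanner, reporter):
resolve = lambda host: {"host": host, "ip": "203.0.113.10"}
scan    = lambda rec: {**rec, "ports": [80, 443]}
report  = lambda rec: f"{rec['host']} ({rec['ip']}): {rec['ports']}"

results = run_pipeline(["example.com"], [resolve, scan, report])
print(results[0])  # example.com (203.0.113.10): [80, 443]
```

The queue boundary is what makes the design flexible: stages can be swapped, rerun, or scaled independently, which matches their cron-driven "always watching for new in-scope targets" usage.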
What I found interesting is that a decent amount of time during their talk was dedicated to project/people management lessons they learned like:
- Check your ego
- Communicate openly, honestly and thoroughly
- Stay open to new suggestions
- Delegate responsibilities
- Be flexible
- Watch out for assumptions (Real world data should trump assumption)
- Focus on an MVP, have a measurable 'done' that can be used to define this. (Done is better than perfect)
- Never stop, it's hard to get your team going again after a break
Presenter: Micah Hoffman
Micah covers the value of documentation when performing an OSINT assessment. The key point is to document as you go so you don't lose context. Be sure to take actual copies of what you observe so you don't come back later to find a post now marked 'private' or an image or other content removed. Documenting to a HIGH degree of detail while you work makes it easy to come back later and target reports to different audiences (executives, security teams, etc.).
While documenting you need to be sensitive to your approach or things could go wrong. How do you control sensitive data? Classified, proprietary or illegal data? How about data duplication in your report, you may find a shared telephone or address but did you document in a clear enough way that you can make that association?
For larger teams working together on OSINT you need to consider where you store your data and who gets access. How do you coordinate and normalize data? Do you trust something like google docs or do you need another system? What tooling can support multiple team members?
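One lightweight way to put "document as you go" into practice is to snapshot content the moment you see it, with a timestamp and hash, so you still have evidence if the post later goes private or is deleted. This sketch and its field names are my own, not from the talk.

```python
# Record a piece of OSINT evidence with provenance metadata.
import hashlib
import json
from datetime import datetime, timezone

def record_evidence(url, content: bytes, note=""):
    return {
        "url": url,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # shows the copy is unaltered
        "note": note,
        "content": content.decode(errors="replace"),    # keep the actual copy
    }

entry = record_evidence(
    "https://example.com/post/123",
    b"<html>original post text</html>",
    note="Subject's post; may go private later",
)
print(json.dumps(entry, indent=2)[:120])
```

Structured entries like this also help with the team problems above: they normalize the data, so duplicated finds (a shared phone number or address) can be matched across analysts.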
He rounds out the presentation with a list of tool suggestions
- Mind map (good for visual learners, hard to export data to other formats)
- Hunchly (awesome tool, subscription based at $130/yr)
- Cytoscape (visualizer to help find relationships between objects)
- Buscador Linux distro
Micah gave another presentation that I was not able to attend (but wanted to see): Introducing YOGA: Your OSINT Graphical Analyzer
Presenter: Nick Cano
- Peering Inside the PE: A Tour of the Win32 Portable Executable File Format (MSDN)
Nick discovered a fascinating behavior that allowed him to weaponize and hide attacks in executables using relocations. He built a PE rebuilder that automates the process for existing 32-bit Windows PE executables. The tool works on Windows 7 and Windows 10. For Windows 10 he loops and relaunches the executable until he gets the base address he's looking for (about 200 iterations on average in his testing).
To put the approach to the test, he packed known malware and submitted the results to virus total: only 2 AV engines detected it (these two engines were not listed in the presentation).
- You can 'trick' ASLR into giving a consistent base address (sometimes) by using 0x00010000
- No current reversing tools could (at the time of the presentation) make sense of binaries packed with his approach (including: CFF Explorer, Binary Ninja, Hex Rays tools, ResourceHacker, PE Verifier)
Presenter: Nicholas Doiron
- Jupyter notebook (with the mock example described below): https://github.com/georeactor/crypto-geofense
Description (Can't deep link into their google calendar):
How often are apps asking for your location? Lat/lng coordinates reveal a lot about you, but we share them every day with web services to look up our location and find nearby businesses.
Speaker: Nick is a web developer and mapmaker currently at McKinsey & Company's New York City office. Previously he worked at One Laptop per Child, Code for America, and the Museum of Modern Art.
This talk presented an interesting 'what if...' proposition with an invitation to the community to take it to the next level. Even though no specific solution was proposed to the problem of Apple or Google knowing your location while giving you location-derived mapping services, it was a fun jaunt through an intriguing subject. His example centered on a geocaching challenge where, even if an attacker compromised the server that knows where the prize is, they would not be able to pull logs or identify the individuals submitting their GPS coordinates during the competition.
Notes on Homomorphic encryption from the session:
- 1978: First descriptions of partially homomorphic cryptosystems (addition and multiplication)
- 2009: First fully homomorphic cryptosystem, down to the logic gate level (inventors currently work at IBM)
- Why is this not adopted more? 1) People are conditioned to share their data and 2) this is computationally expensive
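The notes above can be made tangible with a toy example of a partially homomorphic system: textbook RSA is multiplicatively homomorphic, meaning Enc(a) * Enc(b) = Enc(a * b), so a server can multiply values it cannot read. (Tiny insecure parameters for illustration only; real deployments add padding, which deliberately breaks this property.)

```python
# Toy textbook RSA showing the multiplicative homomorphism.
p, q, e = 61, 53, 17
n = p * q                           # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent via modular inverse

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n   # server computes this without seeing a or b
print(dec(product_cipher))               # 42
```

A fully homomorphic system (the 2009 result) extends this from a single operation to arbitrary computation over ciphertexts, which is exactly what the geocaching 'what if' needs, at the cost of that heavy computational overhead.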
This talk is a follow-up to a SkyTalk the presenter gave a few years back, where he outlined an approach to staying anonymous while stalking someone on social media like Twitter. The basic approach is to monitor a 'subject' by following their followers and extrapolating the subject's online activity via mentions, geolocation tags, and picture/post sharing.
By not following a target directly, the stalker is less likely to make them suspicious. Now the question is: how do you find the stalker? That's where the antistalkerbot comes into the picture. I gather the tool helps you perform the set of steps he walked through during the presentation:
- Find all your followers
- Find all of THEIR followers
- Get a list of all instances of all followers of followers (In his example he had a million follower instances on his test account)
- Find the unique follower IDs (in his example he had 815,000 unique Twitter accounts)
- Group followers into people who might have similar shared interests (i.e. they follow similar people that you do)
- Sort the followers to find the one(s) that have a suspiciously high follow rate. Is an account following 92% of your followers? This is suspicious
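The steps above boil down to a frequency count over followers-of-followers. A minimal sketch (assumption: the follower lists have already been fetched via the Twitter API; all account names here are made up):

```python
# Flag accounts that follow a suspiciously high fraction of your followers.
from collections import Counter

def suspicious_followers(my_followers, followers_of, threshold=0.9):
    counts = Counter()
    for follower in my_followers:
        counts.update(followers_of.get(follower, []))   # followers of followers
    return {acct: n / len(my_followers)
            for acct, n in counts.items()
            if n / len(my_followers) >= threshold}

my_followers = ["alice", "bob", "carol", "dave"]
followers_of = {
    "alice": ["stalker", "eve"],
    "bob":   ["stalker"],
    "carol": ["stalker", "frank"],
    "dave":  ["stalker", "eve"],
}
print(suspicious_followers(my_followers, followers_of))  # {'stalker': 1.0}
```

"eve" follows half your followers and stays under the threshold; an account following 90%+ of them stands out the way the presenter described.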
He mentioned that after giving the SkyTalk there were people asking how to be more effective at hiding/cyberstalking and this was discomforting. That's one reason why he released Antistalker Bot, to try and help potential victims of stalking be aware of who could be stalking them.
Presenter: Christian Paquin
Reference: openquantumsafe.org (test versions of OpenSSL and OpenSSH with post-quantum crypto enabled)
Description (Can't deep link into their google calendar):
Quantum computers pose a grave threat to the public-key cryptography we use today. Many quantum-safe alternatives have been proposed to alleviate this problem. None of these, however, provide a perfect replacement for our conventional algorithms. Indeed, they either result in increased bandwidth, bigger keys, and/or slower runtime, thus greatly impacting their integration into crypto applications.
In this talk, I’ll give an overview of the emerging post-quantum cryptography (PQC) schemes. I’ll then present the lessons we have learned from our prototype integrations into real-life protocols and applications (such as TLS, SSH, and VPN), and our experiments on a variety of devices, ranging from IoT devices, to cloud servers, to HSMs. I’ll discuss the Open Quantum Safe project for PQC development, and related open-source forks of OpenSSL, OpenSSH, and OpenVPN that can be used to experiment with PQC today. I’ll present a demo of a full (key exchange + authentication) PQC TLS 1.3 connection.
This work sheds lights on the practicality of PQC, encouraging early adoption and experimentation by the security community.
I am a crypto specialist in MSR’s Security and Cryptography team . I’m currently involved in projects related to post-quantum cryptography, such as the Open Quantum Safe project , and leading the development of the U-Prove technology . I’m also interested in privacy-enhancing technologies, smart cloud encryption (e.g., searchable and homomorphic encryption), and the intersection of AI and security.
Prior to joining Microsoft in 2008, I was the Chief Security Engineer at Credentica, a crypto developer at Silanis Technology working on digital signature systems, and a security engineer at Zero-Knowledge Systems working on TOR-like systems.
- https://www.microsoft.com/en-us/research/group/security-and-cryptography/
- https://github.com/open-quantum-safe
- https://microsoft.com/uprove
I'll be brief since the summary block is so long. In the 1990's a couple of algorithms were discovered that (when coupled with a quantum computer) can break our encryption systems:
- Shor's algorithm (1994): solves factoring and discrete log problems in polynomial time (breaks RSA, DSA, DH, some EC variants)
- Grover's algorithm (1996): speeds up database search and function inversion, which improves brute-forcing hash functions (like SHA) and block ciphers (like AES). Fortunately the fix here is to just double key sizes (AES-128 -> AES-256)
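The "just double key sizes" remedy falls out of a back-of-the-envelope calculation: Grover reduces a brute-force search over 2^k keys to roughly sqrt(2^k) = 2^(k/2) operations.

```python
# Grover's quadratic speedup halves the effective bit strength of a key.
def grover_effective_bits(key_bits):
    return key_bits / 2

print(grover_effective_bits(128))  # 64.0  -> AES-128 drops to ~64-bit security
print(grover_effective_bits(256))  # 128.0 -> AES-256 keeps ~128-bit security
```

Shor's algorithm offers no such cheap fix, which is why public-key algorithms need to be replaced outright rather than just resized.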
The presenter indicates that some day there will be a quantum computer, maybe sooner than you think. If you want your secrets to survive undiscovered it makes sense to implement post-quantum cryptography (crypto that is resistant against attacks fielded by malicious actors with quantum computers) soon. He is actively engaged in the effort to standardize post-quantum crypto and has prototype code available for OpenSSL.
NIST has a competition related to post-quantum crypto, looking for signature algorithms and encryption/key-establishment schemes (so far there are 64 submissions: 19 signatures and 45 encryption). There are a few families of cryptography believed to withstand quantum computing that form the basis of these submissions:
- Lattice based systems (NTRU, 1996 and LWE, 2005)
- Code-based systems (built on error-correcting codes [McEliece, Niederreiter])
- Multivariate systems (1990's, polynomials over finite fields)
- Hash-based systems (signatures from hash functions [Lamport, Merkle] -- these are as old as public-key crypto; LMS, XMSS)
- Others (SIDH/SIKE [isogenies on elliptic curves] and Picnic [symmetric ciphers and zero-knowledge proofs])
There is an effort to standardize protocols now so that when quantum computing becomes viable we can essentially flip a switch to protect ourselves.
Presenters: Yaniv Balmas and Eyal Itkin
I was impressed by the great lengths these guys went to in order to find a way to infiltrate a network by sending a malicious Fax. Many organizations use Fax technology today, and some governments/banks/healthcare entities either require or streamline processes to happen over fax lines.
Some interesting notes about Fax:
- Black and white faxes use a variation on the TIFF image format (minus the headers which are reconstructed by the receiver)
- Color faxes rely on JPEGs
- Color fax was introduced in the early 2000's
- Apparently over 90% of people in Japan make regular use of fax technology.
Their goal was to attack an all-in-one printer/scanner/fax machine to demonstrate the feasibility of the concept, and they ended up targeting an HP system. They went through an elaborate process of physically dismantling the unit and decompiling firmware (ultimately found on HP's public FTP server), and along the way they built their own debugger and tooling to decipher and navigate this tricky realm (see their GitHub repo here).
The presentation was thorough and guided the audience through the chain of events that led to them owning the all-in-one printer/fax unit they were targeting. Once they made it into the operating system they found out-of-date open source libraries and an interesting/challenging OS to work with (hence the creation of their debugger). After poking around they settled on attacking the JPEG parser and were able to do a live demo showing the fax machine receiving a fax and calling home over the network. Pretty impressive work!