
Exploit Hacks 2-Factor Authentication

"WARNING: This software cannot work without Let's Encrypt! This is a new generation of the SSL-strip phishing attack. I will keep developing this tool, because soon, maybe very soon, Let's Encrypt may have a problem with it. The situation is this: first you must register a domain similar to the one you want to attack, and then request an SSL certificate for it from Let's Encrypt, using your real IP address. That makes you traceable - where you are and what you send to the "victim" can be tracked, which is not good for you if you are an attacker :D So this tool is 100% unusable unless you can use a different real IP, domain, and SSL certificate every single time :). Doing all of that just to steal someone's credentials would be stupid :)

BR, V.Varbanovski (@nu11secur1ty) - System Administrator - Infrastructure Engineer!"

Explained by Kevin Mitnick

Explained by Kuba Gretzky

How to protect yourself?

There is one major flaw in this phishing technique that anyone can and should exploit to protect themselves: the attacker must register their own domain. By registering a domain, the attacker will try to make it look as similar to the real, legitimate domain as possible. For example, if the attacker is targeting Facebook (real domain facebook.com), they can register a domain like faceboook.com or faceb00k.com, maximizing the chance that phished victims won't spot the difference in the browser's address bar. That said, always check the legitimacy of the website's base domain, visible in the address bar, if it asks you to provide any private information. By base domain I mean the one that precedes the top-level domain.

As an example, imagine this is the URL of a website you arrived at, and it asks you to log into Facebook:


The top-level domain is .com and the base domain is the preceding word, with the next . as a separator. Combined with the TLD, that would be faceboook.com. When you verify that faceboook.com is not the real facebook.com, you will know that someone is trying to phish you.

As a side note - the green lock icon seen next to the URL in the browser's address bar does not mean that you are safe! It only means that the website you've arrived at encrypts the transmission between you and the server, so that no one can eavesdrop on your communication. Attackers can easily obtain SSL/TLS certificates for their phishing sites and display the green lock icon as well, giving you a false sense of security.

Figuring out whether the base domain you see is valid is not always easy and leaves room for error. It became even harder with the support of Unicode characters in domain names. This made it possible for attackers to register domains with special characters (e.g. in Cyrillic) that are lookalikes of their Latin counterparts. This technique received the name of a homograph attack. As a quick example, an attacker could register the domain facebooĸ.com, which looks pretty convincing even though it is a completely different domain name (ĸ is not really k). It got even worse with other Cyrillic characters, allowing for ebаy.com vs ebay.com - the first one contains a Cyrillic character that looks exactly like its Latin counterpart. Major browsers were fast to address the problem and added special filters that prevent domain names from being displayed in Unicode when suspicious characters are detected. If you are interested in how this works, check out the IDN spoofing filter source code of the Chrome browser.

Now you see that verifying domains visually is not always the best solution, especially for big companies, where it often takes just one phished employee to let attackers steal vast amounts of data.
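The homograph problem is easy to demonstrate in a few lines of Python (a quick illustration, not part of Evilginx): two visually near-identical strings compare as unequal, and internationalized labels travel over the wire in Punycode, which is also the form browsers fall back to displaying when they suspect spoofing.

```python
# Two visually similar domain labels: the second contains
# Cyrillic "а" (U+0430) instead of Latin "a" (U+0061).
latin = "ebay"
lookalike = "eb\u0430y"

# They render almost identically but are different strings.
print(latin == lookalike)                       # False
print([hex(ord(c)) for c in lookalike])         # exposes U+0430

# Internationalized labels are transmitted in Punycode; the
# encoding is lossless, so the lookalike survives a round trip
# while remaining distinct from the Latin-only label.
encoded = lookalike.encode("punycode")
print(encoded)
print(encoded.decode("punycode") == lookalike)  # True
```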
This is why the FIDO Alliance introduced U2F (Universal 2nd Factor) to allow for unphishable 2nd factor authentication. In short, you have a physical hardware key on which you just press a button when the website asks you to. Additionally, it may ask you for your account password or a complementary 4-digit PIN. The website talks directly with the hardware key plugged into your USB port, with the web browser acting as the channel for the communication.

What is different about this form of authentication is that the U2F protocol is designed to take the website's domain as one of the key components in negotiating the handshake. This means that if the domain in the browser's address bar does not match the domain used in the data transmission between the website and the U2F device, the communication will simply fail. This solution leaves no room for error and is totally unphishable using the Evilginx method. Citing Yubico, a vendor of U2F devices (who co-developed U2F with Google):

"With the YubiKey, user login is bound to the origin, meaning that only the real site can authenticate with the key. The authentication will fail on the fake site even if the user was fooled into thinking it was real. This greatly mitigates against the increasing volume and sophistication of phishing attacks and stops account takeovers."

It is important to note here that Markus Vervier (@marver) and Michele Orrù (@antisnatchor) did demonstrate a technique for attacking U2F devices using the newly implemented WebUSB feature in modern browsers (which allows websites to talk to USB-connected devices). It is also important to mention that Yubico, the creator of the popular YubiKey U2F devices, tried to steal credit for their research, which they later apologized for. You can find the list of all websites supporting U2F authentication here.

Coinciding with the release of Evilginx 2, WebAuthn is coming out in all major web browsers. It will introduce the new FIDO2 password-less authentication standard to every browser. Chrome, Firefox, and Edge are about to receive full support for it.

To wrap up: if you often need to log into various services, make your life easier and get a U2F device! This will greatly improve your accounts' security.
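The origin binding can be illustrated with a toy sketch. To be clear about the assumptions: real U2F uses per-site ECDSA key pairs kept in hardware, not a shared HMAC secret, and the real message format includes counters and key handles; this simplified stand-in only shows why a response produced for a phishing origin can never verify against the legitimate one.

```python
import hashlib
import hmac
import os

# Hypothetical device secret, standing in for the key pair a
# real U2F token keeps in secure hardware (simplification!).
DEVICE_SECRET = os.urandom(32)

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    """The 'device' signs a digest that mixes in the browser-reported origin."""
    digest = hashlib.sha256(origin.encode() + challenge).digest()
    return hmac.new(DEVICE_SECRET, digest, hashlib.sha256).digest()

def verify_assertion(expected_origin: str, challenge: bytes, sig: bytes) -> bool:
    """The website verifies against the origin it knows it is served from."""
    digest = hashlib.sha256(expected_origin.encode() + challenge).digest()
    expected = hmac.new(DEVICE_SECRET, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

challenge = os.urandom(16)

# Browser on the real site: origins match, verification succeeds.
good = sign_assertion("https://facebook.com", challenge)
print(verify_assertion("https://facebook.com", challenge, good))  # True

# Browser on the phishing site reports the phishing origin, so
# the signature cannot match the legitimate origin - even though
# Evilginx relays every byte of the exchange faithfully.
bad = sign_assertion("https://faceboook.com", challenge)
print(verify_assertion("https://facebook.com", challenge, bad))   # False
```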
Under the hood

Interception of HTTP packets is possible since Evilginx acts as an HTTP server talking to the victim's browser and, at the same time, as an HTTP client for the website the data is relayed to. To make this possible, the victim has to contact the Evilginx server through a custom phishing URL that points to it. Simply forwarding packets from the victim to the destination website would not work well, which is why Evilginx has to make some on-the-fly modifications. For the phishing experience to be seamless, the proxy overcomes the following obstacles:

1. Making sure that the victim is not redirected to the phished website's true domain. Since the phishing domain will differ from the legitimate domain used by the phished website, relayed scripts and HTML data have to be carefully modified to prevent unwanted redirection of the victim's web browser. There will be HTML submit forms pointing to legitimate URLs, scripts making AJAX requests, and JSON objects containing URLs. Ideally, the most reliable way to solve this would be to perform a regular-expression string substitution for any occurrence of https://legit-site.com, replacing it with https://our-phishing-site.com. Unfortunately this is not always enough, and it requires some trial-and-error kung-fu, working with the web inspector to track down all the strings the proxy needs to replace without breaking the website's functionality. If the target website offers multiple options for 2FA, each route has to be inspected and analyzed. For example, there are JSON objects transporting escaped URLs like https:\/\/legit-site.com. You can see that this will definitely not trigger the regexp mentioned above, while replacing all occurrences of legit-site.com may break something by accident.

2. Responding to DNS requests for multiple subdomains. Websites will often make requests to multiple subdomains under their official domain, or even use a totally different domain.
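The substitution problem from point 1 can be sketched in Python (a simplified illustration; Evilginx itself is written in Go and its real rewriting rules are configurable): a single pattern has to catch both the plain and the JSON-escaped form of the URL, preserving whichever slash style it finds.

```python
import re

LEGIT = "legit-site.com"
PHISH = "our-phishing-site.com"

# Match "https:" followed by either "//" or the JSON-escaped
# "\/\/" before the legitimate hostname.
url_re = re.compile(r"https:(\\?/\\?/)" + re.escape(LEGIT))

def rewrite(body: str) -> str:
    # Keep whatever slash form was matched; swap only the host.
    return url_re.sub(lambda m: "https:" + m.group(1) + PHISH, body)

html = '<form action="https://legit-site.com/login">'
js = '{"next":"https:\\/\\/legit-site.com\\/home"}'

print(rewrite(html))  # the form action now points at the phishing host
print(rewrite(js))    # the escaped JSON URL is rewritten too
```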
In order to proxy these transmissions, Evilginx has to map each of the custom subdomains to its own IP address. The previous version of Evilginx required the user to set up their own DNS server (e.g. bind) and configure DNS zones to properly handle DNS A requests. This generated a lot of headaches on the user's part and was only easier if the hosting provider (like Digital Ocean) offered an easy-to-use admin panel for setting up DNS zones. With Evilginx 2 this issue is gone: Evilginx now runs its own built-in DNS server, listening on port 53, which acts as a nameserver for your domain. All you need to do is set the nameserver addresses for your domain (ns1.yourdomain.com and ns2.yourdomain.com) to point to your Evilginx server IP in the admin panel of your domain hosting provider. Evilginx will handle the rest on its own.

3. Modification of various HTTP headers. Evilginx modifies HTTP headers sent to and received from the destination website. In particular, the Origin header in AJAX requests will always hold the URL of the requesting site in order to comply with CORS, so requests made from the phishing site carry a phishing URL as the origin. If the request were forwarded unchanged, the destination website would see an invalid origin and refuse to respond, and leaving the phishing hostname in the request would also make it easy for the website to notice suspicious behavior. Evilginx therefore changes the Origin and Referer fields on the fly to their legitimate counterparts. In the same way, to avoid any conflicts with CORS from the other side, Evilginx sets the Access-Control-Allow-Origin header value to * (if it exists in the response) and removes any occurrence of Content-Security-Policy headers. This guarantees that no AJAX request will be restricted by the browser. Another header to modify is Location, which is set in HTTP 301 and 302 responses to redirect the browser to a different location.
Naturally, its value will contain the legitimate website URL, and Evilginx makes sure it is properly switched to the corresponding phishing hostname.

4. Cookie filtering. It is common for websites to manage cookies for various purposes. Each cookie is assigned to a specific domain, and the web browser's task is to automatically send the stored cookie with every request to the domain the cookie was assigned to. Cookies are also sent as HTTP headers, but I decided to give them a separate mention here due to their importance. An example cookie sent from the website to the client's web browser would look like this:

   Set-Cookie: qwerty=219ffwef9w0f; Domain=legit-site.com; Path=/; Expires=Wed, 30 Aug 2019 00:00:00 GMT

As you can see, the cookie will be set in the client's web browser for the legit-site.com domain. Since the phishing victim is only talking to the phishing website at domain our-phishing-site.com, such a cookie will never be saved in the browser, because the cookie domain differs from the one the browser is communicating with. Evilginx parses every occurrence of Set-Cookie in HTTP response headers and modifies the domain, replacing it with the phishing one, as follows:

   Set-Cookie: qwerty=219ffwef9w0f; Domain=our-phishing-site.com; Path=/; 
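
The header and cookie rewrites from points 3 and 4 can be sketched together in Python (a simplified stand-in for what the proxy does: Evilginx itself is written in Go, real responses can carry several Set-Cookie headers, and real cookies have more attributes than this toy regex handles):

```python
import re

LEGIT = "legit-site.com"
PHISH = "our-phishing-site.com"

def rewrite_response_headers(headers: dict) -> dict:
    """Rewrite proxied response headers before relaying them to the victim."""
    out = {}
    for name, value in headers.items():
        if name == "Content-Security-Policy":
            continue  # drop CSP entirely so rewritten content is not blocked
        if name == "Access-Control-Allow-Origin":
            value = "*"  # avoid CORS conflicts on the phishing origin
        elif name == "Location":
            # Keep redirects pointed at the phishing hostname.
            value = value.replace(LEGIT, PHISH)
        elif name == "Set-Cookie":
            # Re-point the cookie's domain and strip its expiration date.
            value = re.sub(r"Domain=[^;]+", "Domain=" + PHISH, value)
            value = re.sub(r"\s*Expires=[^;]+;?", "", value)
        out[name] = value
    return out

resp = {
    "Location": "https://legit-site.com/home",
    "Access-Control-Allow-Origin": "https://legit-site.com",
    "Content-Security-Policy": "default-src 'self'",
    "Set-Cookie": "qwerty=219ffwef9w0f; Domain=legit-site.com; Path=/; "
                  "Expires=Wed, 30 Aug 2019 00:00:00 GMT",
}
print(rewrite_response_headers(resp))
```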

Evilginx will also remove the expiration date from cookies, unless the expiration date indicates that the cookie should be deleted from the browser's cache. Evilginx additionally sends its own cookies to manage the victim's session; these are filtered out of every HTTP request to prevent them from being sent to the destination website.

5. SSL splitting. As the whole world-wide web migrates to serving pages over secure HTTPS connections, phishing pages can't lag behind. Whenever you pick a hostname for your phishing page (e.g. totally.not.fake.linkedin.our-phishing-domain.com), Evilginx will automatically obtain a valid SSL/TLS certificate from Let's Encrypt, providing responses to ACME challenges using the built-in HTTP server. This makes sure that victims will always see a green lock icon next to the URL in the address bar when visiting the phishing page, comforting them that everything is secured using "military-grade" encryption!

6. Anti-phishing tricks. There are rare cases where websites employ defenses against being proxied. One such defense I uncovered during testing is using JavaScript to check whether window.location contains the legitimate domain. These detections may be easy or hard to spot, and they are much harder to remove if additional code obfuscation is involved.

Improvements

The greatest advantage of Evilginx 2 is that it is now a standalone console application. There is no need to compile and install a custom version of nginx, which I admit was not a simple feat. I am sure that using nginx site configs with the proxy_pass feature for phishing purposes was not what the HTTP server's developers had in mind when developing the software.

Evilginx 1 was pretty much a combination of several dirty hacks duct-taped together. Nonetheless, it somehow worked! In addition to the fully responsive console UI, here are the greatest improvements:

Tokenized phishing URLs

In the previous version of Evilginx, entering just the hostname of your phishing URL in the browser, with the root path (e.g. https://totally.not.fake.linkedin.our-phishing-domain.com/), would still proxy the connection to the legitimate website. This turned out to be an issue, as I found out during the development of Evilginx 2. Apparently, once you obtain SSL/TLS certificates for the domain/hostname of your choice, external scanners start scanning your domain. Scanners gonna scan. The scanners use public certificate transparency logs to monitor, in real time, all domains that have obtained valid SSL/TLS certificates. With public libraries like CertStream, you can easily create your own scanner.

For some phishing pages, it usually took about one hour for the hostname to get banned and blacklisted by popular anti-spam filters like Spamhaus. After I had three hostnames blacklisted for one domain, the whole domain got blocked. Three strikes and you're out! I began thinking about how such detection could be evaded. The easiest solution was to reply with a faked response to every request for the path /, but that would not work if scanners probed any other path. Then I decided that each phishing URL generated by Evilginx should come with a unique token in the URL as a GET parameter. For example, Evilginx responds with a redirection when a scanner makes a request to the URL:


But it responds with the proxied phishing page instead when the URL is properly tokenized with a valid token:


When a tokenized URL is opened, Evilginx sets a validation cookie in the victim's browser, whitelisting all subsequent requests, even non-tokenized ones. This works very well, but there is still a risk that scanners will eventually scan tokenized phishing URLs once these get out into the interwebz.
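The token gate described above can be sketched like this (a simplified illustration with a hypothetical parameter name `tk` and cookie name `vid`; the actual token format and names Evilginx uses are not specified here):

```python
import secrets
from urllib.parse import parse_qs, urlsplit

VALID_TOKENS = set()
SESSION_COOKIE = "vid"  # hypothetical validation-cookie name

def new_phish_url(base: str) -> str:
    """Generate a phishing URL carrying a fresh unique token."""
    token = secrets.token_urlsafe(16)
    VALID_TOKENS.add(token)
    return f"{base}/?tk={token}"

def handle_request(url: str, cookies: dict) -> str:
    """Decide whether to serve the proxied phishing page or play dumb."""
    if cookies.get(SESSION_COOKIE) == "ok":
        return "proxy"  # already whitelisted by the validation cookie
    token = parse_qs(urlsplit(url).query).get("tk", [None])[0]
    if token in VALID_TOKENS:
        cookies[SESSION_COOKIE] = "ok"  # whitelist subsequent requests
        return "proxy"
    return "redirect"  # scanner or stray visitor: redirect away

base = "https://totally.not.fake.linkedin.our-phishing-domain.com"
url = new_phish_url(base)
jar = {}  # the victim browser's cookie jar

print(handle_request(base + "/", jar))  # redirect (no token, no cookie)
print(handle_request(url, jar))         # proxy    (valid token; sets cookie)
print(handle_request(base + "/", jar))  # proxy    (validation cookie present)
```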

