




Saturday, 30 January 2016

Kali Linux, Rolling Edition Released – 2016.1




Our First Release of Kali-Rolling (2016.1)

Today marks an important milestone for us with the first public release of our Kali Linux rolling distribution. Kali switched to a rolling release model back when we hit version 2.0 (codename “sana”); however, the rolling release was only available via an upgrade from 2.0 to kali-rolling for a select brave group. After 5 months of testing our rolling distribution (and its supporting infrastructure), we’re confident in its reliability – giving our users the best of all worlds: the stability of Debian, together with the latest versions of the many outstanding penetration testing tools created and shared by the information security community.
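For installations already on 2.0 (sana), the move to the rolling branch described above was done by pointing apt at the kali-rolling repository and dist-upgrading. A minimal sketch of that upgrade path, run as root (double-check the repository line against the official Kali documentation before overwriting your sources.list):

 # Point apt at the kali-rolling repository, then upgrade the whole system
 echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" > /etc/apt/sources.list
 apt-get update && apt-get dist-upgrade -y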





Thursday, 28 January 2016

Kerberos Protocol






Installation on RedHat
Kerberos packages may be installed by default, but make sure that the appropriate packages are installed for the Kerberos server or client being configured.

To install packages for a Kerberos server:
 # yum install krb5-server krb5-libs krb5-auth-dialog  

To install packages for a Kerberos client:
 # yum install krb5-workstation krb5-libs krb5-auth-dialog  

If the Red Hat Enterprise Linux system will use Kerberos as part of single sign-on with smart cards, then also install the required PKI/OpenSSL package:
 # yum install krb5-pkinit-openssl  


Configuring a Kerberos 5 Server
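In outline, turning the machine with the server packages above into a working KDC comes down to defining the realm, creating the Kerberos database, adding an administrative principal, and starting the services. A rough sketch, assuming the realm EXAMPLE.COM, the stock MIT Kerberos file locations, and RHEL/CentOS 6 style service commands (use systemctl on RHEL 7):

 # Define the realm and domain-to-realm mappings in /etc/krb5.conf and
 # /var/kerberos/krb5kdc/kdc.conf (replace EXAMPLE.COM / example.com with your own).
 # Then create the KDC database and a stash file for the master key:
 kdb5_util create -s -r EXAMPLE.COM
 # Grant admin rights in /var/kerberos/krb5kdc/kadm5.acl and add an admin principal:
 kadmin.local -q "addprinc admin/admin"
 # Start the KDC and the kadmin service:
 service krb5kdc start
 service kadmin start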






6.858 Fall 2014 Lecture for Kerberos by nu11secur1ty

Saturday, 16 January 2016

Installing ELK on CentOS 6 and 7 (note: CentOS 6 requires your own modifications)




SCHEME OF VISUALIZATION



Setting Up an Advanced Logstash Pipeline

A Logstash pipeline in most use cases has one or more input, filter, and output plugins. The scenarios in this section build Logstash configuration files to specify these plugins and discuss what each plugin is doing. The Logstash configuration file defines your Logstash pipeline. When you start a Logstash instance, use the -f option to specify the configuration file that defines that instance’s pipeline. A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins modify the data as you specify, and the output plugins write the data to a destination.


The following text represents the skeleton of a configuration pipeline:
 # The # character at the beginning of a line indicates a comment. Use  
 # comments to describe your configuration.  
 input {  
 }  
 # The filter part of this file is commented out to indicate that it is  
 # optional.  
 # filter {  
 #  
 # }  
 output {  
 }  

This skeleton is non-functional, because the input and output sections don’t have any valid options defined. The examples in this tutorial build configuration files to address specific use cases. Paste the skeleton into a file named first-pipeline.conf in your home Logstash directory.

Parsing Apache Logs into Elasticsearch

This example creates a Logstash pipeline that takes Apache web logs as input, parses those logs to create specific, named fields from the logs, and writes the parsed data to an Elasticsearch cluster. You can download the sample data set used in this example here. Unpack this file.

Configuring Logstash for File Input

To start your Logstash pipeline, configure the Logstash instance to read from a file using the file input plugin. Edit the first-pipeline.conf file to add the following text:
 input {  
   file {  
     path => "/path/to/logstash-tutorial.log"  
     start_position => beginning   
   }  
 }  

The default behavior of the file input plugin is to monitor a file for new information, in a manner similar to the UNIX tail -f command. To change this default behavior and process the entire file, we need to specify the position where Logstash starts processing the file. Replace /path/to/ with the actual path to the location of logstash-tutorial.log in your file system.

Parsing Web Logs with the Grok Filter Plugin

The grok filter plugin is one of several plugins that are available by default in Logstash. For details on how to manage Logstash plugins, see the reference documentation for the plugin manager. Because the grok filter plugin looks for patterns in the incoming log data, configuration requires you to make decisions about how to identify the patterns that are of interest to your use case. A representative line from the web server log sample looks like this:
 83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /presentations/logstash-monitorama-2013/images/kibana-search.png  
 HTTP/1.1" 200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel  
 Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"  

The IP address at the beginning of the line is easy to identify, as is the timestamp in brackets. In this tutorial, use the %{COMBINEDAPACHELOG} grok pattern, which structures lines from the Apache log into named fields such as clientip, ident, auth, timestamp, verb, request, httpversion, response, bytes, referrer, and agent (see the JSON representation below).
Edit the first-pipeline.conf file to add the following text:
 filter {  
   grok {  
     match => { "message" => "%{COMBINEDAPACHELOG}"}  
   }  
 }  

After processing, the sample line has the following JSON representation:
 {  
 "clientip" : "83.149.9.216",  
 "ident" : ,  
 "auth" : ,  
 "timestamp" : "04/Jan/2015:05:13:42 +0000",  
 "verb" : "GET",  
 "request" : "/presentations/logstash-monitorama-2013/images/kibana-search.png",  
 "httpversion" : "HTTP/1.1",  
 "response" : "200",  
 "bytes" : "203023",  
 "referrer" : "http://semicomplete.com/presentations/logstash-monitorama-2013/",  
 "agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"  
 }  

Indexing Parsed Data into Elasticsearch

Now that the web logs are broken down into specific fields, the Logstash pipeline can index the data into an Elasticsearch cluster. Edit the first-pipeline.conf file to add the following text after the input section:
 output {  
   elasticsearch {  
   }  
 }  

With this configuration, Logstash uses the HTTP protocol to connect to Elasticsearch. The example above assumes that Logstash and Elasticsearch are running on the same instance. You can specify a remote Elasticsearch instance with the hosts option, for example hosts => "es-machine:9092".

Enhancing Your Data with the Geoip Filter Plugin

In addition to parsing log data for better searches, filter plugins can derive supplementary information from existing data. As an example, the geoip plugin looks up IP addresses, derives geographic location information from the addresses, and adds that location information to the logs. Configure your Logstash instance to use the geoip filter plugin by adding the following lines to the filter section of the first-pipeline.conf file:
 geoip {  
   source => "clientip"  
 }  

The geoip plugin configuration requires data that is already defined as separate fields. Make sure that the geoip section is after the grok section of the configuration file. Specify the name of the field that contains the IP address to look up. In this tutorial, the field name is clientip.

Testing Your Initial Pipeline

At this point, your first-pipeline.conf file has input, filter, and output sections properly configured, and looks like this:
 input {  
   file {  
     path => "/Users/palecur/logstash-1.5.2/logstash-tutorial-dataset"  
     start_position => beginning  
   }  
 }  
 filter {  
   grok {  
     match => { "message" => "%{COMBINEDAPACHELOG}"}  
   }  
   geoip {  
     source => "clientip"  
   }  
 }  
 output {  
   elasticsearch {}  
   stdout {}  
 }  

To verify your configuration, run the following command:
 bin/logstash -f first-pipeline.conf --configtest  

The --configtest option parses your configuration file and reports any errors. When the configuration file passes the configuration test, start Logstash with the following command:
 bin/logstash -f first-pipeline.conf  

Try a test query to Elasticsearch based on the fields created by the grok filter plugin:
 curl -XGET 'localhost:9200/logstash-$DATE/_search?q=response=200'  

Replace $DATE with the current date, in YYYY.MM.DD format.
Since our sample has just one 200 HTTP response, we get one hit back:
 {"took":2,  
 "timed_out":false,  
 "_shards":{"total":5,  
  "successful":5,  
  "failed":0},  
 "hits":{"total":1,  
  "max_score":1.5351382,  
  "hits":[{"_index":"logstash-2015.07.30",  
   "_type":"logs",  
   "_id":"AU7gqOky1um3U6ZomFaF",  
   "_score":1.5351382,  
   "_source":{"message":"83.149.9.216 - - [04/Jan/2015:05:13:45 +0000] \"GET /presentations/logstash-monitorama-2013/images/frontend-response-codes.png HTTP/1.1\" 200 52878 \"http://semicomplete.com/presentations/logstash-monitorama-2013/\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36\"",  
    "@version":"1",  
    "@timestamp":"2015-07-30T20:30:41.265Z",  
    "host":"localhost",  
    "path":"/path/to/logstash-tutorial-dataset",  
    "clientip":"83.149.9.216",  
    "ident":"-",  
    "auth":"-",  
    "timestamp":"04/Jan/2015:05:13:45 +0000",  
    "verb":"GET",  
    "request":"/presentations/logstash-monitorama-2013/images/frontend-response-codes.png",  
    "httpversion":"1.1",  
    "response":"200",  
    "bytes":"52878",  
    "referrer":"\"http://semicomplete.com/presentations/logstash-monitorama-2013/\"",  
    "agent":"\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36\""  
    }  
   }]  
  }  
 }  

Try another search for the geographic information derived from the IP address:
 curl -XGET 'localhost:9200/logstash-$DATE/_search?q=geoip.city_name=Buffalo'  

Replace $DATE with the current date, in YYYY.MM.DD format.
Only one of the log entries comes from Buffalo, so the query produces a single response:
 {"took":3,  
 "timed_out":false,  
 "_shards":{  
  "total":5,  
  "successful":5,  
  "failed":0},  
 "hits":{"total":1,  
  "max_score":1.03399,  
  "hits":[{"_index":"logstash-2015.07.31",  
   "_type":"logs",  
   "_id":"AU7mK3CVSiMeBsJ0b_EP",  
   "_score":1.03399,  
   "_source":{  
    "message":"108.174.55.234 - - [04/Jan/2015:05:27:45 +0000] \"GET /?flav=rss20 HTTP/1.1\" 200 29941 \"-\" \"-\"",  
    "@version":"1",  
    "@timestamp":"2015-07-31T22:11:22.347Z",  
    "host":"localhost",  
    "path":"/path/to/logstash-tutorial-dataset",  
    "clientip":"108.174.55.234",  
    "ident":"-",  
    "auth":"-",  
    "timestamp":"04/Jan/2015:05:27:45 +0000",  
    "verb":"GET",  
    "request":"/?flav=rss20",  
    "httpversion":"1.1",  
    "response":"200",  
    "bytes":"29941",  
    "referrer":"\"-\"",  
    "agent":"\"-\"",  
    "geoip":{  
     "ip":"108.174.55.234",  
     "country_code2":"US",  
     "country_code3":"USA",  
     "country_name":"United States",  
     "continent_code":"NA",  
     "region_name":"NY",  
     "city_name":"Buffalo",  
     "postal_code":"14221",  
     "latitude":42.9864,  
     "longitude":-78.7279,  
     "dma_code":514,  
     "area_code":716,  
     "timezone":"America/New_York",  
     "real_region_name":"New York",  
     "location":[-78.7279,42.9864]  
    }  
   }  
  }]  
  }  
 }  

Multiple Input and Output Plugins

The information you need to manage often comes from several disparate sources, and use cases can require multiple destinations for your data. Your Logstash pipeline can use multiple input and output plugins to handle these requirements. This example creates a Logstash pipeline that takes input from a Twitter feed and the Filebeat client, then sends the information to an Elasticsearch cluster as well as writing it directly to a file.

Reading from a Twitter Feed

To add a Twitter feed, you need several pieces of information: a consumer key, which uniquely identifies your Twitter app (Logstash in this case); a consumer secret, which serves as the password for your Twitter app; one or more keywords to search for in the incoming feed; an OAuth token, which identifies the Twitter account using this app; and an OAuth token secret, which serves as the password of that Twitter account. Visit https://dev.twitter.com/apps to set up a Twitter account and generate your consumer key and secret, as well as your OAuth token and secret. Use this information to add the following lines to the input section of the first-pipeline.conf file:
 twitter {  
   consumer_key =>  
   consumer_secret =>  
   keywords =>  
   oauth_token =>  
   oauth_token_secret =>  
 }  

The Filebeat Client

The Filebeat client is a lightweight, resource-friendly tool that collects logs from files on the server and forwards these logs to your Logstash instance for processing. The Filebeat client uses the Beats protocol to communicate with your Logstash instance. The Beats protocol is designed for reliability and low latency. Filebeat uses the computing resources of the machine hosting the source data, and the Beats input plugin minimizes the resource demands on the Logstash instance.
 Note  
 In a typical use case, Filebeat runs on a separate machine from the machine running your Logstash instance. For the purposes of this tutorial, Logstash and Filebeat are running on the same machine.  

The default Logstash installation includes the Beats input plugin, which is designed to be resource-friendly. To install Filebeat on your data source machine, download the appropriate package from the Filebeat product page. Create a configuration file for Filebeat similar to the following example:
 filebeat:
   prospectors:
     -
       # Path to the file or files that Filebeat processes.
       paths:
         - "/path/to/sample-log"
       fields:
         type: syslog
 output:
   # Ship events to the Logstash Beats input on port 5043 (not directly to Elasticsearch).
   logstash:
     hosts: ["localhost:5043"]
     timeout: 15
     tls:
       # Certificate and key for this Filebeat instance.
       certificate: /path/to/ssl-certificate.crt
       certificate_key: /path/to/ssl-certificate.key
       # Certificate used to verify the Logstash instance.
       certificate_authorities: ["/path/to/ssl-certificate.crt"]

Save this configuration file as filebeat.yml. Configure your Logstash instance to use the Filebeat input plugin by adding the following lines to the input section of the first-pipeline.conf file:
 beats {  
   port => "5043"  
   ssl => true  
   ssl_certificate => "/path/to/ssl-cert"   
   ssl_key => "/path/to/ssl-key"   
 }  

The ssl_certificate option is the path to the SSL certificate that the Logstash instance uses to authenticate itself to Filebeat, and ssl_key is the path to the key for that certificate.
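The tutorial assumes that the certificate and key already exist at /path/to/ssl-cert and /path/to/ssl-key. If you only need a pair for testing, a self-signed certificate can be generated with OpenSSL; the file names and the CN=localhost subject below are placeholders, and in production the certificate should match the Logstash host name:

 # Generate a self-signed certificate and key for the Logstash Beats input
 openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
   -subj "/CN=localhost" \
   -keyout /path/to/ssl-key -out /path/to/ssl-cert

Point certificate_authorities in filebeat.yml at the same certificate file so that Filebeat trusts the certificate Logstash presents.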
Writing Logstash Data to a File

You can configure your Logstash pipeline to write data directly to a file with the file output plugin. Configure your Logstash instance to use the file output plugin by adding the following lines to the output section of the first-pipeline.conf file:
 file {  
   path => "/path/to/target/file"
 }  

Writing to Multiple Elasticsearch Nodes

Writing to multiple Elasticsearch nodes lightens the resource demands on a given Elasticsearch node, as well as providing redundant points of entry into the cluster when a particular node is unavailable. To configure your Logstash instance to write to multiple Elasticsearch nodes, edit the output section of the first-pipeline.conf file to read:
 output {  
   elasticsearch {  
     hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]  
   }  
 }  

Use the IP addresses of three non-master nodes in your Elasticsearch cluster in the hosts line. When the hosts parameter lists multiple IP addresses, Logstash load-balances requests across the list of addresses. Also note that the default port for Elasticsearch is 9200 and can be omitted in the configuration above.

Testing the Pipeline

At this point, your first-pipeline.conf file looks like this:
 input {  
   twitter {  
     consumer_key =>  
     consumer_secret =>  
     keywords =>  
     oauth_token =>  
     oauth_token_secret =>  
   }  
   beats {  
     port => "5043"  
     ssl => true  
     ssl_certificate => "/path/to/ssl-cert"  
     ssl_key => "/path/to/ssl-key"  
   }  
 }  
 output {  
   elasticsearch {  
     hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]  
   }  
   file {  
     path => "/path/to/target/file"
   }  
 }  

Logstash is consuming data from the Twitter feed you configured, receiving data from Filebeat, and indexing this information to three nodes in an Elasticsearch cluster as well as writing to a file. At the data source machine, run Filebeat with the following command:
 sudo ./filebeat -e -c filebeat.yml -d "publish"  

Filebeat will attempt to connect on port 5043. Until Logstash starts with an active Beats plugin, there won’t be any answer on that port, so any messages you see regarding failure to connect on that port are normal for now. To verify your configuration, run the following command:
 bin/logstash -f first-pipeline.conf --configtest  

The --configtest option parses your configuration file and reports any errors. When the configuration file passes the configuration test, start Logstash with the following command:
 bin/logstash -f first-pipeline.conf  

Use the grep utility to search in the target file to verify that information is present:
 grep Mozilla /path/to/target/file  

Run an Elasticsearch query to find the same information in the Elasticsearch cluster:
 curl -XGET 'localhost:9200/logstash-2015.07.30/_search?q=agent=Mozilla'  

Saturday, 9 January 2016

Searching, Downloading, and Installing Updates from the Windows command line (cscript)




The scripting sample in this topic shows you how to use Windows Update Agent (WUA) to scan, download, and install updates. The sample searches for all the applicable software updates and then lists those updates. Next, it creates a collection of updates to download and then downloads them. Finally, it creates a collection of updates to install and then installs them. If you want to search for, download, and install a specific update that you identify by its title, see Searching, Downloading, and Installing Specific Updates.

Before you attempt to run this sample, note the following: WUA must be installed on the computer (see Determining the Current Version of WUA for how to check the installed version); the sample can download updates only by using WUA and cannot download updates from a Software Update Services (SUS) 1.0 server; and running the sample requires Windows Script Host (WSH). For more information about WSH, see the WSH section of the Platform Software Development Kit (SDK).

If the sample is copied to a file named WUA_SearchDownloadInstall.vbs, you can run it by opening a Command Prompt window and typing the following command at the command prompt.

cscript WUA_SearchDownloadInstall.vbs
 Set updateSession = CreateObject("Microsoft.Update.Session")  
 updateSession.ClientApplicationID = "MSDN Sample Script"  
 Set updateSearcher = updateSession.CreateUpdateSearcher()  
 WScript.Echo "Searching for updates..." & vbCRLF  
 Set searchResult = _  
 updateSearcher.Search("IsInstalled=0 and Type='Software' and IsHidden=0")  
 WScript.Echo "List of applicable items on the machine:"  
 For I = 0 To searchResult.Updates.Count-1  
   Set update = searchResult.Updates.Item(I)  
   WScript.Echo I + 1 & "> " & update.Title  
 Next  
 If searchResult.Updates.Count = 0 Then  
   WScript.Echo "There are no applicable updates."  
   WScript.Quit  
 End If  
 WScript.Echo vbCRLF & "Creating collection of updates to download:"  
 Set updatesToDownload = CreateObject("Microsoft.Update.UpdateColl")  
 For I = 0 to searchResult.Updates.Count-1  
   Set update = searchResult.Updates.Item(I)  
   addThisUpdate = false  
   If update.InstallationBehavior.CanRequestUserInput = true Then  
     WScript.Echo I + 1 & "> skipping: " & update.Title & _  
     " because it requires user input"  
   Else  
     If update.EulaAccepted = false Then  
       WScript.Echo I + 1 & "> note: " & update.Title & _  
       " has a license agreement that must be accepted:"  
       WScript.Echo update.EulaText  
       WScript.Echo "Do you accept this license agreement? (Y/N)"  
       strInput = WScript.StdIn.Readline  
       WScript.Echo   
       If (strInput = "Y" or strInput = "y") Then  
         update.AcceptEula()  
         addThisUpdate = true  
       Else  
         WScript.Echo I + 1 & "> skipping: " & update.Title & _  
         " because the license agreement was declined"  
       End If  
     Else  
       addThisUpdate = true  
     End If  
   End If  
   If addThisUpdate = true Then  
     WScript.Echo I + 1 & "> adding: " & update.Title   
     updatesToDownload.Add(update)  
   End If  
 Next  
 If updatesToDownload.Count = 0 Then  
   WScript.Echo "All applicable updates were skipped."  
   WScript.Quit  
 End If  
 WScript.Echo vbCRLF & "Downloading updates..."  
 Set downloader = updateSession.CreateUpdateDownloader()   
 downloader.Updates = updatesToDownload  
 downloader.Download()  
 Set updatesToInstall = CreateObject("Microsoft.Update.UpdateColl")  
 rebootMayBeRequired = false  
 WScript.Echo vbCRLF & "Successfully downloaded updates:"  
 For I = 0 To searchResult.Updates.Count-1  
   set update = searchResult.Updates.Item(I)  
   If update.IsDownloaded = true Then  
     WScript.Echo I + 1 & "> " & update.Title   
     updatesToInstall.Add(update)   
     If update.InstallationBehavior.RebootBehavior > 0 Then  
       rebootMayBeRequired = true  
     End If  
   End If  
 Next  
 If updatesToInstall.Count = 0 Then  
   WScript.Echo "No updates were successfully downloaded."  
   WScript.Quit  
 End If  
 If rebootMayBeRequired = true Then  
   WScript.Echo vbCRLF & "These updates may require a reboot."  
 End If  
 WScript.Echo vbCRLF & "Would you like to install updates now? (Y/N)"  
 strInput = WScript.StdIn.Readline  
 WScript.Echo   
 If (strInput = "Y" or strInput = "y") Then  
   WScript.Echo "Installing updates..."  
   Set installer = updateSession.CreateUpdateInstaller()  
   installer.Updates = updatesToInstall  
   Set installationResult = installer.Install()  
   'Output results of install  
   WScript.Echo "Installation Result: " & _  
   installationResult.ResultCode   
   WScript.Echo "Reboot Required: " & _   
   installationResult.RebootRequired & vbCRLF   
   WScript.Echo "Listing of updates installed " & _  
   "and individual installation results:"   
   For I = 0 to updatesToInstall.Count - 1  
     WScript.Echo I + 1 & "> " & _  
     updatesToInstall.Item(i).Title & _  
     ": " & installationResult.GetUpdateResult(i).ResultCode    
   Next  
 End If  

Wednesday, 6 January 2016

OSCP


What it means to be an OSCP

OpenStack Installation Guide for Red Hat Enterprise Linux 7, CentOS 7...






Net-SNMP




DEVELOPMENT





Extending Net-SNMP with Perl


The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent:

 use NetSNMP::agent (':all');
 my $agent = new NetSNMP::agent();


The agent object has a register method which is used to register a callback function with a particular OID. The register function takes a name, an OID, and a pointer to the callback function. The following example registers a callback function named hello_handler with the SNMP agent to handle requests under the OID .1.3.6.1.4.1.8072.9999.9999 (a way to load and test the registered handler is sketched after the note below):

 $agent->register("hello_tester", ".1.3.6.1.4.1.8072.9999.9999",
                  \&hello_handler);


NOTE

The OID .1.3.6.1.4.1.8072.9999.9999 (NET-SNMP-MIB::netSnmpPlaypen) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United States).
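To load an embedded handler like the one registered above into the master agent and confirm the registration, the script is normally pulled in from snmpd.conf and then queried with snmpwalk. A rough sketch, assuming snmpd was built with embedded Perl support, that the snippets above are combined with a hello_handler definition (not shown in this post) into a script saved at the placeholder path /usr/share/snmp/hello.pl, and that a read-only "public" community is configured:

 # Append the embedded-Perl directive to snmpd.conf:
 echo 'perl do "/usr/share/snmp/hello.pl";' >> /etc/snmp/snmpd.conf
 # Restart snmpd and walk the registered subtree to verify the handler answers:
 service snmpd restart
 snmpwalk -v 2c -c public localhost .1.3.6.1.4.1.8072.9999.9999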

Saturday, 2 January 2016

How to defend yourself against MITM or Man-in-the-middle attack




Example
Protecting our data online is never going to be an easy task, especially nowadays when attackers regularly invent new techniques and exploits to steal it. Some attacks are not especially harmful to individual users, but large-scale attacks on popular web sites or financial databases can be highly dangerous. In most cases the attackers first try to push malware onto the user’s machine; sometimes, however, that technique doesn’t work out.



What is Man-in-the-middle attack

A popular method is the man-in-the-middle attack, also known as a bucket brigade attack or, in cryptography, sometimes a Janus attack. As the name suggests, the attacker sits between two parties, making them believe that they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. A man-in-the-middle attack can succeed only when the attacker can impersonate each endpoint convincingly enough to satisfy the other. Most cryptographic protocols therefore provide some form of endpoint authentication specifically to block MITM attacks; the Secure Sockets Layer (SSL) protocol, for example, can authenticate one or both parties using a mutually trusted certification authority.


How it works


Let’s say there are three characters in this story: Mike, Rob, and Alex. Mike wants to communicate with Rob. Meanwhile, Alex (the attacker) intercepts the conversation to eavesdrop and carries on a false conversation with Rob on behalf of Mike. First, Mike asks Rob for his public key. When Rob sends his key, Alex intercepts it, and this is how the man-in-the-middle attack begins. Alex then sends Mike a forged message that claims to be from Rob but contains Alex’s public key. Mike believes that the received key belongs to Rob, encrypts his message with Alex’s key, and sends it on to Rob; Alex can now read and alter everything before re-encrypting it with Rob’s real key. In the most common MITM attacks, the attacker uses a WiFi router to intercept the victim’s communication, either by exploiting a real router with malicious software or by configuring a laptop as a WiFi hotspot with a name commonly used in public areas such as airports or coffee shops. Once a user connects to that malicious access point and reaches sites such as online banking or commerce sites, the attacker logs the credentials for later use.


Man-in-the-middle attack prevention & tools


Most of the effective defences against MITM are found on the router or server side, so you rarely have dedicated control over the security of your transaction. What you can rely on is strong encryption between the client and the server: the server authenticates itself to the client by presenting a digital certificate, and only then is the connection established. Another way to prevent such MITM attacks is to never connect to open WiFi routers directly or, if you must, to use a browser plug-in such as HTTPS Everywhere or ForceTLS. These plug-ins help you establish a secure connection whenever the option is available.
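One client-side sanity check is also worth knowing: before entering credentials on an unfamiliar network, look at the certificate the server actually presents and compare its subject, issuer, and fingerprint with what you see from a network you trust; a mismatch is a strong hint that something is sitting in the middle. A quick sketch with OpenSSL (example.com is only a placeholder for the site you are checking):

 # Show the subject, issuer, and fingerprint of the certificate presented on this network
 openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null |
   openssl x509 -noout -subject -issuer -fingerprint -sha256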