Data Networking/Spring 2017/ERB

TELE 5330 Project 3 focuses on the application-layer protocols required to configure a basic enterprise network. The HTTP, DNS, and DHCP services are implemented on separate servers for robustness and load-balancing. Servers run Ubuntu 16.04 on virtual host machines running on VMware Workstation Pro.
 * Secure (HTTPS) Webserver for delivering HTML content using TLS
 * Firewall & Network Security for restricting traffic from outside the private project subnet (192.168.58.0/24)
 * DHCP (Dynamic Host Configuration Protocol) for dynamic IPv4 and IPv6 address allocation
 * DNS (Domain Name System) lookups for domain name resolution

Team Members

 * Elliot Landsman
 * Bhoomi Waghela
 * Rishabh Waghela

Secure (HTTPS) Webserver
We configured a webserver serving a template website (index.html and several other associated pages) using the native Apache 2 service on Ubuntu 16.04. HTTP traffic is delivered securely using Transport Layer Security (TLS); connections from clients over the older SSL 2.0 and SSL 3.0 implementations are not allowed. Unsecured HTTP requests to ports 80 and 8080 are redirected to the secure HTTPS port 443. Instead of relying on the built-in Ubuntu HTTPS certificate (ssl-cert-snakeoil), we issued a new, self-signed certificate. This offers marginally better security than relying on a mass-distributed, default Ubuntu certificate; however, the certificate can still be falsified, as its identity cannot be verified with an established Certificate Authority such as Verisign.

Protocol & Component Overview
The following application-layer components were used to configure a secure Apache webserver in Ubuntu.

HTTP (Hypertext Transfer Protocol)
HTTP uses a client-server paradigm: a server with a known IP address and name listens on well-known ports 80 or 8080 for specially-constructed TCP packets that contain an HTTP request with the following format:

GET /path/index.html HTTP/v.v
Host: www.host.tld:80
AdditionalHeader1: Value
AdditionalHeader2: Value

In HTTP 1.1, the Host header is required because a single server may host several websites. Additional headers make provisions for supporting caching (If-Modified-Since), cookies (Cookie), and different languages and character encodings, among other features. If the requested resource is found on the server, it responds with 200 OK; otherwise, it responds with 404 Not Found. Other possible responses are 201 Created and 202 Accepted, which indicate that the request succeeded but the resource has just been, or is still being, created for the client; 206 Partial Content indicates that only the requested byte range of the resource is returned. These codes are not shown to the user because they usually indicate that the browser must wait for the request to complete. The code 301 Moved Permanently indicates that a resource has permanently moved and supplies its new location; the browser typically makes a request to the new location automatically, again without showing the message to the user. A response message has the following format:

HTTP/v.v NNN CODENAME
Date: [Date/Time Stamp]
Content-Type: [text, image...]
Content-Length: [bytes]
AdditionalHeader1: Value
AdditionalHeader2: Value

[HTML content with the byte size specified in the Content-Length header]

Additional headers may include support for cookies, caching, and headers for data formatting and text encoding.
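As a concrete illustration of the request and response formats above (the host name, date, and content length are hypothetical):

```
GET /index.html HTTP/1.1
Host: www.project3.home:80
Accept-Language: en-US

HTTP/1.1 200 OK
Date: Tue, 11 Apr 2017 21:11:26 GMT
Content-Type: text/html
Content-Length: 1024

<!DOCTYPE html> ... (1024 bytes of HTML follow)
```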

TLS (Transport Layer Security)
TLS 1.2 replaces the older SSL (Secure Sockets Layer) 2.0 and 3.0 standards, both of which were compromised and retired, in 2011 and 2015 respectively. TLS and HTTPS use public-key cryptography on the server side only, which means that only the server must carry identity certificates verified by a known party (such as Verisign). Once the server shares its public key with the client, the client generates session keys (either using a random number or via the Diffie-Hellman key exchange), encrypts them with the server's public key, and sends them back. The following steps are required:
 * 1) The Client contacts a Server requesting a secure connection.
 * 2) The Server responds back with its certificate, which the Client verifies with a 3rd party before proceeding.
 * 3) The Server also responds with its public key.
 * 4) The Client generates session keys using one of several accepted methods (either symmetric, or asymmetric - the latter ensures forward secrecy if the server's certificate is later compromised), encrypts them with the Server's public key, and sends them back.
 * 5) A session is now established, and all communications will be made using the symmetric, or asymmetric session keys.
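Once the server is running, the negotiated protocol can be inspected from any client with openssl's s_client tool; a sketch, with the server address assumed from the later configuration sections:

```
# Should succeed and print the negotiated TLSv1.2 cipher suite
openssl s_client -connect 192.168.58.128:443 -tls1_2 </dev/null

# Should fail, since SSLv3 is not permitted (requires an OpenSSL build
# that still supports the -ssl3 flag)
openssl s_client -connect 192.168.58.128:443 -ssl3 </dev/null
```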

Apache 2 HTTP Server
Apache is a suite of webserver technologies first released in 1995. It is an evolution of the earlier NCSA HTTPd (HTTP Daemon), which was released in 1993 and offered one of the earliest widely available webserver capabilities. Apache splits its features into modular components, and its multi-processing modules distribute requests across multiple processes or threads. This improves performance in multi-processor environments and improves stability.

Apache 2 is available for free with Ubuntu, and can be enabled and configured quickly. However, care must be taken to properly configure security settings, as well as generally configure the webserver host for secure network access. The default Apache 2 configuration is inherently insecure and should not be exposed to external users.

Implementation
The following broad configuration steps must be taken to configure a basic website with TLS:
 * 1) Install and enable Apache 2 and HTTPS components
 * 2) Create a configuration file for the custom website based on default-ssl.conf
 * 3) Import custom HTML content, including an index.html page
 * 4) Configure a self-signed SSL certificate in place of the default ssl-cert-snakeoil
    * Note that this only marginally improves security, because self-signed certificates are easy to fake if generated with the same input parameters on a different machine
 * 5) Configure redirection of all insecure HTTP traffic from ports 80/8080 to 443
 * 6) Configure Apache 2 settings for enhanced security

Detailed configuration instructions follow.

Enable Apache 2 HTTP and HTTPS Components
Webserver components on Ubuntu distros are not enabled by default; this explains how to deploy and start them.
1) Install the Apache 2 services: sudo apt install apache2
2) Enable the SSL service module: sudo a2enmod ssl
3) Navigate to the site templates directory: cd /etc/apache2/sites-available
4) Create the Project 3 site configuration from the default-ssl template: sudo cp default-ssl.conf project3-site.conf
5) Enable the Project 3 site: sudo a2ensite project3-site
6) Restart the Apache 2 server to apply settings: sudo service apache2 reload
7) Test the configuration from a client machine that has IP network access to this server:
 * Open a browser (e.g. Firefox)
 * Navigate to the webserver's IP address with the HTTPS prefix; e.g. https://192.168.58.128
 * When a warning is shown that the server's certificate is not signed by a valid Certificate Authority, add the site to an exception list, or bypass the warning
 * The default Apache 2 webpage ("It Works!") is shown

Upload Custom HTML Content
This explains how to upload a custom index.html and other content to your site.
1) Download a website template online; free templates are widely available.
 * Ensure that the template includes an index.html page at the root level.
 * Alternatively, construct a simple index.html page in a text editor.
 * Note: if an index.html page is not available, additional configuration steps must be taken to hide the website's file structure from visitors for security.
2) Create a directory for your site where all the HTML content will be stored in the Ubuntu www store (e.g. project3): mkdir /var/www/project3
3) Copy the HTML files for the selected template into the site's HTML folder; the -r option indicates a recursive copy (include subfolders): sudo cp -r /home/elandsman/desktop/lawfirm/* /var/www/project3
4) Navigate to the enabled Apache 2 sites directory: cd /etc/apache2/sites-enabled
5) Edit your site's configuration file, which is based on default-ssl.conf: sudo vim project3-site.conf
 * Set the DocumentRoot property to the site's www folder; e.g. /var/www/project3
 * Set the ServerName property to the site's URL path; e.g. project3.home

Example site .conf configuration:
ServerName project3.home
DocumentRoot /var/www/project3

6) Restart the Apache 2 server to apply settings: sudo service apache2 reload
7) Test the configuration from a client machine that has IP network access to this server:
 * Open a browser (e.g. Firefox)
 * Navigate to the webserver's IP address with the HTTPS prefix; e.g. https://192.168.58.128
 * If a warning is shown that the server's certificate is not signed by a valid Certificate Authority, add the site to an exception list, or bypass the warning
 * The custom template content is shown
 * The default Apache 2 webpage ("It Works!") is NOT shown

Configure Self-Signed SSL Certificate
Note that a self-signed certificate is easy to fake when recreated on a different machine with the same input settings. It is, however, better than using the distro default key (snakeoil).
1) Generate a new 2048-bit SSL certificate with a 365-day expiration: sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/project3.key -out /etc/ssl/certs/project3.crt
2) Edit the Project 3 site configuration to reference the correct key and certificate files: sudo vim /etc/apache2/sites-enabled/project3-site.conf
 * Set the SSLCertificateFile property to the generated certificate file; e.g. /etc/ssl/certs/project3.crt
 * Set the SSLCertificateKeyFile property to the generated private key file; e.g. /etc/ssl/private/project3.key

Example secure site .conf file configuration:
SSLCertificateFile     /etc/ssl/certs/project3.crt
SSLCertificateKeyFile  /etc/ssl/private/project3.key
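The new certificate's subject and validity window can be confirmed with openssl. A runnable sketch that generates a throwaway certificate in /tmp using the same invocation (the subject fields are hypothetical):

```shell
# Generate a throwaway self-signed cert/key pair in /tmp, mirroring the
# openssl invocation above (no sudo needed for /tmp paths).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/C=US/ST=MA/O=Project3/CN=project3.home" \
  -keyout /tmp/project3.key -out /tmp/project3.crt 2>/dev/null

# Inspect the subject and validity dates of the generated certificate.
openssl x509 -in /tmp/project3.crt -noout -subject -dates
```

The same x509 inspection works against the real /etc/ssl/certs/project3.crt.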

Redirect HTTP Requests to Secure HTTPS Content
Visitors that request content from the default HTTP ports 80/8080 must be redirected to secure HTTPS content on port 443.
1) Edit your site's configuration file: sudo vim /etc/apache2/sites-enabled/project3-site.conf
2) Add VirtualHost nodes for *:80 and *:8080, before the regular site configuration.
3) Test the configuration from a client machine that has IP network access to this server:
 * Open a browser (e.g. Firefox)
 * Navigate to the webserver's IP address with the non-secure HTTP prefix; e.g. http://192.168.58.128
 * Traffic is redirected to the secure HTTPS site automatically: https://192.168.58.128
 * The custom template content is shown
 * The default Apache 2 webpage ("It Works!") is NOT shown
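The VirtualHost stanzas themselves were not preserved in this write-up; a minimal sketch, assuming the server name and address used elsewhere in this report (port 8080 also needs a matching Listen 8080 directive in ports.conf):

```
<VirtualHost *:80>
    ServerName project3.home
    Redirect permanent / https://192.168.58.128/
</VirtualHost>

<VirtualHost *:8080>
    ServerName project3.home
    Redirect permanent / https://192.168.58.128/
</VirtualHost>
```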

Configure Apache 2 for Enhanced Security
The following settings must be changed from their defaults to ensure structural information about the OS, Apache, and installed modules is hidden.
1) Edit the Apache 2 core configuration file: sudo vim /etc/apache2/apache2.conf
2) Add the hardening properties to the .conf file.
3) Verify that the properties are not set elsewhere in the document, which could conflict with your settings.
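The hardening properties themselves were not preserved here; common Apache 2 directives that hide version and module details, as a sketch:

```
ServerTokens Prod       # Send only "Apache" in the Server response header
ServerSignature Off     # No version footer on server-generated error pages
TraceEnable Off         # Disable the HTTP TRACE method
```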

Configure Webserver Backups
This explains how to back up the entire contents of the project www folder to a remote backup machine. We selected the DHCP server as the backup location for HTML content.

On the selected remote backup machine:
1) Install the SSH server components: sudo apt-get install openssh-server
2) Make a .ssh directory for the webserver's public key: mkdir ~/.ssh/

On the webserver machine:
1) Install the SSH client components: sudo apt-get install openssh-client
2) Generate a public/private key pair for secure SSH connections without password entry: sudo ssh-keygen -t rsa
3) Push the public key to the remote backup server: cat ~/.ssh/id_rsa.pub | ssh user@192.168.58.2 'cat >> .ssh/authorized_keys'
4) Create a folder to stage the backup files and to contain the backup shell script: mkdir ~/backup
5) Create a shell script called backup.sh in the backup directory: vim ~/backup/backup.sh
6) Add shell script logic to back up the www/project3 directory periodically.
 * Note that the username and private key file must be explicitly specified in the ssh call to ensure public key authentication is successful.
 * If authentication falls back on password entry, the cron job will fail, since it is automated and cannot supply a password autonomously.
7) Add a cron job to run the script at 2:00am every day; off-hours (e.g. nighttime) are preferred to ensure files are not being edited: sudo crontab -e
8) Add the following job schedule definition to execute the backup shell script: 0 2 * * * /home/elandsman/backup/backup.sh
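The backup.sh logic itself was not preserved; a sketch consistent with the steps above, where the staging path, username, and backup-host address are assumptions taken from this section:

```
#!/bin/sh
# backup.sh - archive the site content and push it to the backup host.
STAMP=$(date +%Y%m%d)
ARCHIVE=/home/elandsman/backup/project3-$STAMP.tar.gz

# Stage a compressed archive of the web root.
tar -czf "$ARCHIVE" /var/www/project3

# Copy it to the backup machine; the user and identity file are given
# explicitly so public-key authentication is used (no password prompt,
# which would make the cron job fail).
scp -i /home/elandsman/.ssh/id_rsa "$ARCHIVE" user@192.168.58.2:~/
```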

Component Overview
We used IPTables, the built-in firewall mechanism in Linux. Note that the newer ufw front-end for IPTables was disabled to ensure its settings do not conflict with IPTables. Policies and rules configured in IPTables are not persistent and are erased on reboot; several persistence mechanisms exist, including the iptables-persistent aptitude package and startup scripts. We used iptables-persistent. Standard firewall practice recommends implicitly denying all traffic except for explicitly enabled components. This is appropriate in a server configuration, as only explicitly used/enabled components should be allowed to communicate.

We set the following implicit policies:
 * 1) Policy: Deny all incoming traffic
 * 2) Policy: Deny all forward traffic
 * 3) Policy: Permit all outgoing traffic
    * Note: Permitting all outgoing traffic has the potential to allow compromised applications to download malicious content from a remote server. Proper security policy guidelines recommend only allowing outgoing traffic to specific subnets, or from specific trusted applications (such as Apache modules for a Linux webserver). However, timing constraints did not allow for fully configuring a proper outgoing traffic policy, as exceptions must be made for DHCP, DNS, HTTP/HTTPS and ping traffic, greatly complicating the policy implementation.

The following incoming traffic was permitted:
 * 1) Rule: Incoming traffic belonging to established connections. This permits TCP (e.g. HTTP), ICMP (e.g. ping), and UDP (e.g. DNS and DHCP) traffic initiated from this server.
 * 2) Rule: Incoming ping requests (ICMP Type 8) from the project IPv4 subnet (192.168.58.0/24) only
 * 3) Rule: Incoming HTTP/HTTPS traffic (TCP to ports 80, 8080 and 443) from the project IPv4 subnet (192.168.58.0/24) only
 * 4) Rule: Incoming traffic on interface lo (localhost); this helps with maintenance and debugging activities

All other incoming traffic is implicitly blocked by policy 1 above. Note that since HTML content is only served to IPv4 clients at this time, incoming HTTP/HTTPS traffic was not permitted in the IPv6 tables.

IPv4 Firewall Rules & Policies
We used IPTables to implement the above policies and rules.
1) Disable the Ubuntu firewall so it does not conflict with the IPTables configuration: sudo ufw disable
2) Install the iptables-persistent package: sudo apt-get install iptables-persistent
3) Flush (clear) all existing IPTables rules; the default rules are inappropriate for a secure server configuration: sudo iptables -F
4) Configure the base firewall policies:
sudo iptables -P INPUT DROP     #Block all incoming traffic
sudo iptables -P FORWARD DROP   #Block all forward traffic
sudo iptables -P OUTPUT ACCEPT  #Allow all outgoing traffic
5) Permit incoming return traffic for established connections, including TCP, DHCP (UDP), DNS (UDP), and ping (ICMP): sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
6) Permit ping (ICMP Echo Request) packets from the 192.168.58.0/24 subnet only, for troubleshooting: sudo iptables -A INPUT -s 192.168.58.0/24 -p icmp --icmp-type 8 -j ACCEPT
7) Permit HTTP and HTTPS (TCP to 80, 8080 and 443) traffic from the 192.168.58.0/24 subnet only:
sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 8080 -j ACCEPT
sudo iptables -A INPUT -s 192.168.58.0/24 -p tcp --dport 443 -j ACCEPT
8) Permit incoming and outgoing traffic on the localhost interface, for testing and maintenance:
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A OUTPUT -o lo -j ACCEPT
9) Inspect the active network policy rules on the server: sudo iptables -S
10) Save the configured rules using iptables-persistent: sudo netfilter-persistent save

IPv6 Firewall Rules & Policies
We used IPTables (ip6tables) to implement the above policies and rules for IPv6. Note that since HTTP/HTTPS content is not served to IPv6 clients at this point, no provisions were made to accept TCP traffic on ports 80, 8080 and 443.
1) Flush (clear) all policies and rules from the IPv6 tables: sudo ip6tables -F
2) Block all incoming and forward traffic:
sudo ip6tables -P INPUT DROP
sudo ip6tables -P FORWARD DROP
3) Accept incoming return traffic for connections originating from this server: sudo ip6tables -I INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
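One caveat with a default-drop IPv6 input policy is that ICMPv6 carries Neighbor Discovery, which IPv6 needs even on the local link; a commonly added exception (not part of the original configuration) would be:

```
sudo ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
```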

Protocol & Component Overview
The Domain Name System (DNS) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System is an essential component of the functionality of the Internet and has been in use since 1985.

Domain Terminology
Domain Name System

The domain name system, more commonly known as "DNS", is the networking system that allows us to resolve human-friendly names to unique addresses.

Domain Name

A domain name is the human-friendly name that we are used to associating with an internet resource. For instance, "google.com" is a domain name. Some people will say that the "google" portion is the domain, but we can generally refer to the combined form as the domain name.

The URL "google.com" is associated with the servers owned by Google Inc. The domain name system allows us to reach the Google servers when we type "google.com" into our browsers.

IP Address

An IP address is what we call a network addressable location. Each IP address must be unique within its network. When we are talking about websites, this network is the entire internet.

IPv4 addresses, the most common form, are written as four sets of numbers, each set having up to three digits, with each set separated by a dot. For example, "111.222.111.222" could be a valid IPv4 address. With DNS, we map a name to that address so that you do not have to remember a complicated set of numbers for each place you wish to visit on a network.

Top-Level Domain

A top-level domain, or TLD, is the most general part of the domain. The top-level domain is the furthest portion to the right (as separated by a dot). Common top-level domains are "com", "net", "org", "gov", "edu", and "io".

Top-level domains are at the top of the hierarchy in terms of domain names. Certain parties are given management control over top-level domains by ICANN (Internet Corporation for Assigned Names and Numbers). These parties can then distribute domain names under the TLD, usually through a domain registrar.

Hosts

Within a domain, the domain owner can define individual hosts, which refer to separate computers or services accessible through a domain. For instance, most domain owners make their web servers accessible through the bare domain (example.com) and also through the "host" definition "www" (www.example.com).

You can have other host definitions under the general domain. You could have API access through an "api" host (api.example.com) or you could have ftp access by defining a host called "ftp" or "files" (ftp.example.com or files.example.com). The host names can be arbitrary as long as they are unique for the domain.

SubDomain

A subject related to hosts is subdomains.

DNS works in a hierarchy. TLDs can have many domains under them. For instance, the "com" TLD has both "google.com" and "ubuntu.com" underneath it. A "subdomain" refers to any domain that is part of a larger domain. In this case, "ubuntu.com" can be said to be a subdomain of "com". This is typically just called the domain or the "ubuntu" portion is called a SLD, which means second level domain.

Likewise, each domain can control "subdomains" that are located under it. This is usually what we mean by subdomains. For instance you could have a subdomain for the history department of your school at "www.history.school.edu". The "history" portion is a subdomain.

The difference between a host name and a subdomain is that a host defines a computer or resource, while a subdomain extends the parent domain. It is a method of subdividing the domain itself.

IPv4
1) Initially, the network manager assigns a dynamic IP address to the interface, but servers need to have a static IP address. This can be done by changing the configuration in the /etc/network/interfaces file: sudo nano /etc/network/interfaces

In this file, we need to add the address for the required interface and save it using Ctrl+X followed by Y.

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.58.3
    netmask 255.255.255.0
    network 192.168.58.0
    broadcast 192.168.58.255

2) After changing the /etc/network/interfaces file, reboot the system with the following command:

sudo init 6

3) Restart the network manager: sudo service network-manager restart

4) Install the bind9 server    sudo apt-get install bind9

5) After installing the bind9 server, we need to make changes to the configuration files in the bind directory: cd /etc/bind, then sudo nano named.conf.options

6) In named.conf.options, add the forwarders:

forwarders {
    192.168.58.3;
};

7) Configure forward and reverse lookup zones in named.conf.local: sudo nano named.conf.local

Forward and reverse lookup zones for IPv4 on the slave:

zone "project3.home" {
    type slave;
    masters { 192.168.58.3; };
    file "/etc/bind/for.project3.home";
};

zone "58.168.192.in-addr.arpa" {
    type slave;
    masters { 192.168.58.3; };
    file "/etc/bind/rev.project3.home";
};

Forward and reverse lookup zones for IPv4 on the master:

zone "project3.home" {
    type master;
    file "/etc/bind/for.project3.home";
};

zone "58.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/rev.project3.home";
};

8) Create a subdirectory called 'zones' and create the forward and reverse database files:

$TTL    604800
@       IN      SOA     project3.home. dns1.project3.home. (
                             6  ; Serial
                        604800  ; Refresh
                         86400  ; Retry
                       2419200  ; Expire
                        604800 ); Negative Cache TTL
;
@       IN      NS      dns1.project3.home.
@       IN      A       192.168.58.3
@       IN      AAAA    ::1


 * Additional computers in network:

www    IN      A       192.168.58.128
dns1   IN      A       192.168.58.3
dns2   IN      A       192.168.58.4

9) Create the reverse lookup database file

$TTL    604800
@       IN      SOA     project3.home. dns1.project3.home. (
                             6  ; Serial
                        604800  ; Refresh
                         86400  ; Retry
                       2419200  ; Expire
                        604800 ); Negative Cache TTL
;
@       IN      NS      dns1.project3.home.
3       IN      PTR     dns1.project3.home.

Other computers in the network:

128    IN      PTR     www.project3.home.
4      IN      PTR     dns2.project3.home.
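bind9 ships with checkers that validate the configuration and zone files before a restart; a sketch, assuming the file locations declared in named.conf.local above:

```
named-checkconf /etc/bind/named.conf.local
named-checkzone project3.home /etc/bind/for.project3.home
named-checkzone 58.168.192.in-addr.arpa /etc/bind/rev.project3.home
```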

10) Set the nameservers in the resolv.conf file: sudo nano /etc/resolv.conf

nameserver 192.168.58.3
nameserver 192.168.58.4
search project3.home

11) Restart the bind9 server    sudo service bind9 restart

12) Configure the resolv.conf file as in step 10 and restart the bind9 server
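Resolution through the new servers can be verified from any client with dig; the expected answers follow from the zone files above:

```
dig @192.168.58.3 www.project3.home +short    # expect 192.168.58.128
dig @192.168.58.3 -x 192.168.58.128 +short    # expect www.project3.home.
dig @192.168.58.4 www.project3.home +short    # the slave should answer too
```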

IPv6
1) Set static IPv6 address to the master and slave server by the following commands

sudo nano /etc/network/interfaces

auto eth0
iface eth0 inet6 static
    address fe80::aba9:79f6:5321:963c
    netmask 64

2) In the named.conf.local file, add the reverse IPv6 domain for the master and slave.

In the master configuration file, add:

zone "project3.home" {
    type master;
    allow-transfer { 192.168.58.3; };
    file "/etc/bind/rev.project3.home";
};

In the slave configuration file, add:

zone "project3.home" {
    type slave;
    masters { 192.168.58.4; };
    file "/etc/bind/rev.project3.home";
};

3) Restart both the master and slave DNS servers.

DHCP
We configured dual-mode IPv4 and IPv6 dynamic addressing using ISC DHCP (also referred to as DHCP Daemon or DHCPD) on Ubuntu.

Protocol & Component Overview
The Dynamic Host Configuration Protocol (DHCP) is used to issue dynamic IP addresses to hosts configured to request them. DHCP may also be configured to issue a specific IP address to nodes with a specific layer-2 MAC. This is necessary when fixed-address nodes have an address inside the DHCP server's configured address space. The addresses are issued for a specific lease duration identified during the negotiation process. At the end of the lease, if a renewal is not requested, the server will assume the address is available again.

DHCP servers communicate through UDP port 67 (aliased as BOOTPS - Boot Protocol Server - on many systems). The requesting client communicates through UDP port 68 (aliased as BOOTPC - Boot Protocol Client - on many systems).

There are four steps involved in requesting a DHCP address assignment: Discover, Offer, Request, and Acknowledge (often abbreviated DORA).

Since the source and destination IP addresses change several times during the exchange, the communication stream is identified via a 32-bit transaction ID.

Implementation
We have implemented a dual-stacked IPv4 & IPv6 server with the following properties:

The following fixed-address leases were defined for servers:

The Ubuntu ISC DHCP service was used to implement the DHCP server functionality.
1) Install radvd for IPv6 router advertisement functionality:
 * Deploy radvd using aptitude: sudo apt-get install radvd
 * Edit the radvd configuration file: sudo vim /etc/radvd.conf
 * Add clauses to advertise the appropriate physical interface as IPv6 capable.
 * Edit the System Control configuration file: sudo vim /etc/sysctl.conf
 * Enable IPv6 forwarding in the System Control configuration file: net.ipv6.conf.default.forwarding=1
 * Restart the radvd service for changes to take effect: sudo service radvd restart
 * Check the radvd service status to ensure it started without errors: sudo service radvd status
 * Sample output (first 3 rows only):
   ● radvd.service - LSB: Router Advertising Daemon
     Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
     Active: active (running) since Tue 2017-04-11 21:11:26 PDT; 1 day 18h ago
2) Install the ISC DHCP components from aptitude: sudo apt-get install isc-dhcp-server
3) Edit the ISC DHCP service configuration file: sudo vim /etc/default/isc-dhcp-server
 * Change the following stanza to enable IPv6 functionality: OPTIONS="-6"
4) Edit the DHCP IPv4 lease configuration file: sudo vim /etc/dhcp/dhcpd.conf
 * Comment out the domain-name and domain-name-servers stanzas, if defined; they will be set later per-subnet.
 * Set the following stanza to identify this server as the official server for this network: authoritative;
 * Add a declaration to define an IPv4 address range using the properties defined in the tables above.
5) Edit the DHCP IPv6 lease configuration file: sudo vim /etc/dhcp/dhcpd6.conf
 * Comment out the domain-name and domain-name-servers stanzas, as for IPv4.
 * Set the server as authoritative, as for IPv4.
 * Add a declaration to define an IPv6 address range using the properties defined in the tables above; note the use of the IPv6-specific stanzas.
6) Check the IPv4 & IPv6 DHCP service configuration:
sudo dhcpd -t
sudo dhcpd -6 -t
 * Example output:
   Internet Systems Consortium DHCP Server 4.3.3
   Copyright 2004-2015 Internet Systems Consortium.
   All rights reserved.
   For info, please visit https://www.isc.org/software/dhcp/
   WARNING: Host declarations are global. They are not limited to the scope you declared them in.
   Config file: /etc/dhcp/dhcpd.conf
   Database file: /var/lib/dhcp/dhcpd.leases
   PID file: /var/run/dhcpd.pid
 * If no output indicating a syntax error in the configuration files is shown, the configuration is OK.
7) Restart the IPv4 and IPv6 DHCP services for changes to take effect:
sudo service isc-dhcp-server restart
sudo service isc-dhcp-server6 restart
8) Check the status of the IPv4 and IPv6 DHCP services:
sudo service isc-dhcp-server status
sudo service isc-dhcp-server6 status
 * Example IPv4 service output (first 3 rows):
   ● isc-dhcp-server.service - ISC DHCP IPv4 server
     Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2017-04-08 16:09:17 PDT; 19h ago
 * Example IPv6 service output (first 3 rows):
   ● isc-dhcp-server6.service - ISC DHCP IPv6 server
     Loaded: loaded (/lib/systemd/system/isc-dhcp-server6.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2017-04-08 17:12:00 PDT; 18h ago
 * If both services have an active status with no warnings or errors indicated, the configuration is correct.
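The declaration bodies referenced in the steps above were not preserved in this write-up; minimal sketches, where the lease ranges, router address, and IPv6 prefix are assumptions (the documentation prefix 2001:db8::/32 stands in for the real one):

```
# /etc/radvd.conf - advertise eth0 as IPv6-capable, deferring address
# assignment to DHCPv6 via the managed flag
interface eth0 {
    AdvSendAdvert on;
    AdvManagedFlag on;
    prefix 2001:db8:58::/64 {
        AdvOnLink on;
        AdvAutonomous off;
    };
};

# /etc/dhcp/dhcpd.conf - IPv4 range on the project subnet
authoritative;
subnet 192.168.58.0 netmask 255.255.255.0 {
    range 192.168.58.100 192.168.58.200;
    option routers 192.168.58.1;
    option domain-name-servers 192.168.58.3, 192.168.58.4;
    option domain-name "project3.home";
    default-lease-time 600;
    max-lease-time 7200;
}

# /etc/dhcp/dhcpd6.conf - IPv6 range; note the subnet6/range6 stanzas
subnet6 2001:db8:58::/64 {
    range6 2001:db8:58::100 2001:db8:58::200;
}
```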

Configuration
1. Install the SSH server on one virtual machine: sudo apt-get install openssh-server

2. Install the SSH client on the second virtual machine: sudo apt-get install openssh-client

3. Generate public and private keys on the client machine: sudo ssh-keygen -t rsa

4. Copy the public key to the SSH server:

ssh backupserver@192.168.58.3 mkdir -p .ssh
cat ~/.ssh/id_rsa.pub | ssh backupserver@192.168.58.4 'cat >> .ssh/authorized_keys'

5. To execute a backup manually, use the following: sudo tar -cvpzf backupfile.tar.gz /var/www/html/index.html

6. To execute automatic backups, add the following cron jobs: sudo crontab -e

* * * * * sudo tar -cvpzf backupfile.tar.gz /var/www/html/index.html
* * * * * sudo scp backupfile.tar.gz backupserver@192.168.58.4:/home/backupserver/

Introduction
A VPN is used to develop a secure tunnel between two hosts. The data traversing the tunnel is encrypted using 128-bit AES encryption. It is used for security purposes: to avoid eavesdropping and attacks from hackers. IPsec can operate in two modes, tunnel mode (network-to-network) and transport mode (host-to-host); transport mode is used here, since both hosts are within the same network.

Test Plan
1) Connect to the VPN server; once connected, a point-to-point tunnel session is established, which can be seen in the interface list:

ifconfig - retrieves the detected network interfaces and their information
ppp0   Link encap:Point-to-Point Protocol - shows that the device is connected to a private network

Benefits
An IPSec Virtual Private Network (VPN) is a virtual network that operates across the public network, but remains "private" by establishing encrypted tunnels between two or more end points. An IP Security (IPSec) VPN secures communications and access to network resources for site-to-site access using encryption, authentication, and key management protocols. On a properly configured VPN, communications are secure, and the information that is passed is protected from attackers. VPNs provide:
 * Data integrity: Data integrity ensures that no one has tampered with or modified data while it traverses the network. Data integrity is maintained with hash algorithms.
 * Authentication: Authentication guarantees that the data you receive is authentic; that is, it originates from where it claims to, and not from someone masquerading as the source. Authentication is provided by keyed hash algorithms (HMACs) combined with pre-shared keys or certificates.
 * Confidentiality: Confidentiality ensures data is protected from being examined or copied while transiting the network. Confidentiality is accomplished using encryption.
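The integrity property above can be illustrated with ordinary hash tools: IPSec computes a keyed hash (HMAC) over each packet, and the receiver recomputes it to detect modification. Below is a minimal, unkeyed sketch of the same idea; the filenames are illustrative and not part of the VPN configuration.

```shell
# A sender records a digest of the payload before transmission.
echo "payload v1" > message.txt
ORIGINAL=$(sha256sum message.txt | awk '{print $1}')

# Simulate tampering in transit.
echo "payload v2" > message.txt

# The receiver recomputes the digest on arrival.
RECEIVED=$(sha256sum message.txt | awk '{print $1}')

# Comparing digests detects the modification.
if [ "$ORIGINAL" = "$RECEIVED" ]; then
    echo "integrity OK"
else
    echo "tampering detected"
fi
```

In real IPSec the hash is keyed, so an attacker who modifies the packet cannot simply recompute a matching digest.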

Server
Step 1: Install the packages used to configure the VPN. Command: sudo apt-get install ipsec-tools strongswan-starter

Step 2: Open and edit the IPSec configuration file. Command: sudo nano /etc/ipsec.conf

Step 3: Add the following connection definition. Command: conn webserver-to-nfs authby=secret auto=route keyexchange=ike left=192.168.58.3 right=192.168.58.4 type=transport esp=aes128gcm16!

Step 4: Create the file that will hold the pre-shared keys. Command: sudo nano /etc/ipsec.secrets

Step 5: Add the following line. Command: 192.168.58.3 192.168.58.4 : PSK "your keys"

Step 6: Restart IPSec. Command: ipsec restart

Step 7: To check the status, use statusall. Command: ipsec statusall
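Assembled from Steps 2-5 above, the two strongSwan files on the webserver host look like the following sketch (the connection name comes from the steps; the PSK value is a placeholder to be replaced with a real secret):

```
# /etc/ipsec.conf
conn webserver-to-nfs
    authby=secret
    auto=route
    keyexchange=ike
    left=192.168.58.3
    right=192.168.58.4
    type=transport
    esp=aes128gcm16!

# /etc/ipsec.secrets
192.168.58.3 192.168.58.4 : PSK "your keys"
```

auto=route installs a trap policy, so the tunnel is negotiated on demand when matching traffic first appears, and esp=aes128gcm16! (the trailing ! makes it a strict proposal) selects AES-GCM, which provides the 128-bit encryption and integrity in one algorithm.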

Host 2

Step 1: Install the following Command: sudo apt-get install ipsec-tools strongswan-starter

Step 2: Open and edit the following file. Command:

sudo nano /etc/ipsec.conf

Step 3: Add the following Command: conn webserver-to-nfs authby=secret auto=route keyexchange=ike left=192.168.58.3 right=192.168.58.4 type=transport esp=aes128gcm16!

Step 4: Create the file that will hold the pre-shared keys. Command:

sudo nano /etc/ipsec.secrets

Step 5: Add the following Command:

192.168.58.3 192.168.58.4 : PSK "your keys"

Step 6: Restart IPSec Command:

ipsec restart

Step 7: To check the status, use statusall. Command: ipsec statusall

Testing: Step 1: From either host, send large pings across the tunnel. Command: ping -s 4048 192.168.58.3

Step 2: Watch the status from the other host. Command: watch ipsec statusall

What Is ARP Spoofing?
ARP spoofing is a type of attack in which a malicious actor sends falsified ARP (Address Resolution Protocol) messages over a local area network. This links the attacker's MAC address with the IP address of a legitimate computer or server on the network. Once the attacker's MAC address is associated with an authentic IP address, the attacker begins receiving any data intended for that IP address. ARP spoofing can enable malicious parties to intercept, modify, or even stop data in transit. These attacks can only occur on local area networks that utilize the Address Resolution Protocol.

ARP Spoofing Attacks

The effects of ARP spoofing attacks can have serious implications for enterprises. In their most basic application, ARP spoofing attacks are used to steal sensitive information. Beyond this, ARP spoofing is often used to facilitate other attacks such as:

 * 1) Denial-of-service attacks: DoS attacks often leverage ARP spoofing to link multiple IP addresses with a single target's MAC address. As a result, traffic that is intended for many different IP addresses will be redirected to the target's MAC address, overloading the target with traffic.
 * 2) Session hijacking: Session hijacking attacks can use ARP spoofing to steal session IDs, granting attackers access to private systems and data.
 * 3) Man-in-the-middle attacks: MITM attacks can rely on ARP spoofing to intercept and modify traffic between victims.
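Since ARP spoofing ties one MAC address to several IP addresses, a simple warning sign is a duplicated MAC in the ARP cache. The sketch below flags such duplicates; the sample cache lines are illustrative, and on a live Linux host the input would come from ip neigh or arp -n instead.

```shell
# Flag MAC addresses that appear for more than one IP address in
# ARP-cache-style output. CACHE holds sample lines in 'ip neigh' format;
# the addresses are made up for illustration.
CACHE='192.168.58.1 dev eth0 lladdr aa:bb:cc:dd:ee:01 REACHABLE
192.168.58.3 dev eth0 lladdr aa:bb:cc:dd:ee:02 REACHABLE
192.168.58.4 dev eth0 lladdr aa:bb:cc:dd:ee:02 STALE'

# Field 5 is the MAC address; 'uniq -d' keeps only duplicated values.
SUSPECTS=$(echo "$CACHE" | awk '{print $5}' | sort | uniq -d)
echo "MACs claiming multiple IPs: $SUSPECTS"
```

A duplicated MAC is not proof of spoofing (a router legitimately answers for many off-subnet IPs), but a gateway MAC suddenly claiming a neighbor's address on the local subnet is worth investigating.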

Introduction
NFS (Network File System) was originally developed by Sun Microsystems in 1984 for sharing files and folders between Unix systems. It allows local file systems to be mounted over a network, so that remote hosts can interact with them as if they were mounted locally. With the help of NFS, we can set up file sharing between Unix and Linux systems in either direction.

Benefits

 * 1) NFS allows local access to remote files.
 * 2) It uses standard client/server architecture for file sharing between all *nix based machines.
 * 3) With NFS it is not necessary that both machines run on the same OS.
 * 4) With the help of NFS we can configure centralized storage solutions.
 * 5) Users get their data irrespective of physical location.
 * 6) No manual refresh needed for new files.
 * 7) Newer versions of NFS also support ACLs and pseudo-root mounts.
 * 8) Can be secured with Firewalls and Kerberos.

Scenario
In this scenario we export a file system from the host at 192.168.58.4 (NFS server) and mount it on the host at 192.168.58.3 (NFS client). Both the NFS server and the NFS client will be running Ubuntu Linux.

Server
Install the nfs-common package on both the NFS client and the NFS server using: apt-get install nfs-common

Install the extra server package on the NFS server using: apt-get install nfs-kernel-server

Use the following command to check whether NFS is installed correctly on the server side: rpcinfo -p

Use the following command to load the NFS module on the server side: modprobe nfs

Now we created a directory /public and three empty files inside it using the following commands: mkdir /public followed by touch /public/nfs1 /public/nfs2 /public/nfs3

Now we edited the file /etc/exports and added the following export line: /home/rishabh/Public/nfs.p3 192.168.58.3/24(rw,nohide,insecure,no_subtree_check,async,no_root_squash)

Mount the exported folder on the client machine: mount -t nfs 192.168.58.4:/home/rishabh/Public/nfs.p3 /home/nfs_local.p3
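To make the mount survive reboots, the client can carry an /etc/fstab entry instead of the manual mount command. The sketch below follows the export path and server address from the scenario above; the mount options shown are common defaults and were not part of the original steps:

```
# /etc/fstab on the NFS client (192.168.58.3)
192.168.58.4:/home/rishabh/Public/nfs.p3  /home/nfs_local.p3  nfs  rw,hard  0  0
```

After adding the line, mount -a applies it immediately, and the hard option makes the client retry indefinitely if the server becomes unreachable rather than returning I/O errors.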