Linux servers - Exercise 2
==============

*Disclaimer:*
--------------

This exercise is part of the [Linux servers (ICT4TN021, spring 2018) // Linux-palvelimet (ICT4TN021, kevät 2018)](http://www.haaga-helia.fi/fi/opinto-opas/opintojaksokuvaukset/ICT4TN021) course, organized as part of the Information Technology studies at Haaga-Helia University of Applied Sciences, Helsinki, Finland. The course lecturer, [Tero Karvinen](http://terokarvinen.com/), has defined the original assignment descriptions in Finnish; they are presented in this document in English. Answers and translations have been written by Pekka Helenius (me, ~ Fincer).

**a)** Create two different log events: one successful event and one failed or forbidden event. Analyze the log lines in detail.
--------------

**Answer:**

**Successful event example - Apache server**

```
phelenius@my-machine:~$ cat /var/log/apache2/access.log
127.0.0.1 - - [24/Jan/2018:19:14:47 +0200] "GET / HTTP/1.1" 200 3525 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/605.1 (KHTML, like Gecko) Version/11.0 Safari/605.1 Ubuntu/16.04 (3.18.11-0ubuntu1) Epiphany/3.18.11"
127.0.0.1 - - [24/Jan/2018:19:14:47 +0200] "GET /icons/ubuntu-logo.png HTTP/1.1" 200 3623 "http://localhost/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/605.1 (KHTML, like Gecko) Version/11.0 Safari/605.1 Ubuntu/16.04 (3.18.11-0ubuntu1) Epiphany/3.18.11"
```

**Analysis:** In this event, the user's client program (web browser) has requested the site *[localhost](https://en.wikipedia.org/wiki/Localhost)*, and the Apache web server (HTTP daemon) has responded with an OK status code. The log snippet reveals

- the client's IP address; in this case it is the same computer (*localhost* / *127.0.0.1*)

- the user ID (just a dash "-" in this case)

- the logging time

- the HTTP method used ([GET](https://www.w3schools.com/tags/ref_httpmethods.asp))

- the retrieved contents, using the server-defined root directory as the root path (*/* and */icons/ubuntu-logo.png*)

- status code 200 ([HTTP_OK](https://ci.apache.org/projects/httpd/trunk/doxygen/group__HTTP__Status.html#ga02e6d59009dee759528ec81fc9a8eeff))

- the response sizes in bytes, 3525 and 3623 (as reported to the client)

- the [HTTP Referer](https://en.wikipedia.org/wiki/HTTP_referer) (sent by the client)

- the [User agent header](https://en.wikipedia.org/wiki/User_agent) reported by the client. According to the agent string, the client browser has been the WebKit-based [Epiphany web browser](https://en.wikipedia.org/wiki/Epiphany_(GNOME)), running on the x86_64 processor architecture. This string can be manipulated on the client end. For example, a desktop web browser can pretend to be a mobile browser.

The default syntax of Apache log files follows the layout described [here](https://httpd.apache.org/docs/2.4/logs.html) under the section 'Common Log Format'.
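
On a Debian/Ubuntu system, the log format actually in use can be checked roughly as follows. The paths below are the usual Debian/Ubuntu defaults and the format strings in the comments are taken from the Apache documentation page linked above, so treat this as a sketch rather than a guaranteed configuration:

```
# Show which LogFormat/CustomLog directives are in effect.
# The "common" format corresponds to: %h %l %u %t "%r" %>s %b
# (client host, identd, user id, time, request line, status code, response size);
# the "combined" format appends the Referer and User-Agent fields seen in the log lines above.
grep -R "LogFormat\|CustomLog" /etc/apache2/apache2.conf /etc/apache2/sites-enabled/ 2>/dev/null
```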

**Failed event example - Apache server**

```
phelenius@my-machine:~$ cat /var/log/apache2/access.log
127.0.0.1 - - [24/Jan/2018:22:30:50 +0200] "GET /this-page-dont-exist HTTP/1.1" 404 510 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/605.1 (KHTML, like Gecko) Version/11.0 Safari/605.1 Ubuntu/16.04 (3.18.11-0ubuntu1) Epiphany/3.18.11"
127.0.0.1 - - [24/Jan/2018:22:30:50 +0200] "GET /favicon.ico HTTP/1.1" 404 500 "http://localhost/this-page-dont-exist" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/605.1 (KHTML, like Gecko) Version/11.0 Safari/605.1 Ubuntu/16.04 (3.18.11-0ubuntu1) Epiphany/3.18.11"
```

**Analysis:** In this event, the user's client program (web browser) has tried to access the site *http://localhost/this-page-dont-exist*, for which the Apache web server (HTTP daemon) has responded with an error code. The log snippet reveals

- the client's IP address; in this case it is the same computer (*localhost* / *127.0.0.1*)

- the user ID (just a dash "-" in this case)

- the logging time

- the HTTP method used ([GET](https://www.w3schools.com/tags/ref_httpmethods.asp))

- the requested contents, using the server-defined root directory as the root path (*/this-page-dont-exist* and */favicon.ico*)

- error code 404 ([HTTP_NOT_FOUND](https://ci.apache.org/projects/httpd/trunk/doxygen/group__HTTP__Status.html#gabd505b5244bd18ae61c581484b4bc5a0))

- the response sizes, 510 and 500 bytes (as reported to the client)

- the [HTTP Referer](https://en.wikipedia.org/wiki/HTTP_referer) (sent by the client)

- the [User agent header](https://en.wikipedia.org/wiki/User_agent) reported by the client. According to the agent string, the client browser has been the WebKit-based [Epiphany web browser](https://en.wikipedia.org/wiki/Epiphany_(GNOME)), running on the x86_64 processor architecture. This string can be manipulated on the client end. For example, a desktop web browser can pretend to be a mobile browser.

The error logs of an Apache server can be found (by default) at _/var/log/apache2/error.log_. HTTP status codes with brief descriptions are defined on [Apache homepage - Common HTTP Status Codes](https://wiki.apache.org/httpd/CommonHTTPStatusCodes).
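
If needed, the error log can be followed live while reproducing a failing request, for example like this (the path is the Debian/Ubuntu default mentioned above):

```
# Print new Apache error log entries as they appear
tail -f /var/log/apache2/error.log
```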

**Other error situations - libraries & typos**

The most common errors I have encountered in Linux desktop usage as a system administrator are various [Permission denied events](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Step_by_Step_Guide/s1-navigating-ownership.html), mostly as a consequence of human error (forgetting sudo or something similar) or various typos in commands. I have compiled a lot of programs from source code, making various library errors practically unavoidable in the long term. These errors typically read: [cannot open shared object file: No such file or directory](https://stackoverflow.com/questions/480764/linux-error-while-loading-shared-libraries-cannot-open-shared-object-file-no-s/480786), meaning an executable can't find the requested _.so_ library file or version in the library path. Both of these error types can be avoided by typing commands precisely and by avoiding compiling software from source (relying instead on official repositories and system-wide package upgrades, assuming that the installed packages are actively maintained).
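
A minimal sketch of how such a missing-library error can be narrowed down; _myprogram_ and _libfoo_ below are placeholder names, not anything from the actual error cases above:

```
# List the shared libraries a binary wants and spot any "not found" entries
ldd $(which myprogram)

# Check whether the library is known to the dynamic linker cache at all
ldconfig -p | grep libfoo

# On a Debian/Ubuntu system, find which installed package (if any) ships the library file
dpkg -S libfoo.so.1
```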

**Other error situations - unsupported Linux OS versions & breaking libraries**

In the long term, Linux distributions such as Ubuntu (or any variant that relies on Ubuntu package repositories, such as Linux Mint) become unusable with newer software. Newer software versions require more recent library versions, which older Ubuntu releases can't provide anymore because their official repositories have been shut down. In this state, either older software must be used (not recommended) on the older operating system, or the system must be upgraded to a more recent release where the newer packages are still available. To some extent, compiling programs from source on an older operating system version is possible, and thus the life cycle of an old operating system installation can be extended for some programs. This approach, however, easily leads to security risks and unfixed bugs, and the security part in particular is essential on Linux server installations.

I have personal experience of using the older Ubuntu 10.04 LTS after its official support had ended. The operating system was in desktop use, not in a server environment, so I accepted some security risks that may have been present in this approach. At the software level, keeping the old Ubuntu installation led to major conflicts with libraries required by modern programs. I had older versions of some of these libraries installed, but the newer versions introduced major changes in function calls and methods, so compiling modern programs against the available libraries was practically impossible (without major code patches). The most common library which broke first was [glibc](https://www.gnu.org/software/libc/). Ironically, some modern programs I couldn't use natively ran via [Wine](https://www.winehq.org). That's right: I couldn't use the Linux versions, but the Windows versions ran flawlessly on my Ubuntu 10.04 LTS.

Since then, I have moved from Ubuntu to Arch Linux. The major reason for this is the rolling release model which Arch Linux uses. Compiling software from source is more flexible and, in theory, I never have to reinstall my operating system again due to "unsupported operating system version" issues.

Based on my personal experience, the rolling release model is not trouble-free and can easily cause program/library conflicts, but resolving these conflicts is flexible for an advanced Linux user. As an Arch Linux user, I must be aware of the various conflicts which different updates and packages can cause, and I need to know about them before I do anything crucial on my operating system. This means that updating software must never be done blindly, and I need to be able to solve the various error situations that might appear in the system. Usually these errors are poorly or misleadingly documented, or not documented at all. Tracking down the root cause of some errors can be very tricky because of the multiple components used by various programs.

In general, some of the most troublesome situations on Arch Linux are transitions from an older API version to a newer one. This has happened to me once: from Qt4 to Qt5. The issue is this: some packages are updated more frequently than others, leading to a mix of two APIs in the system and therefore resulting in a broken user experience.

**Other error situations - a tricky error without clear identification**

For example, I recently bought a new laptop. I use Firefox in a KDE 5 environment. My laptop has Optimus graphics. I have updated all Xorg packages, Mesa drivers, Nvidia drivers and Plasma workspace packages, not forgetting the web browser itself. In some cases the browser just crashes, and not only the browser but the whole desktop, and this may even freeze the laptop so that only a hard reset is a viable solution. In this case, I must try to figure out whether it is Optimus, Mesa, Nvidia, the Plasma workspace (or any library in it) or a browser component (add-on, compile parameters, hardware acceleration) which causes the crash. This issue is very poorly documented. My best bet to track down the issue is to check any logs, especially Xorg & journalctl log events, check all browser add-ons and hardware acceleration, check whether similar issues have been reported for the related packages and specific package versions on various Bugzilla sites, and if patches are available, try them out if needed. Reporting this kind of problem to developers is very troublesome because I must track down the exact component which actually causes the crash. Reporting the issue to random developers as "My Plasma workspace/kwin crashes while using a browser on an Optimus laptop" says practically nothing and may lead to frustration, the report being ignored, and false assumptions. Therefore, I must pin down the root cause myself before any reporting.

**Other error situations - hardware issues**

Linux driver support. There are arguments for and against. Is it good or not? Well, the simple answer: it completely depends on the hardware used. My recently purchased laptop needs at least Linux kernel 4.15. With older kernel versions, the laptop simply doesn't boot into Linux. Therefore, at least Ubuntu 18.04 LTS or any other distribution providing this kernel must be installed, or I must compile this kernel version (or newer) from source myself for the target hardware.
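
The currently running kernel release can be checked quickly, for example:

```
# Print the running kernel release; for this particular laptop it should be 4.15 or newer
uname -r
```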

The main rule here: check which components you need support for (yes, components, individual hardware components), and try to find out which Linux kernel module they may use, or whether the hardware vendor provides any driver support (or whether the vendor generally has no interest in Linux, in which case you can assume no driver support is available for their newer hardware models either, unless it is implemented in the Linux kernel or the drivers are provided by the open source community). In some cases, it's just pure trial and error. If you have physical access to the hardware and are permitted to try Linux on it, your situation is good. In general, the older the hardware, the better the chance that Linux supports all of its components.

If you buy very modern or customized hardware, please consider using a distribution with a rolling release model.

**Other error situations - random errors**

A good friend of mine contacted me recently. He stated that he couldn't log in to his [Gentoo](https://en.wikipedia.org/wiki/Gentoo_Linux) installation. After a short talk on the phone and various approaches, I figured out that the problem had to be in his LightDM display manager, so I advised him to check his _/var/log/lightdm/_ log files. It turned out that the problem was caused by a faulty installation of the LightDM package files (wrong permissions) or by authentication errors in his [PAM policy](https://en.wikipedia.org/wiki/Pluggable_authentication_module). To conclude, all these errors can be very random and result from various causes.

**Other error situations - a tricky error tracked down**

A sample of [journalctl](https://www.loggly.com/ultimate-guide/using-journalctl) output:

- Boot events are logged (kernel, dbus, systemd, other daemon processes, etc.)

- Each log event has a timestamp and the hostname (my-machine).

- In the next field, journalctl shows the name of the system component responsible for the event (process/kernel).

- For each process, the [process identifier](https://en.wikipedia.org/wiki/Process_identifier) (PID) is shown. This identifier is useful, for example, for sending various signals such as [SIGKILL/SIGSTOP](https://en.wikipedia.org/wiki/Signal_(IPC)) to processes.

- After the PID, the message of the responsible component is shown. Erroneous messages are usually, but not always, marked in bold red.

Log events can usually be found either in the user's home directory (custom paths used by programs executed only with user permissions) or in the system-wide directory _/var/log/_. For example, journalctl log events can be saved in the _/var/log/journal_ directory, whereas the user's [X11](https://en.wikipedia.org/wiki/X_Window_System) errors are logged either in the _$HOME/.xsession-errors_ file ([.xsession-errors](https://stackoverflow.com/questions/10222449/what-is-xsession-errors)) or in the _$HOME/.local/share/Xorg/_ directory. Some programs simply use the user's home directory as their log directory (for example, the bash shell logs user command history in the hidden _$HOME/.bash_history_ file).
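
A few journalctl invocations that help when reading such logs. The flags below are standard journalctl options; the unit name and the PID refer to the bluetooth daemon seen in the snapshot further below:

```
# Show only the current boot, only messages of priority "err" or worse
journalctl -b -p err

# Follow a single service live, e.g. the bluetooth service discussed below
journalctl -u bluetooth -f

# Show messages from one process only; 916 is the bluetoothd PID printed in brackets below
journalctl _PID=916
```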

The following journalctl snapshot shows that there is an issue with the bluetooth daemon (the computer used is a laptop, an Asus N56JR).

```
...
1 Jan 24 17:14:04 my-machine systemd[1]: Started LSB: daemon to balance interrupts for SMP systems.
2 Jan 24 17:14:04 my-machine kernel: Bluetooth: BNEP (Ethernet Emulation) ver 1.3
3 Jan 24 17:14:04 my-machine kernel: Bluetooth: BNEP filters: protocol multicast
4 Jan 24 17:14:04 my-machine kernel: Bluetooth: BNEP socket layer initialized
5 Jan 24 17:14:04 my-machine dbus[918]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
6 Jan 24 17:14:04 my-machine bluetoothd[916]: Bluetooth management interface 1.10 initialized
7 Jan 24 17:14:04 my-machine bluetoothd[916]: Failed to obtain handles for "Service Changed" characteristic
8 Jan 24 17:14:04 my-machine whoopsie[985]: [17:14:04] Using lock path: /var/lock/whoopsie/lock
9 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
10 Jan 24 17:14:04 my-machine bluetoothd[916]: Error adding Link Loss service
11 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
12 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
13 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
14 Jan 24 17:14:04 my-machine bluetoothd[916]: Current Time Service could not be registered
15 Jan 24 17:14:04 my-machine bluetoothd[916]: gatt-time-server: Input/output error (5)
16 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
17 Jan 24 17:14:04 my-machine bluetoothd[916]: Not enough free handles to register service
18 Jan 24 17:14:04 my-machine bluetoothd[916]: Sap driver initialization failed.
19 Jan 24 17:14:04 my-machine bluetoothd[916]: sap-server: Operation not permitted (1)
20 Jan 24 17:14:04 my-machine cron[991]: (CRON) INFO (Running @reboot jobs)
21 Jan 24 17:14:04 my-machine systemd[1]: Starting Hostname Service...
```

The bluetooth error messages are on lines 7-19 (line 8 is not an error message).

**journalctl analysis - regression test approach**

The problem seems to be widely known among Ubuntu & Arch Linux users, according to a short search on the internet. There is a [launchpad report](https://bugs.launchpad.net/ubuntu/+source/bluez/+bug/1490349), but no simple answer is given there. There is a suggestion to downgrade the bluez package on an [Arch Linux forum](https://bbs.archlinux.org/viewtopic.php?id=195886) thread. The [AskUbuntu](https://askubuntu.com/) website has many threads about this issue.

These kinds of problems must be examined carefully. Once the software component has been tracked down, the preferred method is a regression test for that component. In a regression test, the commit or commits which break the desired functionality are hunted down. The tricky part of a regression test is to do it as soon as the problem has been identified: the more the working package version and the faulty one differ from each other, the more effort is required to pin down the commits.

Another dilemma is the various dependencies: in this case, we can blame the bluez package. But is it actually bluez which broke the functionality? Or is it a dependency required by the bluez package? Or a dependency of a dependency? If tracking down the root cause of a problem leads to this situation, it's like looking for the correct path in a giant tree or a dense bush. One crucial examination can still be done: is the problem present in some other program, too? If it is, then we can exclude all packages from our search except the common packages required by the two individual programs. Of course, this is not foolproof either, because program A may depend only on a small part of a dependency package (a simple function call) whereas program B may utilize many more functions of the same dependency package.

The basic principle of a regression test is to compile the suspected package from source in two different versions: one where the problem is present, and another where it is not. We continue compiling versions between these two until we can pick out the individual commit or commits which likely cause the breakage in program functionality. To put it simply: it is an iteration process. The [git bisect](https://git-scm.com/docs/git-bisect) method provides a way to do this, for example.
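
A rough sketch of that iteration with git bisect, assuming the package builds straight from its git tree; the "good" version tag and the build/test step are placeholders, not verified values for this particular bug:

```
git clone https://git.kernel.org/pub/scm/bluetooth/bluez.git && cd bluez
git bisect start
git bisect bad            # the currently checked-out (broken) revision
git bisect good 5.36      # a tag/commit known to work, e.g. a suggested downgrade target

# git now checks out a revision halfway in between; build and test it,
# then mark the result and repeat until the offending commit is printed:
git bisect good           # ...or: git bisect bad
git bisect reset          # return to the original branch when done
```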

Once the problematic commit or commits are known, we can fix the code ourselves and, for projects where the code is openly available, we can make a [pull request](https://help.github.com/articles/about-pull-requests/) which may be accepted by an upstream developer, or in some cases we can create our own [fork repository](https://confluence.atlassian.com/bitbucket/forking-a-repository-221449527.html) of the package/component/program. Therefore, committing [a patch](https://www.thegeekstuff.com/2014/12/patch-command-examples/) or a patchset to the source code is possible.
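
For example, once a fix exists locally, a patch file can be produced and shared roughly like this (the file name is a placeholder):

```
# Inside the git source tree, after editing the code:
git diff > fix-gatt-handles.patch      # plain diff of uncommitted changes
# or, after committing locally:
git format-patch -1 HEAD               # produces a 0001-<subject>.patch file

# Someone else can then apply it in their own copy of the source tree:
patch -p1 < fix-gatt-handles.patch
```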

**journalctl analysis - our bluez package**

In the bluez error case, we should check all relevant error messages written by the bluetooth daemon to the STDERR output. By analysing the [BlueZ source code](https://github.com/pauloborges/bluez) ([alternative link](https://git.kernel.org/pub/scm/bluetooth/bluez.git)) we can see that the message present in the journalctl log snapshot can be found in the source code as well. In more detail, it appears in the following source code files (C code):

```
phelenius@my-machine:~/bluez-master$ grep -Ril "Not enough free handles to register service"
profiles/proximity/reporter.c
plugins/gatt-example.c
attrib/gatt-service.c
```

Tracking down the problem even further may require analysis of the relevant parts of these files, in addition to regression testing. Of course, the hardware components on which the error is present must be taken into account, too.
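
The matching lines can be inspected in context, for example like this (the file path comes from the grep output above; the context-line counts are just a convenient window):

```
# Show each match with its line number and a few lines of surrounding C code
grep -n -B2 -A6 "Not enough free handles to register service" attrib/gatt-service.c
```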

**Other error situations - A simple test case (command which succeeds as root but fails otherwise)**

The following command leads to an error situation (STDERR) if executed with normal user rights. If you run this command in a root shell, it succeeds (STDOUT). Switch to a root shell by executing _su root_ in your shell environment.

**NOTE!** If you use a root shell for running any script, be **careful** and **always** be sure of what you are about to do. You can log out from the root shell with the _exit_ command. Do not run scripts from untrusted sources as root.

```
for i in {0..4}; do touch $HOME/$i; mv $HOME/$i /usr/ 2>/dev/null && echo "Okay, you can do that. You are the god of the system" || echo "Sorry, no way. Do not even try that. You are not a god on \"$(hostname)\"" && if [[ -f $HOME/$i ]]; then rm $HOME/$i; fi ; done
```

Command description:

1. The command creates five individual files (0, 1, 2, 3, and 4) in a for loop in the current user's home directory. For root, the folder path is simply _/root_.

2. Each created file is moved from the $HOME directory to the system folder _/usr_. In case of an error, the default error message (STDERR) is written to the _null_ device _/dev/null_, which basically means that we don't print the default error message. The number _2_ stands for STDERR in this shell environment, and _>_ means "write to". The main idea is that we don't want to see the default error message if the _mv_ command fails. Instead, we handle error situations in the ways described in steps 4 and 5.

3. For each created file, if the _mv_ command succeeds (STDOUT), do what is stated after _&&_ and before _||_ (echo "Okay..."). More about this syntax: [Bash Reference Manual](http://www.gnu.org/software/bash/manual/bashref.html#Lists)

4. For each created file, if the _mv_ command fails (STDERR), execute what is stated after the _||_ symbols (echo "Sorry..."). In practice, you end up in this scenario if the current user doesn't have a home folder defined in the $HOME variable, or if the current user doesn't have write permission to the system folder _/usr_. The most common cause for the error is the latter one (Permission denied), because write permission in the _/usr_ directory is reserved for the _root_ user only.

5. For each file, the error branch includes a command call _$(hostname)_. This command call prints the output of the _hostname_ command which, in our case, is the computer name. [Character escapes](http://www.gnu.org/software/bash/manual/bashref.html#Escape-Character) (backslashes _\_) must be used if we want to use double quotes inside the double quotes of our _echo_ command. With escapes, we basically tell the _echo_ command that a " symbol, if preceded by \, must be included in the string itself and is not the end of the string. An alternative way to write the _echo_ argument is: echo 'my "clause"'. Note the single quotes here. They tell the _echo_ command that the string starts and ends with single quotes, so we don't have to escape our double quotes. This is a basic principle for any bash shell command, including _grep_ (for example: cp --help | grep -i "\-R") etc.

Why do we write _$(hostname)_ and not simply _hostname_? We simply use the bash internal [Command Substitution feature](http://www.gnu.org/software/bash/manual/bashref.html#Command-Substitution).

6. For each file, we do our last part: we check whether the file exists in the current user's home directory. If the file ($HOME/$i) exists, we remove it. We end up in this situation mostly because of failures in moving the created file into the _/usr_ directory.

7. Because steps 2-6 are executed inside the for loop, we close the loop sequence simply by writing _done_.

Semicolons (;) are used because the script is a so-called _one-liner_, meaning that multiple bash commands are stacked together instead of being written on multiple lines. The script can also be written on multiple lines and saved into a _myscript.sh_ file for further use (see the sketch after this paragraph). Execution rights for the script file are given with the _chmod +x myscript.sh_ command in the case where the user's [umask values](https://wiki.archlinux.org/index.php/umask) do not grant execute permission for a file by default (which is the default case). In addition, we must include a shell definition on the first line of our _myscript.sh_ file. In practice, the definition is one of the following strings: *#!/bin/bash, #!/bin/sh, #!/bin/zsh, #!/bin/tcsh, #!/bin/csh, #!/bin/ksh*, etc. In our case, it is _#!/bin/bash_.
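
The same one-liner written as a multi-line _myscript.sh_, functionally equivalent to the command above (the if/else form replaces the && / || chain for readability):

```
#!/bin/bash
# Try to move five freshly created files from $HOME to /usr and report the outcome.
for i in {0..4}; do
    touch "$HOME/$i"
    # 2>/dev/null hides mv's own error message; we print our own instead
    if mv "$HOME/$i" /usr/ 2>/dev/null; then
        echo "Okay, you can do that. You are the god of the system"
    else
        echo "Sorry, no way. Do not even try that. You are not a god on \"$(hostname)\""
    fi
    # Clean up the file if the move failed and it is still in the home directory
    if [[ -f "$HOME/$i" ]]; then
        rm "$HOME/$i"
    fi
done
```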

More about bash shell functionality can be found in the [Bash Reference Manual](http://www.gnu.org/software/bash/manual/bashref.html). The site is highly recommended for anyone writing advanced bash scripts.

**b)** Optional task, not taught yet: Install an SSH daemon. Try some of the following commands on your own SSH server: ssh-copy-id, sshfs, scp or git. (The easiest command might be scp: ‘scp foo.txt tero@example.com:’)
--------------

**Answer:**

Execute the following:

```
sudo apt-get update && sudo apt-get install openssh-server openssh-sftp-server git sshfs && sudo systemctl start ssh.service && systemctl status ssh.service | grep "Active:"
```

If the SSH daemon service is active, we proceed as follows.

Let's execute a command sequence where we connect to Haaga-Helia's SSH server ([Secure Shell (developed by Tatu Ylönen, 1995)](https://fi.wikibooks.org/wiki/SSH)) and upload a local file _nullfile_ to the current user's home directory on the SSH server. We create this local file with the _touch nullfile_ command. In the following output, all critical address information is masked with the character pattern XXXX.

```
phelenius@my-machine:~$ ssh XXXXXX@haaga-helia.fi
The authenticity of host 'haaga-helia.fi (XX.XX.XX.XX)' can't be established.
RSA key fingerprint is XXXXXX:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'haaga-helia.fi,XX.XX.XX.XX' (RSA) to the list of known hosts.
XXXXXX@haaga-helia.fi's password:
Last login: Tue Oct 31 19:25:16 2017 from XX-XX-XX-XX-XXXX.fi
[XXXXXX@haaga ~]$ pwd
/u10/XXXXXX
[XXXXXX@haaga ~]$ exit
logout
Connection to haaga-helia.fi closed.
phelenius@my-machine:~$ touch nullfile
phelenius@my-machine:~$ scp nullfile XXXXXX@haaga-helia.fi:/u10/XXXXXX/
XXXXXX@haaga-helia.fi's password:
nullfile
100%0 0.0KB/s 00:00
phelenius@my-machine:~$ ssh XXXXXX@haaga-helia.fi
XXXXXX@myy.haaga-helia.fi's password:
Last login: Wed Jan 24 23:41:44 2018 from XX-XX-XX-XX-XXXX.fi
[XXXXXX@haaga ~]$ ls |grep nullfile
nullfile
[XXXXXX@haaga ~]$ exit
logout
Connection to haaga-helia.fi closed.
phelenius@my-machine:~$
```

Other locally installed SSH-related commands can be listed with the command *ls /usr/bin |grep ssh*
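
The other commands mentioned in the assignment could be tried roughly as follows; the user/host follow the masking convention above, and the mount point directory is a placeholder:

```
# Copy the local public key to the server so that later logins need no password
ssh-copy-id XXXXXX@haaga-helia.fi

# Mount the remote home directory locally over SSH, inspect it, then unmount it
mkdir -p ~/remote-home
sshfs XXXXXX@haaga-helia.fi: ~/remote-home
ls ~/remote-home
fusermount -u ~/remote-home
```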

**c)** Create an apt-get command of your dreams: one single command or one-liner which installs your favorite applications.
--------------

**Answer:**

Let's install the following packages (with apt-get install) and, equally, remove the listed unwanted packages (with apt-get purge) and clean up obsolete downloaded package files (with apt-get autoclean):

```
sudo apt-get update && sudo apt-get -y install \
apache2 \
wireshark-gtk \
fail2ban \
ettercap-text-only \
varnish \
mariadb-server \
mariadb-client \
php7.0 \
epiphany-browser \
openssh-server \
openssh-sftp-server \
phpmyadmin \
gnome-terminal \
git \
cmake \
gedit \
aptitude \
build-essential \
vlc \
gpaint \
transmission-cli \
wcalc && \
sudo apt-get -y purge --remove firefox firefox-locale-en mousepad xfce4-terminal catfish onboard gnome-mines gnome-sudoku && \
sudo apt-get autoclean
```

**d)** Install three new command line programs using your command line package manager. Try each of these programs in their target environment and purpose.
--------------

**Answer:**

Let's pick the following CLI (command line) programs:

- *[lynx](https://en.wikipedia.org/wiki/Lynx_(web_browser)) - a classic non-graphical (text-mode) web browser*
- *[libimage-exiftool-perl](https://en.wikipedia.org/wiki/Exiftool) - a library and program to read and write meta information in multimedia files*
- *[fakeroot](https://wiki.debian.org/FakeRoot) - a tool for simulating superuser privileges*

The above package descriptions can be viewed with the _aptitude show package_ command once the target package name is known and _aptitude_ is installed.
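
For example:

```
# Print the package description and metadata for one of the chosen packages
aptitude show lynx
```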

Let's install those target packages:

```
sudo apt-get install lynx libimage-exiftool-perl fakeroot
```

**Lynx** is a text-based web browser. **Exiftool** is a CLI tool for processing, examining and manipulating Exif metadata present in audio and image files. **fakeroot** is, as the name suggests, a program which creates an environment where the current user appears to be root, although this is not actually true.
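
Roughly the commands behind the screenshots below; the raw image file name is a placeholder:

```
# 1. Browse a site in text mode; quit with the q key
lynx https://www.hs.fi

# 2. Print the Exif metadata of a Canon raw image
exiftool some-photo.CR2

# 3. Enter a fakeroot environment: whoami now claims root, although nothing
#    actually runs with root privileges; leave the environment with "exit"
fakeroot
whoami
exit
```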

**Example runtime screenshots of each program:**

1. Lynx (https://www.hs.fi - front page)

![lynx_img](https://github.com/Fincer/linux_server_setup/blob/master/images/h2-lynx.png)

2. Exiftool (examined CR2 file metadata)

![exiftool_img](https://github.com/Fincer/linux_server_setup/blob/master/images/h2-exiftool.png)

3. fakeroot (asked who am i, entered the fakeroot environment, created a new file in the fakeroot environment and asked who am i again. It is claimed that the current user is root although this is not true, as can be seen when exiting the fakeroot environment)

![fakeroot_img](https://github.com/Fincer/linux_server_setup/blob/master/images/h2-fakeroot.png)