**Analysis:** In this event, the user's client program (a web browser) has tried to access the site **[localhost](https://en.wikipedia.org/wiki/Localhost)**, to which the Apache web server (HTTP daemon) has responded with the OK status code (200). The log snapshot reveals the following:
- the client's IP address; in this case it is the same computer (`localhost` / `127.0.0.1`)
- the [User agent header](https://en.wikipedia.org/wiki/User_agent) reported by the client. According to the agent string, the client browser was the Gecko-based [Epiphany web browser](https://en.wikipedia.org/wiki/Epiphany_(GNOME)) running on the x86_64 processor architecture. This string can be manipulated on the client end. For instance, a desktop web browser can pretend to be a mobile browser.
The default syntax of Apache log files follows the layout described [here](https://httpd.apache.org/docs/2.4/logs.html) under the section 'Common Log Format'.
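For illustration, a single access-log entry roughly follows this structure (a hypothetical example line, not the actual one from this event; the 'combined' variant adds the referer and user-agent fields to the Common Log Format):

```
127.0.0.1 - - [18/Feb/2018:13:55:36 +0200] "GET / HTTP/1.1" 200 3380 "-" "Mozilla/5.0 (X11; Linux x86_64)"
```

The fields are, in order: client IP, identd result, authenticated user, timestamp, request line, status code, response size in bytes, referer and user agent.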
**Analysis:** In this event, the user's client program (a web browser) has tried to access the page `http://localhost/this-page-dont-exist`, to which the Apache web server (HTTP daemon) has responded with an error code. The log snapshot reveals the following:
- the client's IP address; in this case it is the same computer (`localhost` / `127.0.0.1`)
- the [User agent header](https://en.wikipedia.org/wiki/User_agent) reported by the client. According to the agent string, the client browser was the Gecko-based [Epiphany web browser](https://en.wikipedia.org/wiki/Epiphany_(GNOME)) running on the x86_64 processor architecture. This string can be manipulated on the client end. For instance, a desktop web browser can pretend to be a mobile browser.
Error logs of an Apache server can be found (by default) at `/var/log/apache2/error.log`. HTTP status codes with brief descriptions are listed on the [Apache homepage - Common HTTP Status Codes](https://wiki.apache.org/httpd/CommonHTTPStatusCodes).
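The logs can also be inspected quickly from the command line, for instance (paths are the Debian/Ubuntu defaults):

```
# show the latest error log entries
sudo tail -n 20 /var/log/apache2/error.log
# roughly filter requests that received a 404 (Not Found) response
sudo grep " 404 " /var/log/apache2/access.log
```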
#### Other error situations - libraries & typos
The most common errors I have encountered in Linux desktop usage as a system administrator are various [Permission denied events](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Step_by_Step_Guide/s1-navigating-ownership.html), mostly as a consequence of human error (forgetting sudo or something similar) or various typos in commands. I have compiled a lot of programs from source code, making various library errors practically unavoidable in the long term. The typical form of these errors is [Cannot open shared object file: No such file or directory](https://stackoverflow.com/questions/480764/linux-error-while-loading-shared-libraries-cannot-open-shared-object-file-no-s/480786), where an executable can't find a requested `.so` library file or version in the library path. Both of these errors can be avoided by typing commands carefully and by avoiding compiling software from source (relying instead on official repositories and system-wide package upgrades, assuming that installed packages are actively maintained).
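A minimal sketch of how such a missing-library error can be diagnosed (the binary and library locations here are only examples):

```
# which shared libraries does the binary need, and are any of them missing?
ldd ./myprogram | grep "not found"
# refresh the shared library cache after installing a library manually
sudo ldconfig
# temporary workaround: add a custom library location to the search path
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```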
When compiling a program, common problems include
- changes in [The GNU C++ Library API](https://gcc.gnu.org/onlinedocs/libstdc++/api.html), which may break the compilation of a C/C++ program on Linux.
- missing header files (`.h` suffix), located in the same folder as the original `.c` or `.cpp` files or in the `/usr/include` folder or its subfolders (a small example follows this list).
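A sketch of the missing-header case (file and directory names are hypothetical):

```
# compilation fails with: myheader.h: No such file or directory
gcc -c main.c
# check whether the header exists somewhere under /usr/include
find /usr/include -name "myheader.h"
# if the header lives in a non-standard location, point the compiler to it
gcc -I/opt/mylib/include -c main.c
```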
#### Other error situations - unsupported Linux OS versions & breaking libraries
In the long term, Linux distributions such as Ubuntu (or any variant that relies on Ubuntu package repositories, such as Linux Mint) become unusable with newer software. The newer software versions require more recent library versions which older Ubuntu releases can't provide anymore because their official repositories have been shut down. In this state, either older software must be used (not recommended) on the older operating system, or the system must be upgraded to a more recent version where the newer packages are still available. To some extent, compiling programs from source on an older operating system version is possible, and thus the life cycle of an old operating system installation can be extended for some programs. This approach, however, easily leads to security risks and unfixed bugs, and the security aspect is especially essential on Linux server installations.
In general, some troublesome situations on Arch Linux consist of transitions from an older API version to a newer one. This has happened to me once: from Qt4 to Qt5. The issue is this: some packages are updated more frequently than others, leading to a mix of two APIs in a system and therefore resulting in a broken user experience.
#### Other error situations - a tricky error without clear identification
For instance, I recently bought a new laptop. I use Firefox in a KDE Plasma 5 environment. My laptop has Optimus graphics. I have updated all Xorg packages, Mesa drivers, Nvidia drivers and Plasma workspace packages, not forgetting the web browser itself. In some cases the browser just crashes, and sometimes not only the browser but the whole desktop goes down, which may even lead to a laptop freeze where only a hard reset is a viable solution. In this case, I must try to figure out whether it is Optimus, Mesa, Nvidia, the Plasma workspace (or any library in it) or a browser component (add-on, compile parameters, hardware acceleration) which causes the crash. This issue is very poorly documented. My best bet to track down the issue in this case is to check all logs, especially Xorg & journalctl log events, check all browser add-ons and hardware acceleration, check whether similar issues for related packages have been reported on various Bugzilla sites for specific package versions and, if patches are available, try them out if needed. Reporting this kind of problem to developers is very troublesome because I must track down the exact component which actually causes the crash. Reporting the issue to random developers as "My Plasma workspace/kwin crashes while using a browser with an Optimus laptop" says practically nothing and may lead to frustration, ignorance and false assumptions. Therefore, I must pin down the root cause myself before any reporting.
#### Other error situations - hardware issues
Linux driver support: there are arguments for and against. Is it good or not? Well, the simple answer: it completely depends on the hardware used. On my recently purchased laptop, at least Linux kernel 4.15 is needed. With older kernel versions, the laptop simply doesn't boot into Linux. Therefore, at least Ubuntu 18.04 LTS or any other distribution providing this kernel must be installed, or I must compile this kernel version (or a newer one) from source for the target hardware myself.
If you buy very modern or customized hardware, please consider using a distribution with a rolling release model.
#### Other error situations - random errors
A good friend of mine contacted me recently. He stated that he couldn't log in to his [Gentoo](https://en.wikipedia.org/wiki/Gentoo_Linux) installation. After a short talk on the phone and various approaches, I figured out that the problem must be in his LightDM display manager, so I advised him to check his `/var/log/lightdm/` log files. It turned out that the problem was caused by a faulty installation of the LightDM package files (wrong permissions) or by authentication errors in his [PAM policy](https://en.wikipedia.org/wiki/Pluggable_authentication_module). To conclude, all these errors can be very random and result from various causes.
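A rough sketch of the kind of checks done in such a situation (exact file names vary between distributions):

```
# read the display manager's own logs
less /var/log/lightdm/lightdm.log
# check that the relevant files have sane ownership and permissions
ls -l /var/log/lightdm/ /etc/lightdm/
# inspect the PAM configuration used by LightDM
cat /etc/pam.d/lightdm
```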
#### Other error situations - a tricky error tracked down
Sample of [journalctl](https://www.loggly.com/ultimate-guide/using-journalctl):
- In the next field, journalctl shows the name of the system component responsible for the event (a process or the kernel).
- For each process, the [process identifier](https://en.wikipedia.org/wiki/Process_identifier) (PID) is shown. This identifier is useful, for instance, for sending various signals such as SIGKILL/SIGSTOP to processes ([SIGKILL/SIGSTOP...](https://en.wikipedia.org/wiki/Signal_(IPC))).
- After the PID, the message of the responsible component is shown. Erroneous messages are usually, but not always, marked in bold red (see the illustrative line below).
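For illustration, a single journalctl line follows this structure (a purely hypothetical entry, not taken from the snapshot below):

```
# timestamp      hostname  component[PID]: message
Feb 18 20:15:01  mylaptop  exampled[1234]: something went wrong while starting the service
# only error-priority messages from the current boot can be listed with:
journalctl -b -p err
```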
Log events can usually be found either in the user's home directory (custom paths used by programs executed only with user permissions) or in the system-wide directory `/var/log/`. For instance, journalctl log events can be saved in the `/var/log/journal` directory, whereas the user's [X11](https://en.wikipedia.org/wiki/X_Window_System) errors are logged either in the `$HOME/.xsession-errors` file ([.xsession-errors](https://stackoverflow.com/questions/10222449/what-is-xsession-errors)), in the `$HOME/.local/share/Xorg/` directory or in the `/var/log` directory. Some programs simply use the user's home directory as their log or history file directory (for instance, the bash shell logs user history in the hidden `$HOME/.bash_history` dotfile).
The following journalctl snapshot shows that there is an issue with the Bluetooth daemon (the computer used is a laptop known as Asus N56JR).
Bluetooth error messages: lines 7-19. Line 8 is not an error message.
#### journalctl analysis - regression test approach
The problem seems to be widely known among Ubuntu & Arch Linux users, according to a short search on the internet. There is a [launchpad report](https://bugs.launchpad.net/ubuntu/+source/bluez/+bug/1490349), but no simple answer is given there. There is a suggestion to downgrade the bluez package on the [Arch Linux forum](https://bbs.archlinux.org/viewtopic.php?id=195886). The [AskUbuntu](https://askubuntu.com/) website has many threads about this issue.
Another dilemma is the various dependencies: in this case, we can blame the bluez package. But is it actually bluez which broke the functionality? Or is it a dependency required by the bluez package? Or a dependency of a dependency? If tracking down the root cause of a problem leads to this situation, it's like looking for the correct path in a giant tree or a dense bush. One crucial examination can still be done: is the problem present in some other program, too? If it is, then we can narrow the search down to the packages required by both programs and exclude all the others. Of course, this is not foolproof either, because program A can depend on only a small part of a dependency package (a simple function call) whereas program B can utilize many more functions of the same dependency package.
The basic principle of a regression test is to compile the suspected package from source in two different versions: one where the problem is present, and another one where it is not. We continue compiling these two versions until we can pick out the individual commits which likely cause the breakage in program functionality. To put it simply: it is an iterative process. The [Git bisect](https://git-scm.com/docs/git-bisect) method provides a way to do this, for instance.
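A minimal git bisect session could look roughly like this (the version tag below is hypothetical):

```
git bisect start
git bisect bad              # the currently checked-out revision shows the bug
git bisect good v5.40       # a hypothetical older version known to work
# git now checks out a revision halfway in between:
# build it, test the functionality and mark the result
git bisect good             # or: git bisect bad
# repeat until git prints the first bad commit, then clean up
git bisect reset
```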
Once the problematic commit or commits are known, we can fix the code ourselves and, for projects where the code is open for contributions, we can make a [pull request](https://help.github.com/articles/about-pull-requests/) which is accepted by a verified developer, or in some cases, we can make our own [fork repository](https://confluence.atlassian.com/bitbucket/forking-a-repository-221449527.html) of the package/component/program. Therefore, committing [a patch](https://www.thegeekstuff.com/2014/12/patch-command-examples/) or a patchset to the source code is possible.
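Applying such a patch locally is straightforward; for example, inside the unpacked source directory (the patch file name is hypothetical):

```
# apply a patch that was created relative to the project root
patch -p1 < fix-bluetooth-crash.patch
# or, inside a git checkout
git apply fix-bluetooth-crash.patch
```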
#### journalctl analysis - our bluez package
In the bluez error case, we should check all relevant error messages written by the bluetooth daemon to the stderr output. By analysing the [BlueZ source code](https://github.com/pauloborges/bluez) ([alternative link](https://git.kernel.org/pub/scm/bluetooth/bluez.git)) we can see that the message present in the journalctl log snapshot can be found in the source code as well. In more detail, in the following source code files (C code):
```
attrib/gatt-service.c
```
Tracking down the problem even further can require analysis of the relevant parts of these files in addition to regression testing. Of course, hardware components where the error is present must be taken into account, too.
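One quick way to locate the relevant code is to search the source tree for the exact message string seen in the log (the string below is just a placeholder):

```
# inside the cloned bluez source tree
grep -rn "error message seen in journalctl" .
```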
#### Other error situations - A simple test case (command which succeeds as root but fails otherwise)
The following command leads to an error situation (`stderr` data stream) if executed with normal user rights. If you run this command in a root shell, it succeeds (`stdout` data stream). Switch to a root shell by executing `su root` or `sudo -u root bash -c '$SHELL'` in your shell environment.
**NOTE!** If you use a root shell for running any script, be **aware** and **always** be sure what you are about to do. You can log out from the root shell with the `exit` command. Do not run scripts from untrusted sources as root.
```
for i in {0..4}; do touch $HOME/$i; \
mv $HOME/$i /usr/ 2>/dev/null && \
echo "Okay, you can do that. You are the god of the system" || \
echo "Sorry, no way. Do not even try that. You are not a god on \"$(hostname)\"" && \
if [[ -f $HOME/$i ]]; then \
rm $HOME/$i; \
fi ; done
```
Command description:
**1.** The command creates five individual files in a for loop (`0`, `1`, `2`, `3`, and `4`) in the current user's home directory. For root, the folder path is simply `/root`.
**2.** For each created file, move it from the `$HOME` directory to the system folder `/usr`. In case of an error, redirect the default error stream ([stderr](https://en.wikipedia.org/wiki/Standard_streams#Standard_error_(stderr))) to the **null device** `/dev/null`, which basically means that we don't print the default error message. The number `2` stands for `stderr` in this shell environment and `>` stands for "write to". The main idea behind this is that we don't want to see the default error message if the `mv` command fails. Instead, we handle error situations in the ways described in steps 4 and 5.
**3.** For each created file, if the `mv` command succeeds (exit status `0`, normal output stream [stdout](https://en.wikipedia.org/wiki/Stdout)), do what is stated after `&&` and before `||` (echo "Okay..."). More about these syntaxes: [Bash Reference Manual](http://www.gnu.org/software/bash/manual/bashref.html#Lists)
**4.** For each created file, if the `mv` command fails (a non-zero exit status, error output stream [stderr](https://en.wikipedia.org/wiki/Standard_streams#Standard_error_(stderr))), execute what is stated after the `||` symbols (echo "Sorry..."). Practically, you end up in this scenario if the current user doesn't have a home folder defined in the `$HOME` variable or if the current user doesn't have write permissions to the system folder `/usr`. The most common cause for the error is the latter one (Permission denied) because the write permission in the `/usr` directory is reserved only for the `root` user.
**5.** For each file, in the error situation, there is a command call `$(hostname)` included. This command call prints the output of the `hostname` command which, in our case, is the computer name. [Character escapes](http://www.gnu.org/software/bash/manual/bashref.html#Escape-Character) (backslashes `\`) must be used if we want to use double quotes inside the double quotes of our `echo` command. With escapes, we basically tell the `echo` command that the `"` symbol, if preceded by `\`, must be included in the clause itself and is not the end point of our clause. An alternative writing method for the `echo` command is as follows: `echo 'my "clause"'`. Note the single quotes here. Here we tell the `echo` command that the clause starts and ends with single quotes and, therefore, we don't have to escape our double quotes. This is a basic principle for any bash shell command, including `grep` (for instance: `cp --help | grep -i "\-R"`) etc.
Why do we write `$(hostname)` and not simply `hostname`? We simply use bash's internal [Command Substitution feature](http://www.gnu.org/software/bash/manual/bashref.html#Command-Substitution).
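A tiny illustration of the difference:

```
echo "This machine is called hostname"       # prints the literal word "hostname"
echo "This machine is called $(hostname)"    # prints the actual computer name
```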
**6.** For each file, we do our last part: we check whether the file still exists in the current user's home directory. If the file (`$HOME/$i`) exists, remove it. We end up in this situation mostly because of failures in moving the created file into the `/usr` directory.
**7.** Because all steps 2-6 have been executed inside the for loop, we stop our loop sequence by writing simply `done`.
**Semicolons (`;`)** are used because the script is a so-called **one-liner script**, meaning that multiple bash commands are stacked together instead of being written on multiple lines. The script can also be written on multiple lines and saved into a `myscript.sh` file for further usage (a sketch of this form is shown below). Execution rights for the script file are given with the `chmod +x myscript.sh` command in case the user's [umask values](https://wiki.archlinux.org/index.php/umask) do not grant execution permission for a file by default (this is the default case). In addition, we must include a shell definition on the first line of our `myscript.sh` file. Practically, the definition is one of the following strings: `#!/bin/bash`, `#!/bin/sh`, `#!/bin/zsh`, `#!/bin/tcsh`, `#!/bin/csh`, `#!/bin/ksh`, etc. In our case, it is `#!/bin/bash`.
- Alternatively, a syntax like `#!/usr/bin/env bash` can be used as well
- Why do we have to write above line to our executable files? It's a Unix feature known as [shebang line](https://en.wikipedia.org/wiki/Shebang_%28Unix%29)
- Quoted from the linked Wikipedia article:
> In Unix-like operating systems, when a text file with a shebang is used as if it is an executable, the program loader parses the rest of the file's initial line as an interpreter directive; the specified interpreter program is executed, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data. For example, if a script is named with the path `path/to/script`, and it starts with the following line, `#!/bin/sh`, then the program loader is instructed to run the program `/bin/sh`, passing `path/to/script` as the first argument.
More about bash shell functionality can be found here: [Bash Reference Manual](http://www.gnu.org/software/bash/manual/bashref.html). The site is highly recommended for anyone writing advanced bash scripts.
- Where do `stdout`, `stderr` and `stdin` come from? Read more about them on [File descriptor - Wikipedia](https://en.wikipedia.org/wiki/File_descriptor)
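As a sketch, the same one-liner written as a multi-line `myscript.sh` file with a shebang (the `&&`/`||` chain is rewritten here as an explicit if/else):

```
#!/bin/bash
# myscript.sh - multi-line version of the one-liner above
for i in {0..4}; do
    touch "$HOME/$i"
    if mv "$HOME/$i" /usr/ 2>/dev/null; then
        echo "Okay, you can do that. You are the god of the system"
    else
        echo "Sorry, no way. Do not even try that. You are not a god on \"$(hostname)\""
    fi
    # clean up the file if it is still in the home directory
    if [[ -f "$HOME/$i" ]]; then rm "$HOME/$i"; fi
done
```

The file is made executable and run with `chmod +x myscript.sh && ./myscript.sh`.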
**b)** Optional task, not taught yet: Install SSH daemon. Try some of the following commands on your own SSH server: ssh-copy-id, sshfs, scp or git. (The easiest command might be scp: ‘scp foo.txt tero@example.com:’)
If the SSH daemon service is currently active, we do as follows:
Let's execute a command sequence where we connect to Haaga-Helia's SSH server ([Secure Shell (developed by Tatu Ylönen, 1995)](https://fi.wikibooks.org/wiki/SSH)) and upload a local file `nullfile` to the current user's home directory on the SSH server. We create this local file with the `touch nullfile` command. In the following output, all critical address information is masked with the character pattern XXXX.
```
Connection to haaga-helia.fi closed.
phelenius@my-machine:~$
```
Other local SSH-related commands can be found with the `ls /usr/bin | grep ssh` or `ls /usr/bin/*ssh*` command.
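For the other commands mentioned in the task, the basic usage looks roughly like this (`user@example.com` is a placeholder for a real SSH account):

```
# copy your public SSH key to the server to enable key-based login
ssh-copy-id user@example.com
# copy a local file to the server's home directory
scp foo.txt user@example.com:
# mount a remote directory locally over SSH (requires the sshfs package)
mkdir -p ~/remote && sshfs user@example.com:/home/user ~/remote
# unmount it when done
fusermount -u ~/remote
```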
**c)** Create an apt-get command of your dreams: one single command or one-liner which installs your favorite applications.
--------------
Let's pick up the following CLI (command line) programs:
- *[libimage-exiftool-perl](https://en.wikipedia.org/wiki/Exiftool) - library and program to read and write meta information in multimedia files*
- *[fakeroot](https://wiki.debian.org/FakeRoot) - tool for simulating superuser privileges*
The abovementioned package descriptions can be extracted with the `aptitude show <package>` command once the target package name is known and `aptitude` is installed.
Let's install those target packages:
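One possible form of such an installation one-liner, using the package names given above (a sketch):

```
sudo apt-get update && sudo apt-get install -y lynx libimage-exiftool-perl fakeroot
```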
- **Lynx** is a text-based command-line web browser.
- **Exiftool** is a CLI tool for processing, examining and manipulating exif metadata present in audio and image files. This is used mainly for analyzing images and manipulating their metadata.
- **fakeroot** is, as the name suggests, a program which tricks the current user by giving a false impression of being the root user. This is used mainly when building software packages from source code.
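Quick usage examples of these three tools (the file name and URL are illustrative):

```
# open a page in the text-based browser
lynx https://example.com
# print the metadata of an image file
exiftool photo.jpg
# inside a fakeroot environment the user appears to be root
fakeroot whoami
```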