From Scattered Log Files To Centralized Logging

Good practice usually involves logging a hint when something is not working. Otherwise, I have to take the time to gather basic information after a problem occurs. For that reason, I used to write a log file in /var/log for each application and even for each scheduled task. But this came back to bite me when I had to check a lot of separate files every time I needed to troubleshoot. To make troubleshooting easier, I decided to change my setup to something simpler and more user friendly. This article describes what I found.

Logging To Syslog

As mentioned above, I had too many log files to check when I needed to troubleshoot. In addition, each active file had to be maintained, which took even more time: I had to add each file to the rotation system and configure how many copies to keep. In my new approach, I created a single central log where I recorded every problem I could find. From the moment a script or administrative task runs, every error that occurs is recorded in one place.

This didn't take much effort because I didn't have to reinvent the wheel. Syslog became my choice because it is the de facto standard for logging messages and is present on most Unix and Linux systems. It took a few hours, but I ported most of my own scripts to send their output to syslog. In the end they looked like this:

#!/bin/sh
BKDATE=$(date +"%d-%b-%Y-%H%M")
BKDIR=/home/dbbackups
cd "$BKDIR"
pg_dump -U user -d database > db/database-$BKDATE.sql
tar -zcf /home/dbbackups/backup-$BKDATE.tar.gz db/database-$BKDATE.sql
[ "$?" != "0" ] && logger -t backup "$0 - PostgreSQL Error" || :

The key change is the last line, which makes logger send a message directly to syslog if the database backup fails.
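To see the idiom in isolation, here is a minimal, self-contained sketch. It uses `false` as a stand-in for a failing `pg_dump`, and `echo` in place of the real `logger -t backup` call, so it runs anywhere without a syslog daemon:

```shell
#!/bin/sh
# Run a command; report to syslog only when it fails.
# `echo` stands in for `logger -t backup` so the sketch needs no syslog daemon.
log_on_failure() {
  "$@"
  [ "$?" != "0" ] && echo "logger -t backup would send: $1 failed" || :
}

log_on_failure false   # fails: prints the would-be syslog message
log_on_failure true    # succeeds: prints nothing
```

The trailing `|| :` keeps the script's own exit status clean when the command succeeds, which matters under `set -e` or when cron inspects the exit code.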

I also scheduled its execution, with logging, in the same crontab file:

10 20 * * 1 /home/dbbackups/makebackups.sh 2>&1 | /usr/bin/logger -t psqlbackup
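For reference, the five schedule fields of a crontab entry like this break down as follows (an annotated sketch; adjust the path to your own script):

```shell
# minute  hour  day-of-month  month  day-of-week (1 = Monday)
# 10      20    *             *      1
# i.e. every Monday at 20:10, run the backup and tag its output for syslog
10 20 * * 1 /home/dbbackups/makebackups.sh 2>&1 | /usr/bin/logger -t psqlbackup
```

The `2>&1` merges stderr into stdout so error messages also reach logger, and `-t psqlbackup` tags every line so it is easy to find later.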

Whenever there was a problem with the script itself, or with the cron job, I only had to check one file (/var/log/messages). From that point on, I started configuring almost every application to send logs to syslog whenever possible: fewer individual log files, fewer places to look.
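As a small illustration of why this helps, here is a self-contained sketch with fabricated sample lines in a temporary file standing in for /var/log/messages. One grep over one file replaces checking a directory of per-script logs:

```shell
#!/bin/sh
# Fabricated sample entries standing in for /var/log/messages.
LOG=$(mktemp)
printf '%s\n' \
  'Jan 10 20:10:05 db1 psqlbackup: ./makebackups.sh - PostgreSQL Error' \
  'Jan 10 21:00:01 db1 CRON[2211]: (root) CMD (/home/dbbackups/makebackups.sh)' \
  'Jan 10 21:00:09 db1 kernel: eth0: link up' > "$LOG"

# One search over one file finds the tagged failure.
grep 'psqlbackup' "$LOG"
```

Because every entry carries its tag and hostname, filtering by script, host, or time window is a one-liner.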

Remote Logging

I have small clients with just a couple of servers, but companies these days often have many more, and you need to be able to monitor every one of them (whether physical or virtual, or both). If the company is smaller, it is very likely all its servers have standard configurations, at least in terms of software. This means you only need to manage one or two specific operating systems, perhaps with similar versions of the software. Support tasks stay largely the same once the initial setup is done, and the network equipment may be limited to a few additional devices.

Later, as the business grows, the number of applications, monitoring tasks, and troubleshooting targets increases, bringing more potential points of failure. Now the challenge is not to track one file across a few servers, but to track files across many nodes: all the devices, the applications, the communication between nodes, security, and so on. The environment evolves and diversifies, especially in terms of hardware and software.

For this particular scenario, I use remote logging, specifically rsyslog. This way, I can forward every message that arrives locally to a remote log server that collects logs from every node and from almost every application. Now I have a better way to debug errors, because I know what went wrong, where, and when. I can track any application, almost any script, any process. I'm much better placed to correct mistakes.

Data Transfer Configuration

*.* action(type="omfwd" target="192.0.2.2" port="10514" protocol="tcp"
action.resumeRetryCount="100"
queue.type="linkedList" queue.size="10000")
*.info;mail.none;authpriv.none /var/log/messages
authpriv.* /var/log/secure
mail.* /var/log/maillog
cron.* /var/log/cron
*.emerg :omusrmsg:*
uucp,news.crit /var/log/spooler
local7.* /var/log/boot.log

  • The first three lines forward all messages to 192.0.2.2 on port 10514 using plain TCP, retrying up to 100 times before discarding a message if the destination is unreachable.
  • The fourth line tells rsyslog to log all messages of level info or higher (except mail and private authentication messages) to /var/log/messages.
  • The fifth line stores private authentication messages in a restricted file, /var/log/secure.
  • The sixth and seventh lines define the destination files for mail and cron messages.
  • The eighth line sends emergency messages to every logged-in user.
  • The ninth line stores critical-level uucp and news errors in a dedicated file, /var/log/spooler.
  • The tenth line stores boot messages in /var/log/boot.log.
Data Reception Configuration

On the receiver's side, the main lines in the configuration are:

module(load="imtcp")
input(type="imtcp" port="514")
module(load="imudp")
input(type="imudp" port="514")
    

  • The first line loads the module associated with the TCP protocol.
  • The second line opens TCP port 514 to receive TCP data.
  • The third line loads the module associated with the UDP protocol.
  • The fourth line opens UDP port 514 to receive UDP data.

Another positive aspect of this setup is that you can also receive logs from hardware devices, such as routers and firewalls, as long as they can send events over the network in syslog format.
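For the curious, what actually travels to port 514 in classic BSD syslog (RFC 3164) is a short text line whose leading PRI field encodes facility and severity as facility × 8 + severity. A sketch that builds one such line:

```shell
#!/bin/sh
# Build a classic BSD-syslog line: "<PRI>TIMESTAMP HOST TAG: message".
facility=1    # user
severity=6    # info
pri=$((facility * 8 + severity))   # 1*8 + 6 = 14
printf '<%d>%s %s %s\n' "$pri" "$(date '+%b %e %H:%M:%S')" "myhost" "myscript: test message"
```

This is why devices from different vendors can all feed the same receiver: the format is plain text with a well-known prefix.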

Additional Requirements

Consolidating all types of log messages from different sources is expensive. However, I think it really pays off, given what the central log server offers:

1. The ability to use different plug-ins to handle different log formats.
2. The ability to filter all incoming messages, separating real problems from routine noise.
3. Enough processing power for robust log analysis.
4. Enough space to store current and historical data.
5. The ability to use a cluster to store all your logs.

You will need to monitor and maintain your logging infrastructure, but the effort is worth it.

Log Analysis And Alerts

Once you've collected data from your business applications, servers, and even hardware devices, you need to build in some automation so you know when to act. For example, you and your team need to be notified when your company's application is down, when your cron jobs fail, or when your firewall detects an unauthorized access attempt. These events require an immediate response, so you need real-time analytics to trigger and receive alerts.
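As a minimal illustration (not a production alerting system), this sketch scans fabricated log lines for a failure pattern and turns each hit into an alert line; a real setup would feed matches to mail, a chat webhook, or a monitoring tool:

```shell
#!/bin/sh
# Fabricated sample log; in practice you would scan the central log on the server.
LOG=$(mktemp)
printf '%s\n' \
  'Jan 10 20:10:01 db1 psqlbackup: makebackups.sh - PostgreSQL Error' \
  'Jan 10 20:15:00 web1 nginx: GET / 200' > "$LOG"

# Turn each matching line into an alert (here just printed).
alerts=$(grep 'PostgreSQL Error' "$LOG" | sed 's/^/ALERT: /')
echo "$alerts"
```

Run from cron every few minutes, even something this simple closes the gap between "it failed" and "someone knows it failed".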

You also have the activity history of your apps for today, last week, even last month or last year. Which app fails most often? There is an answer. You will be able to see suspicious network activity, website visitors, and which users visited which sites. You know which process became unavailable, and when. All the data belongs to you, and your imagination is the limit of what you can learn from it. Historical data can be a very good training base for both people and machines, which can help you proactively troubleshoot and improve the end-user experience.

Closing Words

I just showed you how I went from a set of log files for a small client to a full logging and monitoring mechanism. A setup like this allows you to take control of your processes, from simple cron jobs to software and hardware alerts, and improves your troubleshooting skills. If your business already has advanced IT systems with many different logging devices, consider using a cloud solution to reduce the amount of maintenance required. With that in mind, learn how SolarWinds