
4/18/2023

Install Docker on Ubuntu 22.04

nano install_docker.sh

#!/bin/bash
# Update your existing list of packages
sudo apt update
# Install a few prerequisite packages which let apt use packages over HTTPS
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
# Add the GPG key for the official Docker repository to your system
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add the Docker repository to APT sources
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update the package database with Docker packages from the newly added repo
sudo apt update
# Make sure you are about to install from the Docker repo instead of the default Ubuntu repo
sudo apt-cache policy docker-ce
# Finally, install Docker
sudo apt install -y docker-ce
# Enable Docker to start on boot
sudo systemctl enable docker
# Start Docker service
sudo systemctl start docker
echo "Docker has been installed and started!"

chmod +x install_docker.sh && ./install_docker.sh
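The repository line the script builds is assembled from two substitutions. A minimal sketch with example values (the script derives them at run time; "jammy" is the Ubuntu 22.04 codename):

```shell
arch="amd64"       # normally $(dpkg --print-architecture)
codename="jammy"   # normally $(lsb_release -cs)
# This is the line that ends up in /etc/apt/sources.list.d/docker.list
echo "deb [arch=$arch signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $codename stable"
```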

Reboot the server.

5/15/2020

Do you need to test your WAF / Front Door?

Checking that your WAF / Front Door / Cloudflare is working


Using this in your request will trigger a block, and you can see it in the log.



;-)

4/28/2020

DSC Azure automation Linux

Do you have a problem in your DSC configuration?


"Failed to apply the configuration. These resources produced errors: [nxFile]MyFolder. Detailed error information can be found in the log file.\"}"]




4/09/2020

Nginx solution to check http_stub_status_module



Add an internal Nginx configuration to use http_stub_status_module.
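A minimal sketch of such an internal status block, assuming the stub_status module is compiled in (it is in most distribution packages); the listen port and the /nginx_status path are arbitrary choices:

```nginx
server {
    listen 127.0.0.1:8081;

    location = /nginx_status {
        stub_status on;
        access_log  off;
        # Keep the counters internal
        allow 127.0.0.1;
        deny  all;
    }
}
```

After a reload, `curl http://127.0.0.1:8081/nginx_status` returns the active-connection and request counters.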


4/07/2020

Nginx reverse proxy configuration

A simple Nginx reverse proxy configuration


Add the file proxy.conf in /etc/nginx/conf.d/:


proxy_http_version 1.1;

proxy_set_header Host               $http_host;
proxy_set_header X-Real-IP          $remote_addr;
proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto  $scheme;
proxy_set_header Upgrade            $http_upgrade;
proxy_set_header Connection         "Upgrade"; 



Simple configuration for a domain:

server {
  listen 80;
  listen [::]:80;

  server_name example.com;

  location / {
    include    conf.d/proxy.conf;
    proxy_pass http://192.168.123.321:3000/;
  }
}







4/01/2020

How to pass the value of a variable in an ssh command

How to use a local or remote variable in another script


Sample script:


#!/bin/bash
## Find the newest backup file in the folder and store its name in the variable DBFILE
DBFILE=$(ssh -t serverB@192.0.0.1 'sudo find /temp/backup.sql -type f -mtime -1 -name "*.sql"')

### Just show the file name
echo "$DBFILE"
### Send the local variable to the remote server
echo "here is DBFILE $DBFILE"
#ssh -t serverB@192.0.0.1 "sudo cp $DBFILE /home/user/"
## Fetch the remote file locally using the variable
#/usr/bin/scp serverB@192.0.0.1:/home/user/$DBFILE /opt/temp
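The key subtlety above is quoting: with double quotes, the local shell expands the variable before ssh sends the command; with single quotes, the remote shell does. A minimal sketch using `bash -c` as a stand-in for the remote shell:

```shell
DBFILE="backup.sql"
# Double quotes: $DBFILE is expanded locally, then the literal value is sent
bash -c "echo $DBFILE"
# Single quotes: the name travels verbatim; the "remote" shell expands it.
# DBFILE is not exported, so the child shell prints an empty line.
bash -c 'echo $DBFILE'
```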


10/16/2019

Compress MP4 files


How to compress MP4 files.

find . -type f -iname "*.mp4" -exec bash -c 'FILE="$1"; ffmpeg -i "${FILE}" -s 1280x720 -acodec copy -y "${FILE%.*}.shrink.mp4";' _ '{}' \;
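The output name is built with shell suffix removal, and case matters: `${FILE%.mp4}` will not strip an uppercase `.MP4`, while `${FILE%.*}` strips any extension. A quick check:

```shell
FILE="holiday.MP4"
# ${FILE%.mp4} only strips a literal lowercase suffix; nothing matches here
echo "${FILE%.mp4}"
# ${FILE%.*} strips the shortest trailing ".something", whatever its case
echo "${FILE%.*}.shrink.mp4"
```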

Varnish example for cache and optimization

Varnish cache optimization
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.


# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;


# Default backend definition. Set this to point to your content server.
backend default {
.host = "127.0.0.1";
.port = "8080";
}


sub vcl_recv {
# Happens before we check if we have this in cache already.
#
# Typically you clean up the request here, removing cookies you don't need,
# rewriting the request, etc.





# Properly handle different encoding types
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|woff)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm (aka crappy browser)
unset req.http.Accept-Encoding;
}
}


# Cache files with these extensions
if (req.url ~ "\.(js|css|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|woff)$") {
unset req.http.cookie;
return (hash);
}


# Don't cache anything that's on the blog page or that's a POST request
if (req.url ~ "^/blog" || req.method == "POST") {
return (pass);
}


# This is Laravel specific: we have session-monster, which sets a no-session header if we don't really need the session cookie.
# Check for this and unset the cookies if not required
# Except if it's a POST request
if (req.http.X-No-Session ~ "yeah" && req.method != "POST") {
unset req.http.cookie;
}


return (hash);
}


sub vcl_backend_response {
# Happens after we have read the response headers from the backend.
#
# Here you clean the response headers, removing silly Set-Cookie headers
# and other mistakes your backend does.






# set beresp.ttl = 30s;
set beresp.grace = 24h;


if (beresp.ttl < 120s) {
unset beresp.http.cookie;
unset beresp.http.Set-Cookie;
set beresp.ttl = 120s;
unset beresp.http.Cache-Control;
}






}


sub vcl_deliver {
# Happens when we have all the pieces we need, and are about to send the
# response to the client.
#
# You can do accounting or modifying the final object here.
}

Compress PDF files in batch on Linux



#### gs
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

### pdf2ps && ps2pdf
pdf2ps input.pdf output.ps && ps2pdf output.ps output.pdf

### Webservice
http://compress.smallpdf.com/de
For Linux
#!/bin/sh
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/default -dNOPAUSE -dQUIET -dBATCH -dDetectDuplicateImages -dCompressFonts=true -r150 -sOutputFile="$(dirname "$1")/compress_$(basename "$1")" "$1"

Save it as compresspdf.sh, then run it on a single file or on every PDF in a tree:
./compresspdf.sh file.pdf
find -type f -name "*.pdf" -exec ./compresspdf.sh {} \;
A variant with a tunable image-quality factor, taking input and output file names:

#!/bin/sh
INPUT=$1; shift
OUTPUT=$1; shift
GS_BIN=/usr/bin/gs
QFACTOR="0.40"

# Image Compression Quality
#
# Quality HSamples VSamples QFactor
# Minimum [2 1 1 2] [2 1 1 2] 2.40
# Low     [2 1 1 2] [2 1 1 2] 1.30
# Medium  [2 1 1 2] [2 1 1 2] 0.76
# High    [1 1 1 1] [1 1 1 1] 0.40
# Maximum [1 1 1 1] [1 1 1 1] 0.15 

${GS_BIN} -dBATCH -dSAFER -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile="${OUTPUT}" -c "<< /ColorImageDict << /QFactor ${QFACTOR} /Blend 1 /HSamples [1 1 1 1] /VSamples [1 1 1 1] >> >> setdistillerparams" -f "${INPUT}"

For Windows
echo off
:: PDF to PCL Converter. Needs Ghostscript locally, and installed here: "C:\Program Files\gs\gs9.02\bin\gswin32c.exe"
:: Other versions of GS will work, but the script will have to be updated.
:: Users can enter the breakpoint for the PCL files. If none is provided, 100 is used.
cls
echo This program will take all the PDF files in the current directory
echo and group them into PCL files. If you ran
echo this from the command line, you could enter the breakpoint for
echo number of PDF files per PCL. If left blank or not entered, the
echo program defaults to 100.
echo.
echo If you want to continue, hit any key and please wait for the
echo program to complete processing (could take a while).
echo If you want to stop now, hit CTRL-C and exit out of the batch.
pause

setlocal enabledelayedexpansion
set INC=%1
if "%1" == "" set INC=100
SET GSS=
SET CTR=0
SET FL=1

for %%i in (*.pdf) DO (
SET GSS=!GSS! "%%i"
SET /a CTR+=1
echo !CTR! COUNTER !FL! OUTPUTFILE NUMBER
if !CTR! == %INC% (

"C:\Program Files\gs\gs9.02\bin\gswin32c.exe" -dNOPAUSE -dBATCH -sDEVICE=laserjet -sOutputFile=File_!FL!.pcl !GSS!
echo "C:\Program Files\gs\gs9.02\bin\gswin32c.exe" -dNOPAUSE -dBATCH -sDEVICE=laserjet -sOutputFile=File_!FL!.pcl !GSS!
set /a FL+=1
echo !FL! FL
set CTR=0
set GSS=

)

)
if NOT !CTR! == 0 (
echo LAST PROCESS
"C:\Program Files\gs\gs9.02\bin\gswin32c.exe" -dNOPAUSE -dBATCH -sDEVICE=laserjet -sOutputFile=File_!FL!.pcl !GSS!
)


endlocal
echo Complete.
echo on

10/15/2019

Resize images in Linux



Reduce image files (PNG, JPG/JPEG) in subdirectories

find -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) -exec jpegoptim --strip-all {} \;


optipng *.png

find -type f -name "*.png" -exec optipng {} \;


6/12/2019

HAProxy: monitoring or blocking DDoS

How to monitor or block a DDoS in HAProxy





Configuration to monitor access
This configuration will only TAG the external IP in the Abuse table. If you need to block something, just remove the double ##; to change the monitoring level, increase or reduce the connection-rate thresholds.

# ABUSE SECTION works with http mode dependent on src ip
##tcp-request content reject if { src_get_gpc0(Abuse) gt 5000 }
acl abuse src_http_req_rate(Abuse) ge 5000
acl flag_abuser src_inc_gpc0(Abuse) ge 100
acl scanner src_http_err_rate(Abuse) ge 5000



# Abuse protection.
# Sources that are not filtered.
tcp-request content accept if { src -f /etc/haproxy/whitelist.lst }
# Sources rejected immediately.
tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }
# Limiting the connection rate per client. No more than 5000 connections over 3 seconds.
##tcp-request content reject if { src_conn_rate(Abuse) ge 5000 }
# Reject if more than 1000 connections from client.
# This is to accommodate clients behind a NAT.
##tcp-request content reject if { src_conn_cur(Abuse) ge 1000 }
# Block based on backend.
##tcp-request content reject if { src_get_gpc0(Abuse) gt 5000 }
# Track counters based on forwarded ip.
##tcp-request content track-sc1 src table Abuse
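The ACLs above read counters from a stick table named Abuse, whose definition is not shown here. A plausible sketch of that table (a dummy backend used purely as a table; sizes and time windows are assumptions to tune for your traffic):

```
backend Abuse
    stick-table type ip size 100k expire 30m store gpc0,conn_rate(3s),conn_cur,http_req_rate(15s),http_err_rate(20s)
```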


When the BLOCK rule is enabled, you can choose to return a 403 or use silent-drop.

# Returns a 403 to the abuser and flags for tcp-reject next time
http-request deny if abuse flag_abuser
http-request deny if scanner flag_abuser




Monitoring

Show the stick table with the top IPs:
echo "show table Abuse" | socat unix-connect:/var/run/haproxy/admin.sock stdio





hatop -s /var/run/haproxy/admin.sock



It's possible to connect external tools such as:

Microsoft OMS
Datadog
Prometheus
Splunk
and other APMs

6/07/2019

SSH / PuTTY or similar freezes

Change the MTU
PuTTY / SSH / Termius freezes.
After changing the MTU of the Windows TAP adapter to 1200, it works fine.

5/02/2019

Azure Debian apt update problem?



Sometimes, when the Debian release is old and you try to install or update something, APT doesn't work on a Debian machine in an Azure VM; the apt sources need to be updated.




I got these errors:

Err http://debian-archive.trafficmanager.net/debian/ jessie-backports/main pinentry-gtk2 amd64 0.9.7-5~bpo8+1
  404  Not Found [IP: 52.233.239.54 80]
Get:6 http://debian-archive.trafficmanager.net/debian/ jessie/main gnupg-agent amd64 2.0.26-6+deb8u2 [273 kB]
Err http://debian-archive.trafficmanager.net/debian/ jessie-backports/main libksba8 amd64 1.3.5-2~bpo8+1
  404  Not Found [IP: 52.233.239.54 80]
Get:7 http://debian-archive.trafficmanager.net/debian/ jessie/main gnupg2 amd64 2.0.26-6+deb8u2 [1398 kB]
Get:8 http://debian-archive.trafficmanager.net/debian/ jessie/main libgtk2.0-bin amd64 2.24.25-3+deb8u2 [535 kB]
Fetched 7839 kB in 1s (5669 kB/s)
E: Failed to fetch http://debian-archive.trafficmanager.net/debian/pool/main/liba/libassuan/libassuan0_2.4.3-2~bpo8+1_amd64.deb  404  Not Found [IP: 52.233.239.54 80]

E: Failed to fetch http://debian-archive.trafficmanager.net/debian/pool/main/p/pinentry/pinentry-gtk2_0.9.7-5~bpo8+1_amd64.deb  404  Not Found [IP: 52.233.239.54 80]

E: Failed to fetch http://debian-archive.trafficmanager.net/debian/pool/main/libk/libksba/libksba8_1.3.5-2~bpo8+1_amd64.deb  404  Not Found [IP: 52.233.239.54 80]

E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?



Or this:

Hit http://security.debian.org jessie/updates/main Sources
Ign http://apt.newrelic.com newrelic/non-free Translation-en
Ign http://apt.newrelic.com newrelic/non-free Translation-fr
Hit http://security.debian.org jessie/updates/main amd64 Packages
Err http://debian-archive.trafficmanager.net jessie-backports/main amd64 Packages
  404  Not Found [IP: 52.233.239.54 80]
Ign http://debian-archive.trafficmanager.net jessie-backports/main Translation-en
Ign http://debian-archive.trafficmanager.net jessie-backports/main Translation-fr
Hit http://security.debian.org jessie/updates/main Translation-en
Fetched 2663 B in 2s (1324 B/s)
W: GPG error: http://debian-archive.trafficmanager.net jessie InRelease: The following signatures were invalid: KEYEXPIRED 1507383481
W: Failed to fetch http://debian-archive.trafficmanager.net/debian/dists/jessie-backports/main/source/Sources  404  Not Found [IP: 52.233.239.54 80]

4/27/2019

HAProxy error: inconsistencies between private key and certificate loaded from PEM file





Error in HAProxy with Let's Encrypt

Error message:
bind *:443' : inconsistencies between private key and certificate loaded from PEM file '/etc/letsencrypt/live/

You need to create a new file:

cat cert.pem privkey.pem > haproxy_cert.pem



Add in the HAProxy configuration:
frontend www
        bind *:80
        bind *:443  ssl crt /etc/letsencrypt/live/mydomain.com/haproxy_cert.pem

and test the configuration:
haproxy -c -V -f /etc/haproxy/haproxy.cfg
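What the `cat` step produces can be sanity-checked with dummy files; the usual layout HAProxy expects is the certificate (and chain) first, with the private key appended. The contents below are placeholders, not real PEM data:

```shell
tmp=$(mktemp -d)
# Placeholder contents standing in for the real PEM blocks
printf 'CERTIFICATE\n' > "$tmp/cert.pem"
printf 'PRIVATE KEY\n' > "$tmp/privkey.pem"
# Certificate first, key second
cat "$tmp/cert.pem" "$tmp/privkey.pem" > "$tmp/haproxy_cert.pem"
cat "$tmp/haproxy_cert.pem"
```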

11/01/2015

How to clean the boot partition on Ubuntu



When you try to install a program on Ubuntu and it says it can't, telling you to run apt-get -f install to force or continue installing the packages, and it gets stuck in an endless loop, the server's boot partition is probably full. Check the volume with df -h; if it's at 100%, run the commands below to clean it up.








uname -r


Check which kernel the server is running.







sudo dpkg --list 'linux-image*'




Then remove the unwanted versions:







sudo dpkg -r linux-image-3.2.0-23-generic

or

sudo dpkg --force-depends --purge linux-image-3.2.0-23-generic
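To pick removal candidates safely, filter the installed kernel list against the running one. A sketch with sample package names (hypothetical, standing in for the real dpkg --list output):

```shell
# Sample package names, standing in for `dpkg --list 'linux-image*'`
installed="linux-image-3.2.0-23-generic
linux-image-3.2.0-56-generic
linux-image-3.2.0-60-generic"
current="3.2.0-60-generic"   # normally: $(uname -r)
# Everything except the running kernel is a candidate for removal
echo "$installed" | grep -v "$current"
```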




10/25/2015

Munin Ubuntu 12.04

Munin



To install:







sudo apt-get install munin


Configure:







sudo vim /etc/munin/munin.conf


The munin.conf file:







# Example configuration file for Munin, generated by 'make build'

# The next three variables specifies where the location of the RRD
# databases, the HTML output, logs and the lock/pid files. They all
# must be writable by the user running munin-cron. They are all
# defaulted to the values you see here.
#
dbdir /var/lib/munin
htmldir /var/www/munin
logdir /var/log/munin
rundir /var/run/munin

#
# Where to look for the HTML templates
# tmpldir /etc/munin/templates

# (Exactly one) directory to include all files from.
#
includedir /etc/munin/munin-conf.d

# Make graphs show values per minute instead of per second
#graph_period minute

# Graphics files are normaly generated by munin-graph, no matter if
# the graphs are used or not. You can change this to
# on-demand-graphing by following the instructions in
# http://munin.projects.linpro.no/wiki/CgiHowto
#
#graph_strategy cgi

# munin-cgi-graph is invoked by the web server up to very many times at the
# same time. This is not optimal since it results in high CPU and memory
# consumption to the degree that the system can thrash. Again the default is
# 6. Most likely the optimal number for max_cgi_graph_jobs is the same as
# max_graph_jobs.
#
#munin_cgi_graph_jobs 6

# If the automatic CGI url is wrong for your system override it here:
#
#cgiurl_graph /cgi-bin/munin-cgi-graph

# munin-graph runs in parallel, the number of concurrent processes is
# 6. If you want munin-graph to not be parallel set to 0. If set too
# high it will slow down munin-graph. Some experiments are needed to
# determine how many are optimal on your system. On a multi-core
# system with good SCSI disks the number can probably be quite high.
#
#max_graph_jobs 6

# Drop somejuser@fnord.comm and anotheruser@blibb.comm an email everytime
# something changes (OK -> WARNING, CRITICAL -> OK, etc)
#contact.someuser.command mail -s "Munin notification" somejuser@fnord.comm
#contact.anotheruser.command mail -s "Munin notification" anotheruser@blibb.comm
#
# For those with Nagios, the following might come in handy. In addition,
# the services must be defined in the Nagios server as well.
#contact.nagios.command /usr/bin/send_nsca nagios.host.comm -c /etc/nsca.conf

# a simple host tree
[localhost.localdomain]
address 127.0.0.1
use_node_name yes

#
# A more complex example of a host tree
#
## First our "normal" host.
# [fii.foo.com]
# address foo
#
## Then our other host...
# [fay.foo.com]
# address fay
#
## Then we want totals...
# [foo.com;Totals] #Force it into the "foo.com"-domain...
# update no # Turn off data-fetching for this "host".
#
# # The graph "load1". We want to see the loads of both machines...
# # "fii=fii.foo.com:load.load" means "label=machine:graph.field"
# load1.graph_title Loads side by side
# load1.graph_order fii=fii.foo.com:load.load fay=fay.foo.com:load.load
#
# # The graph "load2". Now we want them stacked on top of each other.
# load2.graph_title Loads on top of each other
# load2.dummy_field.stack fii=fii.foo.com:load.load fay=fay.foo.com:load.load
# load2.dummy_field.draw AREA # We want area instead the default LINE2.
# load2.dummy_field.label dummy # This is needed. Silly, really.
#
# # The graph "load3". Now we want them summarised into one field
# load3.graph_title Loads summarised
# load3.combined_loads.sum fii.foo.com:load.load fay.foo.com:load.load
# load3.combined_loads.label Combined loads # Must be set, as this is
# # not a dummy field!
#
## ...and on a side note, I want them listen in another order (default is
## alphabetically)
#
# # Since [foo.com] would be interpreted as a host in the domain "com", we
# # specify that this is a domain by adding a semicolon.
# [foo.com;]
# node_order Totals fii.foo.com fay.foo.com
#


The apache.conf file







sudo vim /etc/munin/apache.conf


Settings for the file:







Alias /munin /var/www/munin

<Directory /var/www/munin>
Order allow,deny
# Allow from localhost 127.0.0.0/8 ::1
Allow from all
Options None

# This file can be used as a .htaccess file, or a part of your apache
# config file.
#
# For the .htaccess file option to work the munin www directory
# (/var/cache/munin/www) must have "AllowOverride all" or something
# close to that set.
#
# AuthUserFile /etc/munin/munin-htpasswd
# AuthName "Munin"
# AuthType Basic
# require valid-user

# This next part requires mod_expires to be enabled.
#
# Set the default expiration time for files to 5 minutes 10 seconds from
# their creation (modification) time. There are probably new files by
# that time.
#
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault M310
</IfModule>
</Directory>


Create the folder and give it the correct permissions:








sudo mkdir /var/www/munin

sudo chown munin:munin /var/www/munin



Restart the services:








sudo service munin-node restart

sudo service apache2 restart



Installing plugins

Check which plugins the server has active:







ls /etc/munin/plugins


Check which plugins the server has available:







ls /usr/share/munin/plugins/









Plugin | Used | Suggestions
------ | ---- | -----------
acpi | no | no [cannot read /proc/acpi/thermal_zone/*/temperature]
amavis | no | no
apache_accesses | yes | yes
apache_processes | no | no
apache_volume | yes | yes
apc_envunit_ | no | no [no units to monitor]
bonding_err_ | no | no [No /proc/net/bonding]
courier_mta_mailqueue | no | no [spooldir not found]
courier_mta_mailstats | no | no [could not find executable]
courier_mta_mailvolume | no | no [could not find executable]
cps_ | no | no
cpu | yes | yes
cpuspeed | no | no [missing /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state]
cupsys_pages | no | no [could not find logdir]
df | yes | yes
df_inode | yes | yes
diskstats | yes | yes
entropy | yes | yes
exim_mailqueue | no | no [no exiqgrep]
exim_mailstats | no | no [logdir does not exist]
fail2ban | no | no [/usr/bin/fail2ban-client not found]
forks | yes | yes
fw_conntrack | no | yes
fw_forwarded_local | no | yes
fw_packets | yes | yes
hddtemp_smartctl | no | no [smartctl not found]
http_loadtime | yes | yes
if_ | yes | yes (eth0 eth1)
if_err_ | yes | yes (eth0 eth1)
interrupts | yes | yes
iostat | yes | yes
iostat_ios | yes | yes
ip_ | no | yes
ipmi_ | no | no [no /usr/bin/ipmitool]
irqstats | yes | yes
jmx_ | no | no [java runtime not found at /usr/bin/java]
load | yes | yes
lpstat | no | no [lpstat not found]
memory | yes | yes
munin_stats | yes | yes
mysql_ | no | no
netstat | no | no
nfs4_client | no | no [no /proc/net/rpc/nfs]
nfs_client | no | no
nfsd | no | no [no /proc/net/rpc/nfsd]
nfsd4 | no | no [no /proc/net/rpc/nfsd]
nginx_request | no | no [no nginx status on http://srv205/nginx_status]
nginx_status | no | no [no nginx status on http://localhost/nginx_status]
ntp_kernel_err | no | no
ntp_kernel_pll_freq | no | no
ntp_kernel_pll_off | no | no
ntp_offset | no | no [no ntpq program]
nvidia_ | no | no [no nvclock executable at /usr/bin/nvclock, please configure]
open_files | yes | yes
open_inodes | yes | yes
postfix_mailqueue | no | no
postfix_mailvolume | no | no [postfix not found]
postgres_bgwriter | no | no
postgres_cache_ | no | no
postgres_checkpoints | no | no
postgres_connections_ | no | no
postgres_connections_db | no | no
postgres_locks_ | no | no
postgres_querylength_ | no | no
postgres_scans_ | no | no
postgres_size_ | no | no
postgres_transactions_ | no | no
postgres_tuples_ | no | no
postgres_users | no | no
postgres_xlog | no | no
proc_pri | yes | yes
processes | yes | yes
ps_ | no | no
qmailqstat | no | no
selinux_avcstat | no | no [missing /selinux/avc/cache_stats file]
sendmail_mailqueue | no | no
sendmail_mailstats | no | no [no mailstats command]
sendmail_mailtraffic | no | no [no mailstats command]
slapd_ | no | no [Net::LDAP not found]
slapd_bdb_cache_ | no | no [Can't execute db_stat file '/usr/bin/db4.6_stat']
slony_lag_ | no | no [DBD::Pg not found, and cannot do psql yet]
smart_ | no | no [smartmontools not found]
snort_alerts | no | no [/var/snort/snort.stats not readable]
snort_bytes_pkt | no | no [/var/snort/snort.stats not readable]
snort_drop_rate | no | no [/var/snort/snort.stats not readable]
snort_pattern_match | no | no [/var/snort/snort.stats not readable]
snort_pkts | no | no [/var/snort/snort.stats not readable]
snort_traffic | no | no [/var/snort/snort.stats not readable]
squeezebox_ | no | no [no connection on localhost port 9090]
squid_cache | yes | no [could not connect: Connection refused]
squid_objectsize | yes | no [could not connect: Connection refused]
squid_requests | yes | no [could not connect: Connection refused]
squid_traffic | yes | no [could not connect: Connection refused]
swap | yes | yes
threads | yes | yes
tomcat_ | no | no
uptime | yes | yes
users | no | no
varnish_ | no | no [which varnishstat returns blank]
vmstat | yes | yes
vserver_cpu_ | no | no [/proc/virtual/info not found]
vserver_loadavg | no | no [/proc/virtual/info not found]
vserver_resources | no | no [/proc/virtual/info not found]
yum | no | no
zimbra_ | no | no [No Text::CSV_XS]


To install a plugin, just create a symlink to it in the plugin folder:







sudo ln -s /usr/share/munin/plugins/<plugin name> /etc/munin/plugins/<plugin name>


Installing the plugins:







sudo ln -s /usr/share/munin/plugins/mysql_ /etc/munin/plugins/mysql_
sudo ln -s /usr/share/munin/plugins/squid_cache /etc/munin/plugins/squid_cache
sudo ln -s /usr/share/munin/plugins/squid_icp /etc/munin/plugins/squid_icp
sudo ln -s /usr/share/munin/plugins/traffic /etc/munin/plugins/traffic
sudo ln -s /usr/share/munin/plugins/openvpn /etc/munin/plugins/openvpn
sudo ln -s /usr/share/munin/plugins/squid_objectsize /etc/munin/plugins/squid_objectsize
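What these ln -s commands do can be illustrated in a throwaway directory (the paths below are a sandbox, not the real Munin directories):

```shell
demo=$(mktemp -d)
mkdir "$demo/available" "$demo/enabled"
touch "$demo/available/mysql_"
# Enabling a plugin is nothing more than a symlink into the active directory
ln -sf "$demo/available/mysql_" "$demo/enabled/mysql_"
# The link resolves back to the plugin source
readlink "$demo/enabled/mysql_"
```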


After installation, restart the services.







sudo /etc/init.d/munin-node restart

sudo /etc/init.d/apache2 restart