n top of ZFS or LVM. Ceph prefers raw devices. I would like to move to Ceph, but only with a good, consistent step-by-step manual
that yields a working solution.
On 29.04.2023 01:26, Linux-Fan wrote:
Mimiko writes:
Hello.
I would like to use shared storage with Docker to store volumes so
On 29.04.2023 02:52, cor...@free.fr wrote:
Hello list,
When I run this command:
$ sudo echo 123 > /root/123.txt
A better approach is:
echo 123 | sudo tee /root/123.txt
or
sudo tee /root/123.txt <
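The reason the first command fails is that the redirection `> /root/123.txt` is opened by the unprivileged shell before sudo even runs, while tee opens the target file itself while running as root. A minimal sketch of the same data flow, demonstrated on a temp file without sudo (not the original commands):

```shell
#!/bin/sh
# tee opens its output file itself, so whoever runs tee performs the write.
# Under `echo 123 | sudo tee /root/123.txt` that would be root; here we
# demonstrate the identical pipeline shape on a plain temp file.
out=$(mktemp)
echo 123 | tee "$out" > /dev/null
cat "$out"    # prints: 123
rm -f "$out"
```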
Hello.
I would like to use shared storage with Docker to store volumes so that swarm
containers can use the volumes from different Docker nodes.
So far I have tried GlusterFS and LINSTOR+DRBD.
GlusterFS has a bug: replication is slow and volume status reporting is slow or times out. Also directory
On 26.11.2019 19:04, Sven Hartge wrote:
Your first method is fine.
Grüße,
Sven.
Thank you
On 26.11.2019 16:10, Dan Ritter wrote:
For a system which is expected to work immediately at boot, you
want auto.
For a system which doesn't have anything particularly weird
going on, you want to use the integrations.
Thank you.
It is a server whose network adapters are onboard or PCI, and
Hello.
This is a snippet of my interfaces config file:
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
auto
On 01.04.2019 09:40, Matthew Crews wrote:
On 3/31/19 11:20 PM, Mimiko wrote:
On 01.04.2019 05:51, Matthew Crews wrote:
Step-by-step instructions are found here:
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS
Hello.
I read this guide, but it implies having a separate MD
On 01.04.2019 05:51, Matthew Crews wrote:
Step-by-step instructions are found here:
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS
Hello.
I read this guide, but it implies having a separate MD raid on disk. It is
not fully /boot on ZFS.
Hello.
I know that ZFS is not as well supported as MD raid. All I've found on the internet
about installing Debian on ZFS uses a live disk.
My goal is to boot from the network and install Debian with root on a ZFS raid mirror. I can do this using MD raid. I am looking for suggestions on how to
enable ZFS du
On 25.03.2019 21:23, Sven Hartge wrote:
Please update your system, Debian 6.0 Squeeze is severely out of date
and should NOT be used for any production systems and even less when
connected to the public Internet.
Yes, I know. But this is not the answer I'm looking for.
On 24.03.2019 22:46, Sven Hartge wrote:
I came across a problem when booting. In the server are installed 4
disks connected to a raid controller and 2 disks connected to the
motherboard SATA interfaces. During boot, the disks connected to the raid
controller are detected before the md raid assembly process, whi
Hello.
I came across a problem when booting. In the server are installed 4 disks connected to a raid controller and 2 disks connected to the motherboard SATA
interfaces. During boot, the disks connected to the raid controller are detected before the md raid assembly process, while the 2 disks connected to the
mot
Hello.
I want to install a minimal L2TP configuration to connect a remote MikroTik to an L2TP server on Debian. No IPsec, no password, just the minimum to establish a
connection for a test.
I use apt-get install xl2tpd ppp
The configuration is:
/etc/xl2tpd/xl2tpd.conf:
[global]
port = 1701
access control = no
[l
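For comparison, a minimal /etc/xl2tpd/xl2tpd.conf sketch for an unauthenticated test LNS could look like the following; the `[lns default]` section name comes from xl2tpd's documentation, but the IP range, local IP and ppp options path are placeholders to adapt:

```ini
[global]
port = 1701
access control = no

[lns default]
ip range = 10.99.0.2-10.99.0.254        ; addresses handed to clients (placeholder)
local ip = 10.99.0.1                    ; server end of the tunnel (placeholder)
require authentication = no             ; no CHAP/PAP, test setup only
pppoptfile = /etc/ppp/options.xl2tpd    ; ppp options file (placeholder path)
```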
On 11.09.2018 15:48, Matthew Crews wrote:
My recommendation is to use a separate /boot partition and make it EXT2.
I'm planning on using ZFS as software raid, on which I'll create volumes. So it
might be ext2, ext3, or ext4.
I've had root (/) and /boot on the same ext4 partition, with the other directories except /var, /home and /srv, so the system will boot even if those directories fill
up. As most data will be on the same disks (separated into MD mirrors), both disks failing at the same time will break the system anyway. Some solution
On 11.09.2018 12:04, to...@tuxteam.de wrote:
Hello.
Currently I use ZFS for making a pool of disks, but the system itself is
installed on 2 SSDs mirrored with MD.
How does ZFS now handle booting from a ZFS mirror? Can I start using ZFS as the root
filesystem on the latest Debian? Is it stable o
Hello.
Currently I use ZFS for making a pool of disks, but the system itself is
installed on 2 SSDs mirrored with MD.
How does ZFS now handle booting from a ZFS mirror? Can I start using ZFS as the root
filesystem on the latest Debian? Is it stable on updates?
Thanks for suggestions.
The last update to OpenDPI was 6 years ago. Could it be used now without problems?
On 19.08.2018 20:50, Reco wrote:
If software archeology is your thing, there's OpenDPI - [2] (sorry for
the GitHub link again).
Hasn't zorp gone commercial-only?
On 19.08.2018 20:51, Dan Ritter wrote:
zorp is a proxying firewall with many look-inside features, but
is not arbitrarily deep.
Thank you all for suggestions.
Yes, I didn't state my goal. The first, of course, is to limit access to web sites and collect statistics. Yes, this could be done with squid and ssl_bump; I
hope this does not change the certificate, as internet banking would stop working. The problem for a quick implementation is
Hello.
Maybe this has been answered. Is there a Deep Packet Inspection tool usable in Debian 9
for a firewall setup? Open source, and preferably in the repository.
Thank you.
Hello.
I use in preseed.cfg the following:
tasksel tasksel/first multiselect standard
tasksel tasksel/first seen false
d-i clock-setup/utc boolean false
d-i clock-setup/utc seen false
I set seen to false in order to view the value that was selected. Neither tasksel nor clock-setup/utc sets
Hello.
I want to preseed a Debian Stretch installation. I pass some parameters to the
kernel command line and others from preseed.cfg, using a netboot setup.
This is how I start install:
initrd http://${next-server}/linux/debian/9.0.0/netboot-amd64/debian-installer/amd64/initrd.gz && chain
http:/
INPUT default policy is DROP and no
ICMP accepted. Strange that sometimes I did get an answer to ping.
Accepting ICMP requests gives me stable replies.
Thank you all for answering.
--
Mimiko desu.
oing apt-get install linux-image-amd64 says it is already the latest version.
Anyway, could this slight (minor) version difference be the problem?
--
Mimiko desu.
lot of output regarding arp and icmp on LAN.
Any sysctl settings you might have changed on the host?
net.ipv4.ip_forward=1
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
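To make these settings survive a reboot, they can go in a sysctl drop-in file, e.g. (the filename is just a convention, not from the original post):

```
# /etc/sysctl.d/90-arp.conf (example filename)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1    # answer ARP only for addresses on the receiving interface
net.ipv4.conf.all.arp_announce = 2  # prefer the best local source address in ARP announcements
```

and then be applied with `sysctl --system`.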
--
Mimiko desu.
,
but reality is this.
--
Mimiko desu.
round on interfaces of the server hosting the virtuals.
The virtuals are various Linux flavours and one Windows. This problem may occur on
any of these virtuals.
I've observed that for this particular virtual, which has the problem, the host's ARP table assigned its own MAC to the virtual's IP, not the
On 22.02.2017 16:04, Darac Marjal wrote:
StackExchange
(http://serverfault.com/questions/407842/incredibly-slow-kvm-disk-performance-qcow2-disk-files-virtio)
suggests that QCOW2 may be a
factor, and to use:
$ qemu-img create -f qcow2 -o
preallocation=metadata,compat=1.1,lazy_refcounts=on imag
On 22.02.2017 10:40, Joel Wirāmu Pauling wrote:
Why are you using vhd instead of qcow2? I would wager it's a combination of the
vhd format having some bug. Try with qcow2 first.
I tried and the write speed is the same.
Hello.
I already wrote to the qemu mailing list and didn't find an answer.
I've set up qemu/kvm on Debian Wheezy to host some Debian Jessie guests.
I create a disk like this:
virsh vol-create-as --pool default --name root.vhd --capacity 50G --format vpc
Then create a virtual with this:
virt-install --
On 16.04.2016 17:09, Mimiko wrote:
Hello.
Recently I started to use qemu-kvm for virtualisation.
I've setup a bridged network and use it in virtual machine:
default
test
hvm
When the bond used in the bridge
On 17.05.2016 15:16, Elimar Riesebieter wrote:
Ask your search engine: "init.d/networking restart is deprecated"
This was the only tool that fully restarted all networking, even the
interface through which the connection is made.
Is `ifdown eth && ifup eth` the only option now?
--
Mimiko desu.
On 22.04.2016 11:59, Gene Heskett wrote:
> What you Mimiko, should be doing is using your own ISP's mail server,
> which on this mailing list I am, by setting up your own email agent.
> There are quite a few available for linux.
I don't want to use my ISP's mail account. It's lim
On 22.04.2016 07:49, Michael Milliman wrote:
On 04/21/2016 09:18 PM, Gary Roach wrote:
Actually this time it worked for the first time in a month. Hopefully
the problem has been corrected. Thanks for your reply. Normally we do
get a return copy of our posts
I have participated in several threa
Hello.
Recently I started to use qemu-kvm for virtualisation.
I've setup a bridged network and use it in virtual machine:
default
test
hvm
function='0x0'/>
When the bond used in the bridge setup is set to 802.3ad,
On 15.04.2016 13:37, Sven Hartge wrote:
>All interfaces are connected to 3 different switches which are on same
>LAN (switches are interconnected).
This is bad.
Well, I need eth0 for administration, as I may disconnect, reconnect or
restart the bond and would lose the connection.
> This is the way L
On 15.04.2016 11:33, Mimiko wrote:
Hello.
A server has 3 interfaces: eth0, eth1, eth2. I've setup a bond12 with
mode adaptive-alb with eth1 and eth2. Now interfaces have:
auto eth0
iface eth0 inet static
address x.x.x.1
netmask 255.255.255.0
auto eth1
iface eth1 inet manual
auto eth2
Hello.
A server has 3 interfaces: eth0, eth1, eth2. I've setup a bond12 with
mode adaptive-alb with eth1 and eth2. Now interfaces have:
auto eth0
iface eth0 inet static
address x.x.x.1
netmask 255.255.255.0
auto eth1
iface eth1 inet manual
auto eth2
iface eth2 inet manual
auto bond12
iface
On 21.03.2016 09:31, Igor Cicimov wrote:
> The problem is when I do `service networking restart` I get this message:
> RTNETLINK answers: invalid argument
> Failed to bring up br0
So, to overcome the MTU problem, this is my interfaces:
auto eth0
iface eth0 inet manual
bo
On 21.03.2016 09:31, Igor Cicimov wrote:
> The problem is when I do `service networking restart` I get this message:
> RTNETLINK answers: invalid argument
> Failed to bring up br0
I found the root cause of this error:
ip link set dev br0 mtu 9000 up
RTNETLINK answers: Inva
On 21.03.2016 00:30, Tom H wrote:
[Off-list]
Try rewriting "/etc/network/interfaces" as (I didn't check the actual
bond and bridge options):
iface eth0 inet manual
bond-master bond0
iface eth1 inet manual
bond-master bond0
iface bond0 inet manual
bond-mode 802.3ad
bond-miimon 100
bond-downde
On 21.03.2016 09:31, Igor Cicimov wrote:
Hold on what is vlan doing here? Remove the vlan line and try again.
Igor, I tried a lot of options to change and comment out, including this
one, before posting here. So this option does not cause the problem. It
is there for enabling VLAN tagging in the future
On 21.03.2016 09:05, Igor Cicimov wrote:
What script are you talking about? The interfaces are set to manual in
the config thus need to be manually started.
/etc/init.d/networking
This script does all the bringing up of the manual interfaces.
The bridge config is an extension of the bond configurati
On 20.03.2016 23:57, Igor Cicimov wrote:
Did you bring eth0 and eth1 up?
Why should I do that when the script is supposed to do all of this?
Hello.
Recently I wanted to extend my existing bond to also be a bridge, for use with
qemu-kvm. As seen in examples on the net, this is my `interfaces` file content:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
# bond-master bond0
auto eth1
iface eth1 inet manual
# bond-maste
Hello.
I use zfs to create zvols. This is the zpool list:
NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zfspool  14.5T  10.7T  3.75T  -         13%   74%  1.00x  ONLINE  -
I have 3.75T free to create more zvols or extend existing vols.
I have several zvols crea
On 21.12.2015 09:58, David Christensen wrote:
On 12/20/2015 08:36 AM, Mimiko wrote:
The HDD's are connected thru SuperMicro SAS RAID AOC-USASLP-L8i
(PCI-E).
http://www.supermicro.com.tr/AOC-USASLP-L8i.cfm.htm
So, one RAID card, eight SAS channels, and one 2 TB and one 1 TB driv
I've tested, using iperf, from the file server to the Windows server and
got 400 Mbit/s. To the other Linux server I got 600 Mbit/s. I've also tested
iperf between the Windows server and another supermicro of the same type with
hyper-v and got around 650 Mbit/s.
I've tested samba speed from Windows to the other Linux ser
On 20.12.2015 02:36, Frank Pikelner wrote:
> There appear to be driver issues discussed in other threads with
> respect to Intel driver and slow throughput due to interrupts and CPU
> offloading. May want to review driver parameters and look at trying a
> few changes.
Yes, I've read about that. I
Hello.
After reviewing the test results, I've modified smb.conf: I added
max protocol = SMB2 and removed SO_RCVBUF=8192 SO_SNDBUF=8192 from
socket options. The read speed from this server increased to 40 MB/s, the
write speed to this server increased to 30 MB/s.
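In smb.conf terms the change amounts to something like this sketch (only the touched settings are shown; keeping TCP_NODELAY as the remaining socket option is an assumption, not from the original post):

```ini
[global]
    # force at least SMB2 instead of the old SMB1/NT1 dialect
    max protocol = SMB2
    # socket options now WITHOUT SO_RCVBUF=8192 SO_SNDBUF=8192, letting the
    # kernel autotune its buffer sizes
    socket options = TCP_NODELAY
```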
On 19.12.2015 00:43, Dan
On 18.12.2015 16:32, Michael Beck wrote:
> Any lost packets?
ifconfig
bond0 Link encap:Ethernet HWaddr
UP BROADCAST RUNNING MASTER MULTICAST MTU:9000 Metric:1
RX packets:654308767 errors:0 dropped:5238 overruns:0 frame:0
TX packets:761897714 errors:0 dropped:0
Hello.
I've bonded the two onboard Intel 82576 Gigabit NICs on a supermicro
server for load balancing (round-robin). It is working, but the transfer
rate is about 10-20 MB/s, while on the same type of server the same
configuration under Windows gives me around 100 MB/s.
cat /proc/net/bonding/bond0
Ethernet
Hello.
I've set up autofs on Debian 7 to automount ISOs when they are accessed via
samba. At first it was working fine. I set --timeout=5. But as the
number of ISOs grew, autofs started just creating the folders for the
corresponding ISOs with no content in them. Some folders have
conte
. Usually that is
a package for Windows. If you only need a web server, then sudo apt-get
install apache2.
After that, all you need to do is open TCP 80 and adjust the server's config
file. Open it; for every option there is a description of what it
does.
Just try it.
--
Mimiko desu.
[3630272.811880] /build/linux-4wkEzn/linux-3.2.68/fs/cifs/cifsfs.c:
Devname: //10.10.0.64/img flags: 0
[3630272.811906] /build/linux-4wkEzn/linux-3.2.68/fs/cifs/connect.c:
Domain name set
[3630272.811915] /build/linux-4wkEzn/linux-3.2.68/fs/cifs/connect.c:
Username: mimiko
[3630272.811922] /buil
Hello.
How do I find on which interface this packet appears?
--
Mimiko desu.
--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/55572382.6060...@gmail.com
On 03.04.2015 23:21, David Wright wrote:
Those scripts have logging lines. Have you read their output?
Yes, there is logging, but there are no suspicious lines in those log
files. The only error is given when it tries to mount the devices, so there are:
/backup/network - error mounting
/backup/o
1T zfspool/op
mount /dev/zvol/zfspool/backup /backup
mount /dev/zvol/zfspool/network /backup/network
mount /dev/zvol/zfspool/op /backup/op
The last three commands can be put in /etc/fstab.
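Put in /etc/fstab, those three mounts might look like this sketch (the filesystem type ext4 is an assumption; adjust to whatever the zvols were formatted with, and nofail is an optional safety so boot continues if a zvol is absent):

```
/dev/zvol/zfspool/backup   /backup          ext4  defaults,nofail  0  2
/dev/zvol/zfspool/network  /backup/network  ext4  defaults,nofail  0  2
/dev/zvol/zfspool/op       /backup/op       ext4  defaults,nofail  0  2
```

fstab entries are mounted in order, so /backup must precede its sub-mounts, as above.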
--
Mimiko desu.
On 01.04.2015 17:55, Reco wrote:
> No, the problem is related to the Debian indeed. As ZFS is used as an
> LVM here, so you might as well replace those fancy/dev/zvol/* with
> something conventional, and the problem will still remain.
>
> Consider the following /etc/fstab.
>
> /dev/sda1 /backup
Hello.
I've set up a file server on Debian Wheezy x86_64. I used 2 SSDs for the
system folders, partitioned and used in software raid with
mdadm. It's working OK. There is also a bunch of disks which are
combined into one big raid-z2 pool with zfs:
zpool create -f -m none -o ashift=12 zfsp
Well.
I did some tests today with tcpdump. It's really strange. First I
uninstalled vlan and configured everything again. Using tcpdump I saw it was
sending packets, but at first it didn't want to work.
I added 8021q to /etc/modules, rebooted the server and, as I wrote: ping
works, ftp works, but not http
Hello.
Recently I tried to combine multiple ISPs, via a layer 2 switch, into one
port connected to a Debian Wheezy router.
I set up in interfaces:
auto eth0
iface eth0 inet static
    address local_lan_ip
    netmask mask

auto eth1.2
iface eth1.2 inet static
    address isp1
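A fuller sketch of that kind of VLAN split, with placeholder addresses and VLAN IDs (requires the vlan package or the 8021q module):

```
auto eth0
iface eth0 inet static
    address 192.168.1.1        # local LAN IP (placeholder)
    netmask 255.255.255.0

auto eth1.2                    # VLAN 2 on eth1, towards ISP 1
iface eth1.2 inet static
    address 203.0.113.2        # ISP 1 address (placeholder)
    netmask 255.255.255.252

auto eth1.3                    # VLAN 3 on eth1, towards ISP 2
iface eth1.3 inet static
    address 198.51.100.2       # ISP 2 address (placeholder)
    netmask 255.255.255.252
```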
Another problem is that //domain.com resolves to some IPs related to the
domain controllers, while //domain.com/op resolves to other IPs
related to the /op namespace. Why does cifs resolve to the domain controllers?
Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
Thank you.
--
Mimiko desu
s), chassis fans, and fan inside the PSU. Given
> the age of this machine, I'd simply replace every fan in it for good
> measure.
Before putting this server in the server room, the technicians fully
cleaned the inside of the server of dust, and checked the voltages and fans. But
if after
ITSU SIEMENS
PRIMERGY Econel200/D2020, BIOS 08.10.Rev.1100.2020 06/01/2006
What could be the cause of this?
--
Mimiko desu.