5[00:02:24] <lope> so this random ass website says it calculates real usable capacity for hard drives. 4TB comes down to 3.638 TB so I figure if I partition all drives to 3.6TB I won't have any issues making mirrors in future? replaced-url
52[00:24:54] <petn-randall> lope: You can remove your MTA from the server, though some tools will fail, or mail will land in limbo when they use the sendmail maildrop and there's nothing to pick it up.
111[01:04:12] <dpkg> #debian-next is the channel for testing/unstable support on the OFTC network (irc.oftc.net), *not* on freenode. If you get "Cannot join #debian-next (Channel is invite only)." it means you did not read it's on irc.oftc.net.
112[01:04:37] <annadane> you can always boot from the older kernel until the problem is fixed
203[02:13:31] <garotosopa> Hi. I've just installed Debian 10 with Gnome and when I go to Settings, Online Accounts and click on Google, it crashes. If I launch gnome-control-center on the command line I get "Gdk-Message: 21:10:15.844: Error 71 (Protocol error) dispatching to Wayland display." Is it a bug worth reporting?
215[02:25:21] <Waxhead> Is it possible to "tune" the root login (getty) so that it always has "realtime" priority? Quite useful to avoid slowness in case some program eats up all ram and you need to quickly do a ctrl+alt+f6 or something to nuke it.
216[02:28:33] *** Quits: Ekchuan (~RandyMars@replaced-ip) (Quit: My MacBook has gone to sleep. ZZZzzz…)
217[02:28:54] *** Quits: garotosopa (b1c07579@replaced-ip) (Remote host closed the connection)
223[02:35:02] <jmcnaught> Waxhead: you could maybe create a drop-in with 'systemctl edit getty@tty1.service' (for tty1) and set CPUSchedulingPriority=99 (see 'man systemd.exec')
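A minimal sketch of the drop-in jmcnaught describes (tty1 is only an example; CPUSchedulingPolicy=fifo is an assumption, since the priority only applies under a realtime policy per man systemd.exec):

    # systemctl edit getty@tty1.service   -- then add in the editor that opens:
    [Service]
    CPUSchedulingPolicy=fifo
    CPUSchedulingPriority=99
    # apply with: systemctl daemon-reload && systemctl restart getty@tty1.service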
245[02:42:32] <m4t> is there a way to fix this dependency loop with davfs2+systemd+buster? i.e. "systemd[1]: local-fs.target: Job storage-box.mount/stop deleted to break ordering cycle starting with local-fs.target/stop"
251[02:43:53] <m4t> it's not breaking things too badly but on reboot davfs2 complains about orphaned cache items, i think because it's being SIGTERM'd rather than gracefully unmounted
263[02:59:41] <m4t> hmm it seems to incorrectly have this added (from systemctl show -p Wants -p After storage-box.mount): Before=remote-fs.target umount.target local-fs.target
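One common way to break such an ordering cycle (an assumption about this setup, not a confirmed fix) is to mark the davfs mount as a network filesystem in /etc/fstab, so systemd orders it against remote-fs.target instead of local-fs.target and unmounts it before the network goes down; the mount point and URL below are hypothetical:

    # /etc/fstab
    https://example.com/dav  /storage-box  davfs  _netdev,noauto,x-systemd.automount  0  0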
264[03:08:02] *** Quits: Jerrynicki_ (~niklas@replaced-ip) (Remote host closed the connection)
319[03:41:13] <Beggar> I manually removed all alsa settings, then the pulseaudio directories too from /usr and /etc. Then I used confmiss to reconstruct them. I did it with libasound2, libasound2-data, libasound2-plugins, alsa-utils and pulseaudio
320[03:42:18] <Beggar> after that I could access the pulseaudio options and unmuted the microphone; nothing happened, then I muted and unmuted it again and the microphone came back to work. I uninstalled pulseaudio again and the microphone is still working.
322[03:43:22] <Beggar> I am kind of happy that the problem was solved, but at the same time I am kind of puzzled here. How can pulseaudio mute the microphone without it appearing as muted in alsamixer?
324[03:44:23] <Beggar> Also... what happened when I re-unmuted it through pulseaudio? As far as I know pulseaudio is just a layer OVER ALSA, so it depends on alsa to operate... how can it do something alsa cannot?
325[03:46:22] *** nyov is now known as Guest21796
326[03:46:28] <akk> Actually, it looks like there's no virtualbox in buster either?
350[03:51:36] <Beggar> I found qemu to be versatile to my needs at least
351[03:51:39] <akk> Can it use virtualbox VMs like .ova ? I'd like to run a windows vm for the rare occasion when I need something like an adobe ebook.
352[03:51:51] <Beggar> -enable-kvm solves most of the issues usually.
360[03:54:20] <akk> I'd just as soon not spend a lot of time learning qemu syntax either (one benefit of virtualbox, it was pretty easy to get started) so I appreciate the tip, annadane
361[03:54:53] <mrquincy> Just logged in so that I don't have to watch all the packages stream by in my stretch->buster upgrade. Too scary. Can't look.
372[03:58:21] <jmcnaught> lizzie: according to 'apt-file search /usr/lib/dir' that path does not exist in any Debian packages. What are you trying to do?
373[03:58:30] <annadane> oh, you can just do that. okay
403[04:17:33] <lizzie> ok I found it when I searched for just the filename using that same command
404[04:17:37] <lizzie> I'm trying to find out why steam complains of missing swrast_dri.so on my netbook
405[04:18:01] <lizzie> apparently this package is already installed so that wasn't it, probably something with the library load path, anyways thanks for that useful command!
406[04:18:41] <jmcnaught> lizzie: oops typo in my message. Are you using the steam package in Debian contrib?
407[04:18:45] *** Quits: electro33 (uid613@replaced-ip) (Quit: Connection closed for inactivity)
421[04:25:28] <lizzie> jmcnaught: ah ok, I can see now that the page you linked mentions the same workaround, and it doesn't seem like it matters which steam package I use since steam downloading the libraries itself is the issue
422[04:25:53] <lizzie> it's taking a bit to do the steam update (very slow netbook), but I'll let you know if it launches normally
423[04:26:12] <lizzie> and I'll just remember that when in doubt I should check that wiki page again and delete .so files
437[04:37:19] <akk> annadane: Do you know of a good virt-manager tutorial? I'm striking out figuring out how I'm supposed to start the libvirt-daemon to make virt-manager happy.
439[04:38:00] <akk> I'm finding lots of redhat-oriented tutorials and a few 3-year-old ubuntu tutorials referencing packages that no longer exist in debian.
440[04:38:21] <Tom-_> no idea
441[04:38:24] <Tom-_> !libvirt
442[04:38:24] <dpkg> Libvirt is a library for interfacing with different virtualization systems. replaced-url
443[04:38:38] <Tom-_> i haven't learned or used libvirt yet
444[04:38:41] <annadane> akk, i'm a little confused on that myself, i've had times where it searches forever as started by root and other times not, sometimes adding myself to the libvirt group and logging in/out does it, restarting the libvirt services etc
445[04:38:45] <Tom-_> but maybe someone else will know
450[04:39:59] <akk> The only thing that lists is sys-devices-virtual-misc-rfkill.device
451[04:40:17] <annadane> (to add yourself to groups you have to log out/in for it to take effect but it's not clear you have to do that to make it work, vs just running as root)
452[04:40:23] <annadane> tl;dr: i'm very much not an expert, sorry
453[04:40:27] <jmcnaught> akk: installing virt-manager should have installed the libvirt-daemon-system package (unless you have recommends disabled?)
454[04:40:48] <akk> What should it list? Maybe there are yet more packages I need to install, beyond virt-manager qemu-utils virtinst qemu-kvm python-libvirt qemu virt-viewer bridge-utils libvirt-daemon
455[04:41:23] <akk> jmcnaught: I do have recommends disabled. I guess it's actually a requirement, not a recommend?
481[04:48:16] <akk> virt-manager is looking like it might be harder than just finding the qemu commands, so I guess I'll try searching for qemu tuts before giving up.
482[04:48:40] <akk> Hopefully this polkit thing is just a virt-manager requirement and not a qemu/kvm requirement.
489[04:51:37] <lizzie> so I have an arguably silly problem but basically I have a laptop with a dead battery that always wakes up thinking it's in 2033. Can I make it so my debian system will always run ntpdate on first network connection after a boot?
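One sketch of what lizzie asks for, assuming ifupdown and the ntpdate package (which, for what it's worth, already ships a similar /etc/network/if-up.d/ntpdate hook, so installing it may be enough on its own); the script name and stamp file here are hypothetical:

    # /etc/network/if-up.d/set-clock  (make it executable with chmod +x)
    #!/bin/sh
    # run once per boot, on the first non-loopback interface that comes up
    [ "$IFACE" = lo ] && exit 0
    [ -e /run/clock-set ] && exit 0
    ntpdate-debian && touch /run/clock-set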
490[04:52:34] <akk> Oh, stupid question: when running any of these virtualizers like qemu, do I need to run as root?
495[04:55:14] <lizzie> akk: libvirt supports connecting to manage vms as a non-root user (obviously you can't mess with vms that were created under the root domain)
496[04:55:37] <lizzie> add your user to the groups libvirt and libvirt-qemu
497[04:55:50] <jmcnaught> that requires policykit to work
498[04:56:02] <annadane> i wonder if bhyve is any less mystifying
499[04:56:05] <lizzie> if you want to *directly* run qemu by hand with hardware vm support then you always need root
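A minimal sketch of the group setup lizzie describes (group names as created by Debian's libvirt-daemon-system package; log out and back in for the membership to take effect):

    sudo usermod -aG libvirt,libvirt-qemu "$USER"
    # after re-login, manage the system instance without root:
    virsh -c qemu:///system list --all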
508[04:59:12] <lizzie> I've never needed to *think* about it, for what it's worth
509[04:59:26] <akk> Likewise.
510[05:00:03] *** Quits: de-facto (~de-facto@replaced-ip) (Remote host closed the connection)
511[05:00:28] <jmcnaught> a lot of stuff in recommends is to make things just work, or more convenient
512[05:00:49] <akk> qemu apparently needs the kvm_intel kernel module, which won't load because "Operation not supported", even though /proc/cpuinfo says I have the right flags. Sigh.
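For what it's worth, "Operation not supported" from kvm_intel with the vmx flag present usually means VT-x is disabled in the firmware; a quick way to confirm (standard tools, nothing package-specific assumed):

    grep -c -E 'vmx|svm' /proc/cpuinfo   # >0 means the CPU advertises virtualization
    sudo modprobe kvm_intel
    dmesg | tail -n 5                    # typically prints "kvm: disabled by bios" in this case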
523[05:03:50] <akk> (I had to go to wikipedia to find out what numeric version was current. Is there a way to get the numeric version of the currently running debian?)
524[05:04:32] <akk> /etc/debian_version says bullseye/sid, though on stable systems sometimes it has a number.
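A few ways to read the version of the running system (on testing/unstable there is no point release, which is why /etc/debian_version shows a codename there):

    cat /etc/debian_version                      # "10.0" on stable, "bullseye/sid" on testing
    lsb_release -ds                              # needs the lsb-release package
    . /etc/os-release && echo "$PRETTY_NAME"     # VERSION_ID is also set on stable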
570[05:30:00] <lizzie> it's possible that it was broken before I finished upgrading to debian 10 and it's just not broken now. I'll complain again if it re-emerges
693[07:16:59] <annadane> "Starting with glibc 2.26, Linux kernel 3.2 or later is required. To avoid completely breaking the system, the preinst for libc6 performs a check. If this fails, it will abort the package installation, which will leave the upgrade unfinished. If the system is running a kernel older than 3.2, please update it before starting the distribution upgrade."
694[07:17:04] <annadane> in the release notes ^
695[07:17:29] <shantanoo> :(
696[07:17:33] <shantanoo> am i screwed?
697[07:18:24] <annadane> i'm not sure. it's possible you can update the kernel in isolation but i'd like other answers
724[07:30:37] <themill> I recall reading that openvz has some bits that can be flipped to have it pretend to be a different kernel to its containers but I don't know if that actually lets glibc work on the newer kernel
743[07:44:32] <darxmurf> I have an auth issue with a samba server for linux users. This samba srv is attached to an MS active directory. Clients can open shares without problem from windows and OSX using DOMAIN\username and password. From mount.cifs I get access denied.
744[07:45:10] <darxmurf> The only way I found with mount.cifs is to open a kerberos ticket, then use mount.cifs sec=krb5.
745[07:45:50] <darxmurf> Why is this server not accepting my login/pass when I access it with username=login,domain=DOMAIN
760[07:55:35] <spacejam> I am currently running Debian Stretch and am wanting to upgrade. BUT, I can't seem to be able to verify the integrity of the .iso file and the .sign file. I was able to complete the integrity check on the sha512sum file but am at a loss on confirming the public key needed to verify the .iso file. Could someone send me a link to where I can find clear instructions?
779[08:03:58] <annadane> (debian isn't the only distro to not include "here's how you validate the image" on the same page as the download links, a practice i find irritating)
780[08:04:32] *** Quits: dtux (~dmtucker@replaced-ip) (Remote host closed the connection)
816[08:27:25] <spacejam> Same here! I am going to dual boot with three OSes. First one being Debian Buster, second Ubuntu Studio, and third Tails! I've got a ton ahead of me. lol
828[08:36:03] <spacejam> Does anybody have any foreknowledge they would be willing to share on the subject of validating other debian-based systems (Ubuntu), or is it in the same ballpark? And even more so, could you tell me if there is a totally different process for validating tails.
829[08:36:34] <spacejam> if there is a totally different process.
860[08:47:46] <spacejam> different OSes for different purposes
861[08:47:58] <ratrace> spacejam: "validate .iso integrity" doesn't really relate to "validating other debian based systems" and yeah, it's always the same thing, sha256sum on the iso file, check against the hashes you got over https at least
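A sketch of the verification steps spacejam is after (file names follow the Debian CD image layout; the CD signing key ID changes over time, so fetch whichever key gpg reports as missing -- <KEYID> below is a placeholder):

    sha512sum -c --ignore-missing SHA512SUMS                  # checks the downloaded .iso
    gpg --verify SHA512SUMS.sign SHA512SUMS                   # names the key ID if it's missing
    gpg --keyserver keyring.debian.org --recv-keys <KEYID>    # then re-run the verify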
862[08:48:10] <Haohmaru> that only intensifies the h0rr0r
863[08:48:53] <spacejam> depends on the color of your hat i would say
864[08:49:02] <spacejam> lol
865[08:49:07] <EmleyMoor> Anything I need another OS for goes on a VM
910[09:09:06] <PaulePanter> For people experiencing bug #932861, you can get the system to boot by specifying `fastboot` on the Linux kernel command line, and then install the package *logsave* manually.
933[09:27:36] <Blockah> I want to use the verdana font but hexchat and every other application has nowhere near as many fonts as windows... Is there some sort of font package I need to install inside debian to unlock them?
986[09:39:58] <EmleyMoor> (however, you may well be right on the other point)
987[09:40:40] <Haohmaru> not sure why but debian tends to come with bitmap fonts *disabled* o_O
988[09:40:46] <Blockah> No idea? Just wanted verdana, but no new fonts showed up after installing fonts-arkpandora and reconfiguring fontconfig
989[09:41:25] <Haohmaru> try this, launch a text editor like leafpad/mousepad or whatever, check if you see the new fonts there
990[09:41:37] <Haohmaru> if yes - then maybe hexchat needs a restart
991[09:41:53] <EmleyMoor> Blockah: If you want actual Verdana you're going to have to use the msttcorefonts thing
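A sketch of what EmleyMoor means (the installer package lives in contrib, so that component has to be enabled in sources.list):

    sudo apt install ttf-mscorefonts-installer   # fetches Verdana, Arial, etc.
    fc-cache -f                                  # refresh the fontconfig cache
    fc-list | grep -i verdana                    # confirm the font is now visible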
1003[09:45:25] <EmleyMoor> I would generally use a monospace font for IRC, but when I'm accessing it through XMPP, I stick with the default in the client.
1004[09:45:28] <Haohmaru> i have the impression that i didn't have to close hexchat for new fonts to appear in it, but i might be wrong
1005[09:45:43] <jelly> you're not wrong
1006[09:46:03] <EmleyMoor> Blockah: Nothing like Windows. With that you'd have to reboot... this is just restarting one app
1007[09:46:06] <Haohmaru> just have to close the font picker dialog
1008[09:46:14] *** Quits: erzqy (~erzqy@replaced-ip) (Remote host closed the connection)
1010[09:48:32] <Blockah> EmleyMoor, A restart is a restart, a clear sign of people not knowing what they're doing. Imagine if apt-get update && apt-get upgrade required a reboot...
1032[09:53:05] <alkisg> It's like .png/.bmp/.jpg images, vs .svg images
1033[09:53:15] <Haohmaru> yeah
1034[09:53:28] <Haohmaru> i love "fixed 8" maaan
1035[09:53:30] <jelly> Haohmaru, freetype does not handle them well which is why they're disabled there. Apps that can use them natively already do regardless of that default setting
1062[10:01:40] <deadrom> hi all. hard time figuring out the disks on a server. machine has two SAS controllers, one attached to a storage array, one for internal drives, which has 2 disks. lsscsi -g tells me the real devices on these hpsa-driven drives are /dev/sdg1 and 2, but those should form a raid 1 set yet list different partitions
1068[10:03:14] *** Quits: yonder (~yonder@replaced-ip) (Remote host closed the connection)
1069[10:03:40] <EmleyMoor> deadrom: /dev/sdX are usually disks, /dev/sdX1, 2 etc are partitions thereof. Not saying yours mightn't be different but that's what I would expect
1072[10:04:26] <EmleyMoor> sfdisk -l /dev/sdg might help
1073[10:04:33] <Blockah> dmesg might help
1074[10:04:39] <Blockah> Ah
1075[10:05:00] <deadrom> EmleyMoor: server games. I'd rather expect /dev/cciss0/c0d1p1 or such but the hpsa module that loaded for this seems to work differently
1076[10:05:33] <EmleyMoor> deadrom: Why would you expect that? That doesn't sound very Linux-y
1080[10:06:39] <EmleyMoor> deadrom, That's not how *anything* usually looks in Linux to my experience
1081[10:06:43] <jelly> deadrom, that WAS how they looked up until about 8 years back. hpsa works like all the normal scsi controller drivers, and you get /dev/sd*
1092[10:10:02] <pragomer> because I lost my data and want to copy things from my home folder inside this encrypted image
1093[10:10:02] <deadrom> pragomer, is that a *disk* image or *partition* image?
1094[10:10:03] <jelly> pragomer, use losetup to create a block device from the image, then use the usual crypysetup luksOpen thing on that /dev/loopTHING
1095[10:10:12] <pragomer> its a disk image
1096[10:10:18] <pragomer> i tried losetup like this
1105[10:11:26] <deadrom> pragomer then you might need to tell losetup where to start and end, so it really only maps the partition, not the entire disk with mbr and all. see google on "mount partition from a disk image", you should find the proper commands. but: you need to know the partition offset and size
1106[10:11:27] *** Joins: paulus (~paulus@replaced-ip)
1116[10:13:07] <Blockah> I remember hearing from a nub, Windows is more secure than debian because of ESET NOD32 and linux is freeware garbage that anyone can hack even ssh is insecure with brute forcing... *I sighed* Pointed him towards IPTables and a facepalm
1117[10:13:14] <deadrom> pragomer just to rule it out, you adapted "partitionoffset"?
1119[10:13:42] <pragomer> I took the partition beginning offset, yes
1120[10:14:22] <jelly> I have no idea why losetup -o ... fails that way. RTFM might be useful or not.
1121[10:14:32] <deadrom> pragomer, does fuser -v /dev/loop[0..7] tell you anything about who might be hogging the loop devices? all of them being busy seems strange
1122[10:14:50] <deadrom> not sure if fuser works on loops
1123[10:14:59] <pragomer> ok, I will give myself some more tries... but thanks anyway. i know i am generally on the right track
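A sketch of the route jelly and deadrom describe, using losetup's partition scanning so the offset doesn't have to be worked out by hand (image, partition and mapper names are hypothetical):

    sudo losetup -f --show -P disk.img        # prints e.g. /dev/loop0, plus /dev/loop0p1, p2, ...
    sudo cryptsetup luksOpen /dev/loop0p2 rescue
    sudo mount /dev/mapper/rescue /mnt
    # when done: umount /mnt; cryptsetup luksClose rescue; losetup -d /dev/loop0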
1124[10:15:07] <Blockah> Classic nub lines: You do realize you're having these issues because AMD doesn't play nice right? -- Intel would have done it all for you
1176[10:25:33] <Habbie> it is also 9.3 * 1024*1024*1024 bytes
1177[10:25:36] <Habbie> gigabytes vs. gibibytes
1178[10:25:37] <lope> Why such bullshit measurements though
1179[10:25:46] <ratrace> GB vs GiB
1180[10:25:55] <Habbie> i'm not a fan of your word choices, lope, i'm trying to help here
1181[10:26:04] <lope> Exactly, why is the debian installer using GB instead of GiB
1182[10:26:42] <lope> Habbie, I've got nothing against you my friend. Just saying Debian shouldn't be using Hard drive manufacturers marketing department measurements
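The arithmetic behind the difference: drive vendors and the installer count in powers of ten, while most tools count in powers of two, so the same 4 TB drive shows up as roughly 3.64 TiB:

    echo $((4 * 10**12))              # 4 TB as sold: 4000000000000 bytes
    echo '4 * 10^12 / 2^40' | bc -l   # = 3.6379..., the "3.638 TB" figure from earlier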
1203[10:29:07] <deadrom> it adds a new layer, yes, but when you've done your things right it's the only layer you need to care about. fragmentation, what fs frags these days?
1204[10:29:30] <ratrace> lvm has nothing to do with filesystem fragmentation
1205[10:29:40] <ratrace> lvm is not a filesystem
1206[10:29:50] <deadrom> if you constantly fiddle with LV sizes they get fragmented, ok, but don't do that; make up your mind on what you need and stick to that
1207[10:30:01] *** Quits: madage (~madage@replaced-ip) (Remote host closed the connection)
1220[10:31:50] <lope> Abstract away where the data gets stored, but then doesn't actually do any background defragging of block positions
1221[10:31:59] <Haohmaru> wait, is this LVM thing about letting you change partition sizes without that requiring physically "moving" data back and forward?
1222[10:32:08] <gidna> with the netinstall have I to install all packages online during the installation phase or can I install the core and the other packages at a later moment?
1223[10:32:18] <lope> LVM's fine on a SSD
1224[10:32:20] <deadrom> I'd like to see a system where lvm has notable/measurable perf impact and then the history of what happened to it. but if you need on the fly size mgmt it's your current best shot. btrfs is a bit too fresh to trust it for prod use
1235[10:33:35] <lope> deadrom, well if you use LVM for VM's and you're constantly moving them around, cloning expanding, shrinking, then it happens.
1236[10:34:04] <lope> especially when you've got lots of users who do it on a whim
1237[10:34:21] <deadrom> lope, like I said, you shouldn't do that in the first place. size your use cases and keep them
1238[10:34:23] <lope> then LVM fragments like a normal filesystem
1239[10:34:37] <lope> deadrom, it's LVM's whole selling point
1240[10:34:38] <ratrace> that'd be some pretty extreme usage for that to happen
1241[10:35:07] <lope> saying don't change LV's too much on LVM is the same as saying LVM is almost pointless.
1242[10:35:27] <ratrace> the lvm selling point is that you can combine multiple pv's into one or more vgs which you partition into lvs. it never was about using lvs as if they were "files"
1243[10:35:35] <deadrom> my point exactly. just cause your car (yay! car analogy!) has adjustable ride height dampers you don't fiddle with them after every turn.
1244[10:35:40] <EmleyMoor> lope: Out of interest, how do I check my LVs for fragmentation?
1245[10:35:50] <deadrom> good question!
1246[10:36:04] <lope> EmleyMoor, I've got no idea, there's a tool to defrag LVM, do a search for it and see if it can tell you
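For EmleyMoor's question: LVM itself can report how many physical segments each LV is split into, which is the closest thing to a fragmentation check, no extra tool needed:

    sudo lvs -o lv_name,seg_count,devices   # one segment = fully contiguous extents
    sudo lvdisplay -m                       # maps every segment to its physical extents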
1247[10:36:37] <lope> ratrace, well some applications use LV's as if they are files
1248[10:36:39] *** Quits: lcabrera (~desarroll@replaced-ip) (Remote host closed the connection)
1249[10:36:50] <lope> (as you describe it)
1250[10:36:57] <lope> anyway, I'm gonna GBTW
1251[10:37:04] <lope> thanks for the chat
1252[10:37:20] <Haohmaru> so the LVM magic happens in the "device driver" .. grate
1253[10:37:28] <Haohmaru> runtime overhead ;P~
1254[10:38:09] <deadrom> applications should not have the right to alter LVs imo. that's like yanking the nerves in a horse to control its direction
1292[10:53:04] <netcrash> ratrace: jelly thank you, just updated and after booting by pxe from a host, it still shows a message that ipconfig can't find a device
1298[10:54:15] <jelly> netcrash, you will need to be more specific than that
1299[10:54:18] <jelly> !ask
1300[10:54:18] <dpkg> If you have a question, just ask! For example: "I have a problem with ___; I'm running Debian version ___. When I try to do ___ I get the following output ___. I expected it to do ___." Don't ask if you can ask, if anyone uses it, or pick one person to ask. We're all volunteers; make it easy for us to help you. If you don't get an answer try a few hours later or on debian-user@lists.debian.org. See <smart questions><errors>.
1301[10:54:25] <jelly> !paste
1302[10:54:26] <dpkg> Do not paste more than 2 lines to this channel. Instead, use for text: replaced-url
1304[10:54:43] <rapidsp> problem with openbox and lxde. in gnome and kde works good
1305[10:55:45] <deadrom> rapidsp, sounds like power management. kde and gnome do a lot of power maanagment on their own, with the @lightweights@ zou probablz need to tell the machine a few things on the command line
1308[10:56:29] <ratrace> zou kezboard changed lazout midsentence
1309[10:56:36] <netcrash> jelly: I have to boot this recent laptop into clonezilla to do a clone, the laptop uses the e1000e driver from intel which I now added to initramfs and ran update-initramfs -u ; lsinitramfs /boot/initrd.img-4.19.0-5-amd64 | grep e1000e shows the module is in initramfs. But the laptop on booting by this initrd says it can't find a device to configure
1310[10:56:36] *** Quits: conta (~Thunderbi@replaced-ip) (Ping timeout: 258 seconds)
1311[10:57:04] <deadrom> ratrace, circumstances force me to use the *world market leader OS*...
1312[10:57:12] <ratrace> android?
1313[10:57:35] <deadrom> actually not sure if I would like that better
1334[11:02:10] *** Quits: deadrom (d90608ec@replaced-ip) (Remote host closed the connection)
1335[11:02:45] <thunfisch> hey, quick question: for unattended upgrades, is there some way to configure staggering of upgrades across a group of hosts? i.e. i have a group of load-balanced app-servers and i want to have at most one of the servers upgrading at a time.
1336[11:03:12] <thunfisch> of course i can just configure offsets for the checks, but having some intelligence there would make me a bit more comfortable
1337[11:03:40] <ratrace> thunfisch: tune the .timers that start the service? you can also randomize around a point in time with them
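A sketch of the timer tuning ratrace means (on buster, unattended-upgrades is triggered by apt-daily-upgrade.timer; the window below is only an example, and a different fixed offset per host group approximates "one at a time"):

    # systemctl edit apt-daily-upgrade.timer
    [Timer]
    OnCalendar=
    OnCalendar=02:00
    RandomizedDelaySec=4h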
1338[11:04:03] <thunfisch> yeah was hoping to avoid something like that. seems really crude.
1339[11:04:28] <ratrace> meanwhile, i recommend never using unattended anything on a server where you clearly require serialization of actions. ansible, saltstack, chef, puppet, ...
1340[11:04:45] <jelly> thunfisch, no, that thing is really designed more for single workstations than parallel installations
1341[11:04:52] <ratrace> thunfisch: it's unattended, it doesn't get any more sophisticated than that
1342[11:04:59] <thunfisch> alright. thanks.
1343[11:05:10] <netcrash> ratrace: jelly I'm trying to boot the clonezilla menu to do a clone of the disk, I placed an initrd from kernel 4.19, but busybox shows kernel 4.9
1344[11:05:37] <thunfisch> any other recommendations? we have to manage updates on roughly 700 servers. old way was puppet running an apt command periodically, but that makes me frown.
1351[11:06:58] <ratrace> thunfisch: i don't think any other solution exists logically. unattended upgrades is run by a timer. either you tune the timer or you don't run that but use automation to schedule updates actively.
1356[11:07:48] <ratrace> there is no fairy dust you can sprinkle on your servers for them to grow consciousness and start greeting each other like the fish in the monty python sketch, while also coordinating updates among themselves.
1362[11:09:10] <thunfisch> any experiences with nexus for apt repos? something where I could freeze a set of packages so that all machines have the same version would be great.
1363[11:09:13] <ratrace> thunfisch: you can serialize with ansible, no need to abstract with jenkins
1364[11:09:24] *** Quits: tyranny12 (~blarg@replaced-ip) (Quit: No Ping reply in 180 seconds.)
1367[11:10:39] <colo-work> we collect upgradable packages once a day across the fleet, and then (manually) review and curate a whitelist of packages to be upgraded. another job then iterates over all our hosts, and applies whitelisted upgrades only.
1371[11:10:51] <thunfisch> jenkins will just run ansible for us. we're a team of 10 people, so a central place to run stuff like that and keep logs is great.
1386[11:12:11] <thunfisch> also, team has knowledge of ansible
1387[11:12:18] <thunfisch> um, can't confirm
1388[11:12:26] <netcrash> jelly: busybox continues to say kernel 4.9
1389[11:12:28] <plantroon> run it in parallel and it will go ok
1390[11:12:33] <ratrace> but if ansible works for you, fine, sure
1391[11:12:37] <thunfisch> just increase the tasks and you're good
1392[11:12:47] <thunfisch> and don't build stuff that blocks across hosts
1393[11:13:10] <ratrace> plantroon: i am but when you need to gather facts and coordinate among them, it's gonna be very, very, very slow. we switched from ansible to saltstack and couldn't be happier.
1394[11:13:45] <thunfisch> i don't know, full run with complete setup for monitoring, logging, applications, etc takes about 3 mins here
1403[11:15:13] <thunfisch> well, we're migrating away from old infra with 1404 and puppet still. so i was looking at a new way to do upgrades and maybe have even more control so i was just exploring options
1404[11:15:39] <thunfisch> how I'm implementing them is imo specific to the project anyways, so I'm excluding stuff like ansible in such cases mostly.
1405[11:15:44] <ratrace> unattended upgrades is the very opposite of "more control". just ditch it and do it properly with active tasks from your automation
1409[11:16:28] <thunfisch> colo-work: how do you collect upgradable packages? just get a list of available upgrades directly on host, gather, dedup and whitelist?
1410[11:16:34] *** Quits: tyranny12 (~blarg@replaced-ip) (Quit: No Ping reply in 180 seconds.)
1411[11:16:40] <fireba11> hm .. tried unattended-upgrades a while back and it didn't do what i wanted, ended up using cron-apt for everything :-P
1414[11:17:29] <fireba11> i graduated from getting mails with available updates to automatically installing all updates a while ago with basically 0 issues
1418[11:18:22] <EmleyMoor> When I try to do lvm fullreport (among other things), I get four errors: /dev/sd[j-m]: open failed: No medium found - how have these been picked up on and how do I get them "out of the way"?
1420[11:18:31] <jelly> I've seen unattended-upgrades on ubuntu before it came back to debian and decided it was not to be trusted for an enterprise environment
1422[11:19:06] <ratrace> ubuntu itself is not to be trusted for an enterprise environment
1423[11:19:26] <jelly> disagreed, but also offtopic
1424[11:19:34] <ratrace> not just plain trolling ; we have some ubuntu servers in our fleet and this latest mid-lts-release bump of openssl to 1.1.1. broke tons of stuff and not just for us
1425[11:19:56] <colo-work> thunfisch, pretty much like that, yes.
1426[11:20:00] <ratrace> not the first time either that a mid-lts-release breaks a ton of stuff ; agreed though, offtopic
1427[11:20:22] <colo-work> thunfisch, on each host, we execute apt-get -qq update; apt-get -qq --simulate dist-upgrade
1428[11:21:10] <thunfisch> colo-work: alright. I think i will rather go with a central nexus for apt mirroring and release control. i like the idea of having the exact same packages across all hosts. can then pick a canary group for upgrades first maybe.
1434[11:21:54] <colo-work> that's pretty much necessary for any nontrivial number of hosts that consume repos, i think
1435[11:22:03] <netcrash> jelly: I placed the kernel I'm using in the boot, now initramfs loads the network device but after 2 sec it gives up on getting a dhcp ip, and the boot fails
1436[11:22:13] <colo-work> but we don't "whitelist" packages and transfer them from upstream (debian) into repos of our own
1441[11:24:06] <colo-work> (and with those hosts on the same LAN)
1442[11:24:12] <EmleyMoor> I also get similar errors for /dev/sdc on two other machines
1443[11:24:28] <fireba11> colo-work: ah, we're at about 40 debian installations and the last time i was thinking about an apt proxy i decided not yet worth the effort :-D
1455[11:29:13] <EmleyMoor> Hmmm... in my case it *could* be my card reader causing one of the errors... and as it doesn't work properly anyway, I will unplug it at next downtime. On the other machine(s) affected, one has a working built in card reader, the other has one in its mobile modem... so I'm treating this as "low nocuity"
1477[11:42:53] <netcrash> The kernel loads into initramfs, fails to get an ip via ipconfig because of the 2 sec timeout; if I execute ipconfig on the interface after that, it gets an ip
1478[11:42:54] <at0m> petererer: not apt-cacher-ng/acng ?
1481[11:46:08] <thunfisch> petererer: still waiting on feedback from a few colleagues, but nexus apt repository with snapshot feature seems to be perfect for this.
1491[11:49:20] <EmleyMoor> When I plug my mobile phone into my desk/laptop I can choose to access its files. I can handle it in nautilus pretty much like any other storage. Is there anything I can do, without compromising that functionality, to access it from a shell too?
1515[12:00:12] <at0m> EmleyMoor: sure can be done over BT if you enable file sharing on the phone. fwiw, i found kdeconnect most convenient (wifi, bt)
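For EmleyMoor's original question: when nautilus mounts the phone it goes through gvfs (MTP), so the same files are usually reachable from a shell as well; the exact directory name varies per device, and jmtpfs is an optional package:

    ls /run/user/$(id -u)/gvfs/      # the phone shows up as something like mtp:host=...
    # or mount it yourself:
    jmtpfs ~/phone && ls ~/phone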
1516[12:00:32] *** Parts: paulus (~paulus@replaced-ip) ()
1517[12:00:53] *** Quits: conta (~Thunderbi@replaced-ip) (Ping timeout: 245 seconds)
1518[12:00:56] <fireba11> colo-work: you're right ... seems really easy. what i could use more is a windows update proxy, but those are "!$%"§$% software so i don't feel motivated setting one up *g*
1561[12:18:56] <otyugh> I was quite sure I had read that the debian project doesn't officially support upgrading an oldstable to stable and promotes a fresh install; but I can't find confirmation anywhere. Did I imagine that? (I need to apologize to some people if that's so :-s)
1580[12:26:00] <Lirion> about that staging stuff... nexus is not free to use, is it? and apt-cacher-ng has this nasty https incapability - i mean, i don't want to break pfs, but can it talk _and_ be talked to through https? and then it still doesn't have staging.
1581[12:26:19] <Lirion> I'm thinking out loud b/c I'd be very curious about something that works and is (f)OSS
1582[12:26:31] *** Quits: xcm (~xcm@replaced-ip) (Remote host closed the connection)
1806[13:56:51] <colo-work> new `su` implementation in buster, afair
1807[13:56:59] *** Quits: oish_ (~charlie@replaced-ip) (Quit: Lost terminal)
1808[13:57:02] <FinalX> su -kl (or su - for short) gives you a login shell instead of a shell with your own environment. as if you logged in as root itself.
1809[13:57:03] *** Quits: v01d4lph4 (~v01d4lph4@replaced-ip) (Remote host closed the connection)
1810[13:57:08] <FinalX> su -l*
1811[13:57:11] <FinalX> same goes for sudo -i
1812[13:57:53] <FinalX> as opposed to su -m/su -p, where you keep your own.
1935[14:28:41] <FinalX> What do you want it for? Desktop, server? Debian waits a long time till everything is "done" and ditches perfectly good things that people still want/use while Ubuntu does their best to try and keep it. Meanwhile, Ubuntu pushes out LTS's long before they're actually stable, and about half a year later it's really stable and LTS :P
1936[14:28:44] <emOne> but I do like debian for the stability
1938[14:29:10] <pyfgcr> everything is fine on ubuntu, until you hit a dist upgrade and "15674 packages can be upgraded", one breaks everything and you have no clue which it is.
1939[14:29:11] <Habbie> FinalX, but ubuntu does not promise security updates for most of those things they keep :)
1940[14:29:18] <FinalX> But software-wise: Ubuntu 16.04, year later: Debian release, 18.04: Year later, Debian release, and so on.
1941[14:29:20] <pyfgcr> this also applies to debian stable though
1943[14:29:38] <Habbie> pyfgcr, don't randomly upgrade your OS, goes for any OS
1944[14:29:54] <FinalX> Well, you could upgrade to the non-LTS versions of Ubuntu, or run Debian testing. Neither I would recommend on production servers, but hey /shrug
1945[14:29:56] <pyfgcr> Habbie: uh?
1946[14:30:05] * alkisg wishes both distros would freeze at the same time every two years, and then each of them would "release when it's ready"; it would allow developers to test one version for both distros...
1951[14:30:32] *** Parts: dunn (~obed@replaced-ip) ()
1952[14:30:35] <FinalX> and KVM in 16.04 was newer than in Debian at the time, so kvmhosts ran Ubuntu ;)
1953[14:30:43] <pyfgcr> not randomly, but eventually any non-rolling distro will reach end of life
1954[14:31:02] <FinalX> Meanwhile I run Windows 10 on my laptop, sue me ;) I use whatever is best for the task at hand and stopped caring about a lot of random noise a long time ago.
1955[14:31:18] <ratrace> i try ubuntu in a vm every six months to see what new thing they've invented as ubuntu-only unneeded abstraction; and every time i find myself in enemy territory, clueless, thinking "is this even based on any linux at all"
1961[14:32:16] <FinalX> We use Ubuntu on a small number of servers, precisely for that reason.. they often hurt less to upgrade, we run into new things, work them into Puppet for when we get the next Debian :P
1962[14:32:29] <alkisg> snaps and netplan and the other experiments will probably make a lot of ubuntu users switch to debian..
1972[14:33:24] <alkisg> Netplan broke a lot of installations; chromium-browser will be packaged as snap-only; etc etc, it's getting chaotic...
1973[14:33:28] <FinalX> It's really nice to have everything neatly in one place.
1974[14:33:40] <pyfgcr> Habbie: I still don't get the meaning of your previous message
1975[14:33:42] <allan_wind> Hi guys, after the buster upgrade, I have these events in syslog "packagekit.service: Main process exited, code=killed, status=15/TERM".... any ideas?
1976[14:33:43] <FinalX> In one overview.
1977[14:33:46] <ratrace> one place like /etc/network/interfaces ?
1978[14:33:48] *** zodd is now known as Guest36424
1979[14:34:04] <ratrace> or /etc/systemd/network/ if one prefers that
1980[14:34:05] <FinalX> That's definitely not having everything neatly in one place.
1981[14:34:08] <Habbie> pyfgcr, debian has a policy of shipping security patches for all the software; ubuntu only makes that promise for main
1988[14:35:32] <ratrace> FinalX: where else is the network config then?
1989[14:35:45] <FinalX> I love how there's always some people that just love to lash out and bash things they haven't really used or even looked at, just because it's new. I have a few coworkers like that, it's very tiresome. Changes come, it's part of your job and life. Some are good, some aren't, but keep an open mind and test things yourself before judging something you haven't but just "don't like because it was forced upon me".
1990[14:35:45] <pyfgcr> Habbie: was it an answer to "everything is fine on ubuntu"?
1991[14:35:57] <FinalX> ratrace: netplan.io, go
1992[14:36:07] <Habbie> pyfgcr, no, "don't randomly upgrade your OS" was a response to that
1993[14:36:19] <ooAoo> may i know whether we are able to recover our system by copying the hdd if we lost the grub2 password?
1994[14:36:23] <trek00> allan_wind: are you using the packagekit service? (something to update packages)
1995[14:36:24] <ratrace> FinalX: i've not only looked into that, i've looked into the source code as well ; it does nothing by itself, only converts yaml to networkd config.
1997[14:36:36] <FinalX> Then you tell me where you'd put a lot of those things, and don't come to me with "I can run an up-script from /etc/network/interfaces", because that's simply *NOT* done from there, then.
1998[14:36:41] <ratrace> and networkmanager, but i'm not concerned about desktop
2022[14:39:12] <FinalX> ratrace: Because you can't do many things in /etc/network/interfaces (that YOU brought up first) *WITHOUT* an up-script, or praying that shit might work instead of verifiably working, as with netplan.
2023[14:39:22] <trek00> allan_wind: it should be a bug, but I don't know the package
2024[14:39:29] *** Quits: mkowalski (~mkowalski@replaced-ip) (Remote host closed the connection)
2026[14:39:43] <FinalX> And separating things into /etc/network/interfaces and /etc/systemd/network is precisely why netplan exists. To unify everything network-related.
2027[14:39:45] <ratrace> FinalX: but netplan doesn't DO anything like ifupdown does, it LITERALLY ONLY configures networkd units, which you can do directly, that's my whole point
2029[14:39:51] <pyfgcr> Habbie: ok, I was just trying to understand your point
2030[14:40:00] <alkisg> FinalX: actually netplan was breaking network booting for me, because they put code in initramfs-tools that had bugs (e.g. LP #1763608); I wouldn't complain if I didn't have to spend 50 hours in the last two years trying to fix breakage because of netplan, snapd etc..
2031[14:40:00] <alkisg> Innovation is great, but they should always be a migration period where things don't break a lot until the new package gets adopted by most of the users
2032[14:40:08] <FinalX> ratrace: Stop trying to validate your own bitterness to people, it's just plain sad.
2034[14:40:24] <trek00> ooAoo: raw copying the hdd should let you back up the filesystem, but not boot; you need to reinstall grub to recover the boot process
2035[14:40:24] <ratrace> FinalX: it's a C program that takes a yaml file and produces systemd-networkd units under /run/systemd/network/
2036[14:40:37] <ratrace> only that and nothing more
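To illustrate the point, these two configs are roughly equivalent, one fed to netplan and one written for systemd-networkd directly (interface name and addresses are hypothetical):

    # /etc/netplan/01-eth.yaml
    network:
      version: 2
      ethernets:
        eth0:
          addresses: [192.0.2.10/24]
          gateway4: 192.0.2.1

    # /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0
    [Network]
    Address=192.0.2.10/24
    Gateway=192.0.2.1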
2056[14:50:28] <rpifan> Man my system got hosed. I updated it. It went to hibernation mode. Got stuck; now every time i try to boot it gets stuck at initramfs, saying resuming from hibernation then logsave not found
2115[15:06:27] <lope> bcache delivers SSD class performance at the capacity of a larger hard drive
2116[15:06:33] <lope> that's the whole point of bcache
2117[15:06:49] <lope> L2ARC has very limited usefulness
2118[15:06:55] <ratrace> it's literally the same thing
2119[15:07:03] <lope> it's not in any way comparable to bcache
2120[15:07:04] *** Quits: xcm (~xcm@replaced-ip) (Remote host closed the connection)
2121[15:07:41] <lope> on bcache, if you write data, the SSD takes it. If you read data again immediately, the SSD supplies it, thus SSD speed on bcache.
2125[15:08:21] <lope> On ZFS, you write data, it goes into the SSD, you read data, you wait... now the zpool writes the data out to the hard disk, you still wait. now the zpool reads the data back from the HDD, now you can have the data
2145[15:11:20] <ratrace> i don't know anyone using bcache and a number of tech industry leading companies using zfs; which bts is the filesystem in the world's most used NAS solution
2146[15:11:33] <ratrace> %s/bts/btw/
2147[15:11:46] <lope> yeah ZFS is extremely reliable
2148[15:11:49] <lope> no question about that.
2149[15:11:53] <ratrace> but anyway ; what you're describing is just the difference between write-through and write-back caching modes
2165[15:15:27] <ratrace> i'd recommend you try both ; you'd be surprised how performant zfs l2 is ; i don't know who fed you lies about l2 not speeding anything up, but it's literally its only job and it's doing it very well
2166[15:15:28] <lope> I think the only way I'd run bcache is in a multi-node ceph
2168[15:17:02] <ratrace> then again .... my experience is zfs on freebsd ; i don't have much experience with zfs on linux; their issue tracker is a very scary place ; we have one debian+zfs server in production but that's a backup of a backup and it's simple, no L2 stuff
2169[15:17:07] <lope> ratrace, guys on #zfsonlinux basically said: for VM's and DB's 1. l2arc is pointless. 2. the best caching is inside the VM's themselves (whatever that means). 3. Throw lots of RAM at ZFS and make use of large L1ARC
2176[15:18:06] <lope> How would you suggest I size the log cache and L2ARC?
2177[15:18:12] <ratrace> i mean if they said it ; on freebsd it performs very, very well ; we have a file server that once primed up, doesn't touch HDDs at all except an occasional write
2178[15:18:15] <lope> They said having a log bigger than 1GiB is pointless.
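For reference, attaching the two devices under discussion is one command each; the pool name and partitions are hypothetical, and the sizes only follow the advice quoted above:

    zpool add tank log /dev/nvme0n1p1     # SLOG; ~1 GiB is said to be plenty
    zpool add tank cache /dev/nvme0n1p2   # L2ARC; whatever SSD space is left over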
2219[15:23:48] <ratrace> lope: you should always keep good backups anyway ; even zfs has had some data eating bugs lately ; one of which even affected our gentoo systems ; backups ftw
2220[15:23:49] <lope> ratrace, the only way I'd be comfortable is to have my data on different kernels
2221[15:24:17] <ratrace> lope: file server for a mobile application
2222[15:24:38] <lope> So I'd have to run one side of my bcache pair inside a VM or on another machine, mounted over NBD
2223[15:24:58] <lope> or just have a single bcache on each machine with ceph on top
2224[15:25:50] <lope> either way it's all very complicated. And I just need to get something running quickly right now
2237[15:27:32] <lope> surely zfs goes through validation before it goes to stable releases?
2238[15:27:43] <lope> just like bcache?
2239[15:27:47] <ratrace> lope: note that SLOG is only useful to offer ssd-like write performance for bursty activity ; the data needs to sync up to slower hdds anyway
2241[15:28:19] <lope> oh, I just had an idea. I don't know if this makes sense
2242[15:28:38] <lope> What if I run a pair of bcache with zfs on top, and before upgrading bcache or the kernel I change the cache mode to write through
2243[15:28:47] <lope> so there's zero risk of bcache corrupting anything
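The mode flip lope describes is a single sysfs write on a live bcache device (bcache0 is an assumption; in writethrough mode the cache never holds the only copy of a block):

    echo writethrough > /sys/block/bcache0/bcache/cache_mode
    cat /sys/block/bcache0/bcache/cache_mode   # lists the modes with the active one in brackets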
2244[15:28:51] <ratrace> lope: define "stable" releases ; debian? yeah i suppose so ; but upstream? the issue was on 0.7.x "considered stable" branch, not even the-then-experimental 0.8.x
2266[15:33:02] <lope> I was just thinking to keep backups on the same machine, to speed up the ability to fix anything if there is a problem.
2267[15:33:11] <ratrace> we're evaluating btrfs though ; zfs is awesome on freebsd ; scary on linux and the license is a meh which makes its use in certain rented-hardware situations very painful
2268[15:33:56] <lope> actually the HDD split idea is crap, from an IOPS perspective.
2270[15:34:17] *** ghost43_ is now known as ghost43
2271[15:34:39] <lope> it also means the system can't keep running and be shut down gracefully if a problem emerges with any of 1. kernel, 2. ZFS, 3. Bcache
2280[15:36:19] <ratrace> so just stick 'em all on SSDs and be done?
2281[15:36:31] <lope> ratrace, one day when budget allows
2282[15:36:39] <lope> So getting back to sizing of log and cache
2283[15:36:46] <lope> My log will be 1GiB
2284[15:36:56] <lope> then cache? you've made yours as big as possible?
2285[15:37:02] <lope> basically
2286[15:37:06] <ratrace> i think you should re-do the budget equation and include time==money variable, for all these levels of abstraction you're considering :)
2287[15:37:17] <lope> you said you've got 300G in your 512G log cache device
2288[15:37:25] <lope> how much of that cache is actually being used regularly?
2301[15:40:05] <ratrace> yea i can't run zfs-stats at the moment because it'd have to go through our salt stack which is currently offline for testing
2302[15:40:57] <ratrace> anyway it's not block level hit count but various caches hit and miss ratios, as produced by the zfs-stats script
2343[15:58:05] <carp_> Hi, I upgraded to Buster and now when I lock my computer, my screen doesn't wake up again. I noticed during the Buster upgrade that the fonts went buggy. Also when I open firefox now, there is a graphical glitch momentarily. Has anyone else had this?
2364[16:05:03] <dpkg> Where possible, Nvidia graphic processing units are supported using the open source <nouveau> driver on Debian systems by default. To install the proprietary "nvidia" driver, see replaced-url
2365[16:05:39] <trek00> carp_: see that page to install nvidia non-free drivers
2366[16:05:54] *** Quits: v01d4lph4 (~v01d4lph4@replaced-ip) (Remote host closed the connection)
2367[16:06:19] <ooAoo> trek00: which encryption techniqque should i use for filesystem?
2369[16:07:26] <carp_> trek00 I have not installed nvidia non-free drivers. I have kept the computer fully free open source. Strongly hoping to keep the system as close to default and fully free open source if possible.
2371[16:07:54] <trek00> ooAoo: there are many types, it depends if you want everything encrypted or only some directories/files, but if you lose the password/key your data will be lost
2372[16:08:35] <ooAoo> trek00: do you think auto mount encrypted disk is a risk on bootup?
2373[16:09:02] *** debhelper sets mode: +l 1558
2374[16:09:17] <trek00> carp_: I like too having all opensource, but if you have graphicals problems you should try to report them or install non-free drivers
2377[16:10:17] <trek00> ooAoo: when encrypting something, if you lose the password you can no longer access your data; it's not as simple to recover as a lost grub password
2378[16:10:46] <carp_> trek00 right, thanks for the tips. I will look into this further.
2379[16:11:04] <ooAoo> trek00: ok
2380[16:11:46] <trek00> carp_: intel/amd graphics cards are really usable with free drivers, whereas for nvidia it really depends whether the chipset support is in good shape in the free nouveau driver
2383[16:13:23] <carp_> trek00 right. Strange that it worked on jessie and stretch but has now stopped? Or I suppose things like this happen from time to time.
2406[16:22:28] <carp_> trek00 right. I will have to learn how to use this computer properly so may as well try to do that. (im a windows user, have been using debian instead for a couple of years but only the very basic default setup and i have simply avoided downloading anything or moving my files over still).
2500[17:03:35] <_2E0LNX> I've upgraded my zoneminder box to buster, and now it's not playing nice. Looks like there's an issue with php's mysql extension... Where to start poking to fix?
2552[17:22:40] <charking> Hello. I want to report a bug for a package in Buster, but there is already a bug for the same problem in Stretch marked unreproducible. Should I create a new bug report, or use the existing bug report?
2553[17:23:08] <charking> Different version of same software between Buster and Stretch.
2580[17:29:44] <_1byte> I'm trying to install i386 onto a bootable CompactFlash drive via a Debian host machine. I have tried to format the MBR and used dd to write the iso to the drive, however I can not seem to get it to boot in the client machine (Pentium 75 with 8mb ram). I have made sure in fdisk that the drive is flagged as bootable, but I'm not sure what else to try. Any thoughts?
2583[17:30:56] <Mathisen> _1byte, some more error info would be a good start
2584[17:30:57] <jhutchins_wk> _1byte: 1) Formatting is overwritten by copying the iso, which is a pre-formatted image. 2) Be sure you are copying the iso to the device, not a partition.
2597[17:33:29] <alkisg> The last low-spec I managed, was Debian Wheezy in 32 MB RAM
2598[17:33:38] <alkisg> So you'll need a lot older than that
2599[17:33:50] <greycat> Current Debian releases will not work on 8 MB of RAM. Not even remotely close. And the current release also won't work on a first-generation Pentium.
2602[17:34:34] <greycat> Wheezy officially required 64 MB, but if alkisg managed it on 32 MB, I can believe it. Jessie/systemd does not work on 64 MB. I know this first hand.
2603[17:34:52] <_1byte> Oh, I just checked, Memory 32512 KB
2621[17:37:03] <alkisg> I see that was 42 MB, not 32... but it might work with 32 without xorg, don't remember
2622[17:37:03] <dob1> hi, is there a command that I can use other than echo >> to append data at the end of a file? I am worried I'll forget a > on the echo command and delete all the content of the file
2632[17:38:09] <alkisg> winny: it was actually a VM; but then I cloned it to around 50 old systems that I had with low specs (windows 98/xp machines that schools wanted to "upgrade")
2633[17:38:11] <dob1> greycat, but if I forget the -a it's the same as forgetting a >
2634[17:38:30] <dob1> I will create an alias maybe
2635[17:38:55] <alkisg> winny: I think the oldest were amd k6 and pentium 2
2636[17:39:04] <jhutchins_wk> dob1: Add a step to back up the file before you write to it.
2637[17:39:07] <winny> alkisg: how did you manage such low memory usage? On my gentoo based low memory install, booting to ~70MiB was all I could manage (to icewm)
2638[17:39:17] <jhutchins_wk> dob1: Make backups, keep notes, get the commands right in the first place.
2639[17:39:18] <_1byte> Maybe I can try to compile a stripped down kernel?
2641[17:39:57] <alkisg> winny: I avoided all system services by starting in "recovery mode" (kernel parameter=single), where most services don't start, and then ran startx from there, thus also avoiding a display manager
2643[17:40:08] <ratrace> _1byte: that'd work ; i have a gentoo desktop with kernel built just for that machine's hardware, it's around 6MB
2644[17:40:19] <jhutchins_wk> _1byte: The kernel itself is pretty "stripped", it uses dynamically loaded modules.
2645[17:40:43] <dob1> jhutchins_wk, I will create an alias for tee -a
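A sketch of the tee -a approach dob1 settles on (the alias name is arbitrary; tee also copies to stdout, hence the redirect):

    alias append='tee -a'
    echo 'new line' | append /path/to/file >/dev/null   # appends; there is no clobbering variant to mistype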
2646[17:40:45] <winny> does debian still use eglibc? maybe that might have something to do with that
2647[17:40:55] <alkisg> _1byte: the biggest problem is the initramfs, because at some point you have this in ram: kernel, initramfs, AND uncompressed initramfs
2648[17:41:25] <alkisg> _1byte: so before starting to compile kernels etc, start by (1) testing as it is, (2) stripping down the initramfs itself
2649[17:41:29] <ratrace> is initramfs really needed
2650[17:41:37] <_1byte> HEY! Graphical Install is working! After writing to the device instead of the partition
2652[17:41:55] <koollman> alkisg: isn't there a way to provide an uncompressed initramfs directly ? (so no extra space required). or no initramfs at all, too
2653[17:42:06] <alkisg> _1byte: which debian vesion are you trying to install?
2654[17:42:45] <alkisg> koollman: no initramfs should be doable, e.g. raspbian does it; but in some cases it might require a custom kernel to access the hardware
2655[17:42:54] <ratrace> unless you need tools to unlock rootfs for the kernel to pivot (like scripts and userland commands), you don't really need initramfs
2683[17:48:30] <greycat> even if you manage to get Debian running on that 32 MB machine, what are you going to *do* with it?
2684[17:48:32] <mureena> tfw you still see Debian Sarge in production
2685[17:48:42] <lope> I converted my raid MBR partition to GPT with `sgdisk -g /dev/sdc` it converted. But I tried to install grub now and it failed. "grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible."
2686[17:48:53] <_1byte> I need it as a serial console to work on my SGI machines lol
2687[17:48:54] <lope> Surely grub should be able to create the necessary GPT partition for me?
2688[17:49:27] <alkisg> lope: grub doesn't create gpt partitions; use gparted for that; then grub will use them
2704[17:52:26] <alkisg> (that is because grub-efi uses the efi partition, which is a vfat one; well if you don't have that either, you need either a vfat efi partition and grub-efi, or the special bios boot partition and grub-pc)
2766[18:07:36] <lope> ratrace, I've got linux soft-raid1 on /dev/sdc1 and /dev/sdd1 (it was MBR) I converted to GPT, created a 3M partition, marked it as bios_grub, reinstalled grub to sdc and sdd (it said success). All seems fine. I double checked my mdadm status and double checked that my md0 UUID did not change and /etc/fstab is fine. Will I be able to boot? Did I forget anything?
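For reference, the bios_grub step lope describes can also be done with sgdisk (the partition number 128 is just a common convention, and ef02 is the BIOS boot partition type code):

    sgdisk -n 128:0:+3M -t 128:ef02 /dev/sdc   # create and type the BIOS boot partition
    grub-install /dev/sdc                      # embedding now has somewhere to live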
2789[18:12:57] <ratrace> lope: btw, grub-install should go first, because it prepares the dirs for update-grub to work on ; i suppose it didn't matter in this case as you already had /boot laid out
2790[18:13:14] <alkisg> dpkg-reconfigure grub-pc is better
2791[18:13:21] <lope> ratrace, haha, well I repeated all the commands a 2nd or 3rd time for good luck
2792[18:13:24] <alkisg> It remembers the disk where grub is installed too
2793[18:13:28] <lope> just incase I got the order wrong :p
2794[18:13:45] <lope> ratrace, ah yeah, /boot was already there
2798[18:15:01] <AlpacaFace> Anyone having good experiences with nvme ssd? Recommendations please? I tried the Corsair MP510, incompatible. Shows up in bios, but nothing showing up in lspci/lshw/fdisk/nvme-cli. Most Ive read say even if they work, the performance is horrific in linux compared to Win, where they are optimized. That on linux, NVME performance over SATA is negligible as of recent. Any opposing experiences please?
2799[18:15:17] <ratrace> lope: update-grub literally just calls grub-mkconfig -o /boot/grub/grub.cfg so if /boot/grub is not there (and it's created by grub-install) it'd fail ; and i guess alkisg's suggestion is maybe even better
2805[18:15:57] <alkisg> dpkg-reconfigure grub-pc runs both grub-install and update-grub, and additionally remembers where grub gets installed and allows multiple disks too, and has a debconf/whiptail menu :D
2807[18:17:29] <lope> AlpacaFace, I've not experienced any nvme SSDs but done quite a bit of research on them. The Corsair MP510 indeed looks nice, however it's got a high idle power consumption, so not suitable for mobile. And it produces a huge amount of heat so needs a good cooling solution. However it's a very solid reliable choice, providing a stable ~1GB/s write speed across the entire capacity of the drive and a very good TBW rating.
2814[18:18:43] <lope> On the other hand the XPG (Adata) SX8200 is a very impressive drive, with much faster write speeds of about 3.2G IIRC for about 70G (IIRC) until it runs out of empty write cache then drops down to probably SATA speeds of like 500~700MB/s IIRC
2817[18:18:59] <no_gravity> Hello! I have made /etc/resolv.conf a file instead of a link, so it does not get overwritten anymore. Is there a way to undo that?
2818[18:19:06] <lope> But the TBW is lower as well, it's not as suited to server use as the Corsair MP510
2821[18:19:38] <lope> koollman, IIRC 3.5W idle power consumption vs Samsung, which is like 0.6W
2822[18:19:42] <ratrace> no_gravity: remove the file and allow network manager to recreate the link? i think it's NM that creates it, and it's only for systemd-resolved ; i could be wrong
2823[18:19:56] <lope> The SX8200 idle power was around 1.7W IIRC
2824[18:20:04] <no_gravity> ratrace: I tried to remove it and reconnect, but it did not get created again.
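If the goal is to put the symlink back, the target depends on what manages DNS on that machine, which wasn't stated; two common cases as a sketch:

    # systemd-resolved:
    sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    # resolvconf package:
    sudo ln -sf /run/resolvconf/resolv.conf /etc/resolv.conf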
2842[18:22:57] <lope> the NVME is just a bit of extra bling factor to the performance.
2843[18:23:03] <lope> But the real benefit is HDD vs SSD
2844[18:23:45] <AlpacaFace> Ive seen some of the Phoronix benches, mostly ubuntu, and not really relatable. Ubuntu is supposed to support the mp510, but not showing up in debian. But then Ive also read someone spoke to a Corsair rep that says it is not supported in Linux. So who can know for sure. Thanks for the responses, ppl. Much appreciated. My ssd looks to be on its way out, so just preparing :)
2845[18:23:47] <lope> I'd much rather eliminate HDDs and will gladly settle for SATA SSDs than try get some small amount of NVMe
2866[18:27:46] <lope> I mean imagine XPG SX8200 Pro performance full cloning a VM?
2867[18:28:00] <lope> 3.2GiB/s write speed for up to 70G
2868[18:28:15] <ratrace> lope: i think you're misunderstanding ZIL/SLOG ; it's only the first device that takes in blocks, so it handles bursts ; but then it syncs them back to hdd
2887[18:31:19] <lope> non sync writes don't touch ZIL
2888[18:31:25] <ratrace> really just.... try it and see how your workload behaves ; please don't listen to random people on IRC especially when you're hearing conflicting info
2889[18:31:26] <lope> so non sync writes will block until the HDD takes them
2890[18:31:34] <ratrace> lope: depends on the logbias setting
2898[18:32:52] <lope> I hear you, gotta test things
2899[18:32:54] <ratrace> lope: well wait, non-sync writes are not committed anywhere, they're dirty pages in ram
2900[18:33:24] <ratrace> so you've got that buffer there, RAM ; whether that then goes through zil at kernel's discretion, i can't tell you with 100% certainty right now
2912[18:34:45] <lope> I'm not saying ZFS is bad performance, it's good.
2913[18:34:48] <lope> But bcache is better
2914[18:34:49] <ratrace> i think, but again i am not 100% sure, when the kernel dumps the dirty pages, they do go through zil first, for logbias=latency
2915[18:35:30] <lope> ZFS performance relies on having RAM to cache reads and writes, plus a tiny bit of SSD for the ZIL, and then L2ARC, which there are mixed reports about
2916[18:35:33] <ratrace> lope: it could be ; i just want to correct some misinfo there about zfs not benefiting from L2/ZIL at *all* :)
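To make the logbias point concrete, a sketch with a hypothetical pool/dataset (tank/vms) and SSD (/dev/nvme0n1):
    zfs get sync,logbias tank/vms        # see how writes are currently handled
    zfs set logbias=latency tank/vms     # small sync writes hit the ZIL/SLOG first
    zfs set logbias=throughput tank/vms  # bypass the SLOG, write blocks straight to the pool
    zpool add tank log /dev/nvme0n1      # attach the SSD as a dedicated SLOG device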
2917[18:35:41] <lope> With Bcache the SSD is taken advantage of fully
2918[18:35:55] <lope> so performance is not limited by how much RAM you have
2919[18:36:03] <ratrace> but the real question here is, why can't you just use ssds directly?
2929[18:38:02] <lope> bcache gives you SSD performance on HDD budget
2930[18:38:36] <lope> bcache needs to live on a block device, and provides a block device
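Roughly what that looks like with bcache-tools, using hypothetical devices (/dev/sdb as the HDD, /dev/nvme0n1 as the SSD):
    make-bcache -B /dev/sdb                          # backing device; exposes /dev/bcache0
    make-bcache -C /dev/nvme0n1                      # cache device; prints a cache set UUID
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    mkfs.ext4 /dev/bcache0                           # the filesystem goes on the bcache device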
2931[18:38:50] <lope> thanks for the chat bud, I'm not using bcache btw.
2932[18:38:59] <ratrace> "ssd performance on hdd budget" is not unlike "unlimited bandwidth" overselling in hosting :) yeah it's unlimited as in "unmetered" but nobody told you you'd be sharing it with 1000 other customers :)
2933[18:39:01] *** debhelper sets mode: +l 1551
2934[18:39:31] <lope> I'd like to run bcache, but my setup is not suitable for it.
2935[18:39:44] <ratrace> lope: i asked because iirc you need to set it up in advance, you can't just add it to any fs at any moment, like one can with l2
2936[18:40:10] <koollman> lope: I haven't found bcache to be very effective in most cases. it helps, but it's quite limited
2937[18:40:11] <lope> well I'm setting up a new server so that's why I've been intensely considering the options.
2938[18:40:16] <ratrace> so, eg, i can't convert any of my existing servers to try that out, i'd have to build the fs from scratch
2939[18:40:24] <lope> Need to get cracking and get this server up asap.
2945[18:42:03] <ratrace> lope: i don't mean that, but your fs needs to be atop of the bcache device. you can't take an existing fs and then reconfigure it to go through a bcache layer, right?
2946[18:42:17] <x0n> an SSD SLOG has one big selling point besides resistance to power-loss data corruption (you do have a supercap in your SLOG like you should, right?): random writes to the pool arrive like sequential IO at the disk end
2947[18:42:50] <koollman> ratrace: iirc there are tricks. but mostly you are right, you want to do that on an empty device that will be formatted
2948[18:42:50] <x0n> so the use case is having multiple concurrent applications doing disk IO
2977[18:52:32] <ratrace> the problem for zfs is not packaging, it's kernel forbidding its apis for non-gpl software, which is happening lately ; zfs, nvidia on power arch ; i'm sure it'll continue ; personally i doubt there's future for zfs on linux
2980[18:53:48] <karlpinc> ratrace: IIRC zfs is in contrib, and installing it automatically compiles it so the license is not violated. Works, but clunky.
2991[18:57:49] <koollman> ratrace: it's not just the kernel. it's cddl/gpl incompatibility. you can distribute both as source, but can't distribute the resulting binary (unless you have a solid legal team)
3030[19:16:29] <dunatotatos> Hi there. I just updated my Debian Sid, and grub-install complains because core.img is too large to fit in the "embedding area". My hard drive is formatted in btrfs, with no separate /boot. The problem seems to be known (replaced-url); I would like to avoid re-sizing a filesystem. Is there a way to reduce the size of core.img?
3060[19:28:36] <ratrace> lope: btw, i just realized looking at this issue i linked above, they say the fpu export was backported to 4.19.38. so unless the debian maintainer repatches it, the zfs mega slowdown is coming to debian stable too with 10.1
3065[19:31:34] <ratrace> lope: or it's solved.... replaced-url
3066[19:32:35] <AlpacaFace> intel-microcode throwing my cooling out of whack. It solved bootup firmware errors, but fan speed was super buggy. Removed, now better. Dell G3
3093[19:43:07] *** Quits: pyfgcr (~pyfgcr@replaced-ip) (Remote host closed the connection)
3094[19:43:16] <MaxLanar> Hello, I've upgraded from debian 9 to debian buster, everything works great except I have no sound anymore. In pavucontrol, in 'output devices', I got 'no output devices available'. What can I do to investigate the issue / resolve it?
3095[19:44:04] *** Quits: v01d4lph4 (~v01d4lph4@replaced-ip) (Remote host closed the connection)
3116[19:55:45] <lope> ratrace, I'm using a 5.x kernel and I've got ZFS running?
3117[19:55:58] <lope> <ratrace> koollman: it's not just redistribution, it's using kernel functions ; zfs doesn't even build on 5.x kernels due to that
3120[19:57:03] *** Quits: Chas (~Chas@replaced-ip) (Remote host closed the connection)
3121[19:57:50] <MaxLanar> Besides I still couldn't manage the volume in pavucontrol or with the keyboard keys after purging the user pulse config files
3134[20:01:01] <dpkg> Some users have had <timidity> blocking access to their sound card, resulting in <pulseaudio> only seeing a dummy output. Check if timidity is running with 'systemctl status timidity' and stop/disable with 'systemctl stop timidity ; systemctl disable timidity' and/or remove the timidity-daemon package.
3135[20:01:09] <jmcnaught> MaxLanar: check for this ^^
3151[20:04:42] <lope> drove the people on #zfsonlinux a bit nuts. But (probably inspired by you) I've realized that my HDDs and SSDs are more than big enough to support me setting up something sensible and reliable to get main customers running with, while also leaving myself the opportunity to experiment with bcache etc for my own use
3171[20:11:10] <lope> My thinking with bcache, because issues have occurred during kernel updates, is I want to try running bcache on a pair of VMs and export the bcache via NBD to the host, then run a mirror zpool on those NBDs
3173[20:11:30] <lope> Those VMs will have no internet access so won't ever really need kernel upgrades.
3174[20:12:00] <lope> But if I do upgrade their kernels, I'll be sure to disable bcache first, and upgrade one at a time and let it run for a few weeks before upgrading the other.
3175[20:12:02] *** Cricket2 is now known as GeminiCricket
3176[20:12:17] <trek00> lope: won't it have performance degradation due to massive context switching?
3178[20:12:23] <lope> raid doesn't protect from much aside from total drive failure
3179[20:12:30] <lope> you can have silent data corruption in raid
3180[20:12:38] <lope> ZFS is better than raid in many ways
3181[20:13:53] <lope> trek00, I don't think there would be massive performance degradation, virtualization is pretty good these days. I figure even factoring in the performance losses of virtual machines, it'll still be much faster than ZFS.
3321[21:06:09] <yokowka> greycat, this is what shows after logging out, from the command id: uid=1000(denis) gid=1000(denis) группы=1000(denis),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),113(bluetooth),114(lpadmin),119(scanner)
3341[21:10:51] <domovoy> i'm setting up an sso solution with sssd and kerberos for auth; for the id part ldap seems to be the de-facto choice, but i find it quite annoying to work with, i'd prefer postgresql. I saw there is libnss-pgsql2 i may be able to use as a proxy with sssd, but it seems dead (last version is from 2008), any better alternative?
3352[21:17:19] <yokowka> greycat, run it in console?
3353[21:17:32] <greycat> any shell will do
3354[21:17:36] <lope> ratrace, concerning?
3355[21:17:44] <ratrace> lope: your 5.x kernel
3356[21:17:51] <lope> debian buster
3357[21:18:03] <ratrace> lope: zfs fixed that issue btw, they simply removed simd support so the modules could be built again but... at severe expense of performance ; what i meant was it broke zfs so it couldn't be built but that was worked around
3358[21:18:10] <ratrace> lope: but custom kernel?
3359[21:18:23] <yokowka> greycat, sudo:x:27:denis
3360[21:18:54] <greycat> That looks correct. So, if you log out and back in, it should work.
3370[21:20:30] <ratrace> lope: so that's some frankendebian ; anyway, you should check if that kernel is patched for the simd fixes that zfs introduced just recently
3371[21:20:44] <greycat> I'm wondering if GNOME is doing something even *more* stupid now, because that bar of stupidity wasn't already raised high enough. Like, maybe it's reusing terminal sessions across logins and therefore no gnome-terminal ever picks up new privileges. Just a guess.
3407[21:28:17] <lope> well, what they actually advised was just putting it on a partition
3408[21:28:19] <karlpinc> lope: It is often easiest to use lvm on the whole disk. Then the physical device can be LUKS encrypted, and you can have partitions within that used for various purposes.
3409[21:28:27] *** Quits: trysten (~user@replaced-ip) (Remote host closed the connection)
3410[21:28:31] <lope> but raid 1 is better cos then at least the system can keep running if a disk fails outright
3411[21:28:54] *** Quits: timahvo1 (~rogue@replaced-ip) (Remote host closed the connection)
3412[21:29:04] <lope> karlpinc, are you saying luks the physical disk or luks the LV?
3413[21:29:17] <karlpinc> lope: However, the latest "LUKS password prompter" (askpass?) will retry the password you entered on the next LUKS device, so that if you give them all the same password then you only have to enter the password once.
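A minimal sketch of that layering (LUKS on the raw partition, LVM inside), with hypothetical device and volume names:
    cryptsetup luksFormat /dev/sdb2
    cryptsetup open /dev/sdb2 crypt_sdb2
    pvcreate /dev/mapper/crypt_sdb2
    vgcreate vg0 /dev/mapper/crypt_sdb2
    lvcreate -L 20G -n data vg0          # carve out LVs for whatever purposes you need
Give each LUKS device the same passphrase and the prompt at boot only has to be answered once.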
3427[21:32:33] <karlpinc> lope: You could. But that's complicated. You're going to have to unlock the LUKS that's protecting your data anyway, so may as well use the typical idiom. "Standard is better than better."
3428[21:32:49] <lope> it's not complicated
3429[21:33:01] *** Quits: JonathanD (~JonathanD@replaced-ip) (Remote host closed the connection)
3430[21:33:03] <lope> it's a handful of commands in a bash script
3431[21:33:13] <karlpinc> lope: Not especially, but something else for somebody to figure out.
3432[21:33:30] <karlpinc> Just saying.
3433[21:33:41] <lope> what makes you think I need somebody else to figure something out on my server?
3434[21:33:59] <lope> anything anyone does on linux including your last solution is complicated
3437[21:34:07] <karlpinc> lope: Hey, if it's only ever just you then whatever makes sense to you.
3438[21:34:47] <lope> swap is disposable, if hypothetically someone inherited the server from me they can just turn off swap and set up swap however they like.
3441[21:35:32] <greycat> If it's a server, encrypting swap is irrelevant. Encrypting swap is only a thing for laptops, or other cases where you're afraid someone is going to steal the hard drive or the entire machine.
3442[21:35:53] <lope> greycat, while I tend to agree, not entirely
3443[21:36:13] <lope> because I may have some encrypted volumes on the machine that I'd mount via SSH
3444[21:36:28] <lope> and I'd expect that when they're unmounted or the server is rebooted, they'd be secure
3445[21:36:29] <greycat> which has zero to do with swap
3446[21:36:39] <lope> if there is swap on the disk holding the key, that's not so good.
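One common way around that is swap keyed from /dev/urandom at every boot, so nothing swapped out survives a reboot; a sketch assuming /dev/sda3 is the swap partition:
    # /etc/crypttab
    cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=512
    # /etc/fstab
    /dev/mapper/cryptswap  none  swap  sw  0  0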
3447[21:36:41] <karlpinc> greycat: You could be protecting against some rogue datacenter employee that only ever has access to powered down equipment.... (Make up your own threat model here. :)
3448[21:37:06] <greycat> I... suppose.
3449[21:37:15] <yokowka> how to change debian 9 stretch on debian 10 buster, is any problem with that exchange??
3450[21:37:17] <greycat> but if that's the threat model, there's MUCH more important stuff to worry about
3451[21:37:25] <karlpinc> greycat: Offsite backups of disk snapshots.... :-/
3452[21:37:54] <karlpinc> yokowka: You follow the instructions in the buster release notes.
3453[21:38:06] <karlpinc> !release notes
3454[21:38:06] <dpkg> The release notes for Debian 10 "Buster" are at replaced-url
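The release notes are the authoritative procedure; roughly, the upgrade amounts to:
    apt update && apt upgrade                          # start from a fully patched stretch
    sed -i 's/stretch/buster/g' /etc/apt/sources.list  # and any files in sources.list.d
    apt update
    apt upgrade                                        # minimal upgrade first, per the notes
    apt full-upgrade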
3466[21:45:42] <lope> greycat, karlpinc: I once had a paranoid customer who asked me to encrypt his data on-disk for the VMs I hosted for him. It added a hassle factor but I did it for years. Then recently I did a server upgrade and it was such a hassle to deal with his encrypted storage that I said to him "hang on a second, you've got all these developers running windows with your data on their laptops, in coffee shops, in houses, left in cars etc. I'm hosting your
3467[21:45:43] <lope> stuff inside a data center with 24/7 security, access control, locked cages, yada yada, Sharks with laser beams on their heads, and my host has strict data security policies. But you want ME to encrypt your data, and you don't insist your staff do???? can I stop encrypting it please, it's a hassle. He thought about it then said ok fine. hahaha
3468[21:46:29] <tds> that sounds like the point where you get the developers to encrypt their disks, not disable it on the server ;)
3469[21:47:02] <ratrace> disk encryption in servers is a good thing ; despite sharks with lasers, disks replaced out of your chassis end up somewhere and either you trust they will clean them out before tossing to the recycling bin, or you simply FDE
3470[21:47:03] <lope> well he agreed... less work for me.
3482[21:49:12] <ratrace> lope: anyone who's banking on the assumption that you're reusing keys or passwords for your other, online servers :)
3483[21:49:30] <ratrace> and who has access to the drive tossed out of your chassis
3484[21:49:32] <lope> everybody's busy and needs to get on with their work. The drive would 99.999% likely just get wiped (if possible) and reused, or recycled. Who's actually going to go snooping on some random disk from a server?
3485[21:49:54] <humpled> people
3486[21:49:58] <ratrace> i would ; i did :)
3487[21:50:38] <lope> well, encryption hurts performance, adds complexity, and means stuff can't be automated as easily.
3497[21:51:47] <ratrace> lope: no it doesn't :) with aesni in modern cpus, the encryption throughput is greater than sata ; and it's easy to automate it, we have FDE on all the servers, and the system to unlock them in place over ssh
3498[21:51:51] <lope> my dedi is not going to get stolen.
3499[21:52:12] <trysten> FDE = fixed disk encryption? oh, _full_ disk encryption
3506[21:53:00] <lope> so only your data and VM's would be encrypted
3507[21:53:01] *** Quits: ce_hyperosaurus (~androirc@replaced-ip) (Remote host closed the connection)
3508[21:53:06] <ratrace> lope: it's not though, i just said we had unlocking mechanism in place
3509[21:53:21] <tds> lope: one critical thing here is that you're assuming your provider is competent enough to thoroughly wipe/destroy drives, and not just resell them to the next customer
3510[21:53:21] <lope> you can't boot a server unattended if it's encrypted
3511[21:53:23] <ratrace> lope: initramfs script, with dropbear ssh, waits for keys to be given via ssh
3512[21:53:37] <ratrace> lope: automation does it, jesus, are you even reading what i write :)
3521[21:54:33] <lope> okay, someone could backdoor your initramfs of course
3522[21:54:36] <tds> just doing disk encryption and throwing away the key when the server isn't used anymore may also make your life easier than demanding certificates of destruction or whatever ;)
3523[21:54:44] <lope> so it's not secure from a physical attack
3524[21:55:01] <lope> but at least if your drives are removed, the data on them is worthless
3525[21:55:02] <ratrace> lope: of course, but that's not the threat we're protecting against ; we assume FDE is good only for data at rest; when the drives leave the chassis
3528[21:55:05] *** Lord_of_Life_ is now known as Lord_of_Life
3529[21:55:41] <ratrace> again, we don't trust the dc will wipe the drives, esp. if faulty, before tossing to the recycling bin, that's the threat model we use FDE for
3536[21:56:56] <lope> I've already got debian running raid 1 on 2 unencrypted SSDs and I've got 2x HDDs in the server I could juggle things around on while setting it up.
3537[21:57:10] <lope> Not much is installed at this point so far, basically OS and proxmox.
3538[21:57:52] <ratrace> lope: basically you install it as if you'd unlock it locally, add dropbear and configure its networking, add ssh keys for initramfs, there's ton of articles on google
3540[21:58:17] <ratrace> the only exception we do is a custom initramfs hook, we don't use the default unlocking script
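On buster the stock pieces for that are roughly as follows (addresses and the key file are placeholders):
    apt install dropbear-initramfs cryptsetup-initramfs
    cat admin_key.pub >> /etc/dropbear-initramfs/authorized_keys   # your admin's public key
    # static initramfs networking via the kernel cmdline in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX="ip=192.0.2.10::192.0.2.1:255.255.255.0::eth0:off"
    update-initramfs -u && update-grub
Then at boot you ssh to the initramfs and run cryptroot-unlock to enter the passphrase.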
3541[21:58:28] <trysten> When I boot debian, it takes about 10 seconds to get over the fact that one of the harddrives is erroring. How can I make the kernel ignore those errors and go ahead and boot
3542[21:59:41] <lope> ratrace, I've only ever done FDE installs with the debian or ubuntu installer; other than that I've manually LUKS'd drives, but I've got no idea how to manually luks a drive and make it boot. I've experienced installers not working properly and LUKS'd OSes being unbootable; even though I could mount the data, I didn't know how to make grub and the initramfs etc work.
3543[21:59:41] *** Quits: tagomago (~tagomago@replaced-ip) (Remote host closed the connection)
3545[22:00:02] <lope> ratrace, I was not able to use the installer to install debian on my dedi. I had to debootstrap it from the host's recovery console.
3546[22:00:16] <ratrace> lope: it's autodetected, and you hint at what needs unlocking on boot via /etc/crypttab
3550[22:01:53] <lope> ratrace, glad to speak to you. When I took to IRC last time I battled with it, nobody online actually knew how LUKS booting works. how is /etc/crypttab even readable before the disk is decrypted? or is that hint used by the update-initramfs -u and update-grub stuff?
3551[22:02:34] <ratrace> it's used by initramfs-tools yes
3552[22:03:05] <ratrace> lope: look into crypt* stuff in /usr/share/initramfs-tools/hooks/
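In other words the hint in /etc/crypttab gets baked into the initrd when the initramfs is rebuilt; a sketch with a placeholder UUID:
    # /etc/crypttab:  <name>   <device>   <keyfile>   <options>
    cryptroot  UUID=<uuid-of-luks-partition>  none  luks
    update-initramfs -u      # re-generates /boot/initrd.img-* with the cryptsetup hooks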
3553[22:03:11] <lope> ratrace, what would be the basic procedure to setup a dropbear system like you described with debootstrap?
3555[22:03:38] <ratrace> there's initramfs-tools manpage to get you started, but looking at specific hooks and scripts is the only way to learn what it does; that's how i learned it and wrote custom hooks
3556[22:03:52] <ratrace> lope: you want an ansible playbook for that? :)
3557[22:04:18] <ratrace> zfs root over luks, remote unlocking via ssh
3558[22:04:31] <karlpinc> lope: The above has the instructions for remote unlocking via ssh. (But not anything about debootstrap.)
3559[22:04:46] <lope> ratrace, I'm not familiar with ansible
3571[22:07:35] <karlpinc> I've got an automated unlocking via ssh worked out. It's something of a kludge, because the initial idea was that somebody would get an email when the system booted and unlock the crypted disks. So now, a program recognizes the email, ssh's in with a key, which runs a script via authorized_keys. Ta-da.
3574[22:08:19] <karlpinc> ("Somebody" never unlocked the crypted fs, so something had to be done... :)
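A forced command in authorized_keys is roughly what does the "runs a script" part; a sketch with a hypothetical script name and truncated key:
    command="/usr/local/sbin/unlock-disks",no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA...rest-of-key unlock-bot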
3575[22:08:31] <ratrace> sounds too convoluted
3576[22:08:44] <ratrace> why not fetch the key via https from a central key server?
3577[22:08:48] <lope> ratrace, so are you saying you have an ansible script that can be let loose on a server, and turns it into one of your dropbear bitches automagically?
3585[22:10:46] <lope> ratrace, why do you use ansible? what does it do that you can't do yourself?
3586[22:10:54] <ratrace> karlpinc: being that the threat model is when the disk leaves the chassis, and some simple iptables to ensure only your ips can fetch the keys, sure; why not
3587[22:11:02] *** Quits: cryptodan (~cryptodan@replaced-ip) (Remote host closed the connection)
3588[22:11:10] <karlpinc> ratrace: Ok.
3589[22:11:19] <ratrace> lope: it's a relic from when we used ansible exclusively, before saltstack
3590[22:11:51] <tds> ratrace: out of interest, are you planning to use native zfs encryption when 0.8 is a thing in debian, or sticking with zfs on luks?
3591[22:12:00] <lope> okay, then I'll ask the same question about saltstack?
3592[22:12:11] *** Quits: gjuric (~textual@replaced-ip) (Quit: My MacBook has gone to sleep. ZZZzzz…)
3595[22:13:08] <lope> ratrace, do you pay for saltstack?
3596[22:13:10] <ratrace> tds: luks ; we actually don't use zfs that much, only on a few backup of backup servers
3597[22:13:19] <karlpinc> ratrace: (Although IPs are not known to be especially secret. :) Obviously, there's some infrastructure you trust.
3598[22:13:47] <ratrace> lope: we have a fleet of servers, it'd be very, very, very tedious if i or my team had to manually repeat all the commands on all the servers for all the configuration and updating -- hence salt
3604[22:14:59] <ratrace> i work for a web shop that does some mobile apps and customer websites, there are hundreds of sites and any change needs propagation to three dns servers, that's where the event reactor comes very handy
3607[22:15:26] <tds> you can equally just build all this with some bash scripts, there's just plenty of tooling around to make your life easier (like salt/ansible/whatever :)
3608[22:15:32] <lope> ratrace, I'm guessing your ssh key server verifies the incoming IP before providing the key?
3611[22:16:07] <lope> tds, yeah, I've tested out a node.js project that lets you SSH into multiple servers and type commands into all of them at once. Just a rough proof of concept
3612[22:16:14] <ratrace> tds: that's true, but we also want to be within some standards so when new blood comes pouring in, we could hire folks with existing knowledge ; NIH-ing stuff is the worst thing you can do to lock yourself in trouble
3613[22:16:19] <lope> But obviously wouldn't compare to a well developed solution.
3614[22:16:34] <jmcnaught> node.js driving multiple ssh connections sounds like an abomination
3615[22:16:36] <lope> NIH?
3616[22:16:39] <karlpinc> ratrace: I've used a hidden dns master to push dns changes to multiple authoritative public slaves, but I like the idea of doing it all with a single tool.
3617[22:16:53] <lope> jmcnaught, that's kind of how I felt using it
3618[22:16:57] <ratrace> lope: yes of course, in fact, we have an in-house app that whitelists servers that may reboot
3619[22:17:06] <lope> excited and scared at the same time, then deleted it.
3620[22:17:12] <ratrace> so yes, if one goes down unexpectedly, one of us has to investigate what happened and then whitelist it for reboot
3621[22:17:15] <karlpinc> lope: Not Invented Here
3622[22:17:36] <tds> ratrace: as someone at a place that has a lot of similar stuff automated in mostly perl and bash, I absolutely agree :)
3629[22:19:51] <lope> ratrace, does it make sense for me to set up a dropbear install on a VM, then just dd it over my existing server's partition, mount it, then update grub etc?
3630[22:20:00] <lope> Might be easier than trying to do it on my dedi
3639[22:21:36] <tds> if you just want the convenience of working from a vm, another lazy option is to run qemu from your recovery environment and do the install remotely that way
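i.e. something along these lines from the rescue system, writing the installer straight onto the real disk (ISO name and sizes are illustrative):
    qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=/dev/sda,format=raw \
      -cdrom debian-10.0.0-amd64-netinst.iso -boot d \
      -vnc :0                                 # then point a VNC client at the rescue host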
3640[22:21:42] <lope> I just thought you might be one of those crazy people running / on zfs on the metal
3641[22:21:49] <lope> I looked into it and it was scary as fuck.
3666[22:25:46] <ratrace> lope: with automation it's also easy to just rebuild the server root, we always make sure to separate base OS from application data, even on "regular" ext4 servers where there's no snapshots
3667[22:25:53] <tds> lope: just running your servers with proper oob management also makes life a lot easier
3669[22:26:14] <tds> I can sit at home and boot a box from an nfs root, rebuild the entire install, and reboot it back, without having to get out of bed :)
3670[22:26:15] <lope> tds, what do you mean by oob mgmt?
3671[22:26:31] <tds> remote serial console, ipmi, kvmoip, that kind of thing
3672[22:26:40] <karlpinc> lope: That's work you do before you can do any work. :)
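e.g. with ipmitool, assuming the BMC is reachable and the hostname/credentials are placeholders:
    ipmitool -I lanplus -H bmc.example.net -U admin -P secret sol activate        # serial-over-LAN console
    ipmitool -I lanplus -H bmc.example.net -U admin -P secret chassis power cycle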
3693[22:30:33] <tds> ratrace: persuading that kind of thing to work on linux in the first place always seems to be a pain
3694[22:30:41] <tds> and really not what you want when a box just died :)
3695[22:31:33] <ratrace> definitely
3696[22:32:21] <lope> ratrace, I guess your dropbear systems must have GPT partition tables?
3697[22:32:30] <lope> then do you run grub-pc?
3698[22:32:45] <lope> and then do you have grub installed unencrypted in a small grub-partition?
3699[22:33:09] <lope> you know, that 2MiB grub_boot GPT partition?
3700[22:33:24] <ratrace> lope: yes, only root, swap and data partitions are encrypted; on zfs and btrfs servers only root and swap; /boot and bios_grub are not
3701[22:33:59] * karlpinc swears he's doing dropbear in his initramfs on a ext4 system
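The unencrypted bits of such a layout can be created roughly like this with sgdisk (device and sizes are illustrative):
    sgdisk -n 1:0:+2M   -t 1:EF02 /dev/sda   # bios_grub partition; grub-install embeds core.img here
    sgdisk -n 2:0:+512M -t 2:8300 /dev/sda   # unencrypted /boot
    grub-install /dev/sda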
3715[22:37:00] <ratrace> otherwise there's a ton of ways someone could attack us, no matter what uuids or protections we did; they could subvert hardware in ways unknown to us
3719[22:38:44] <lope> okay. Interestingly, my old server, when I ordered it came with a static IP and debian 6 preinstalled haha. The new one just boots a rescue system with DHCP
3720[22:38:56] <lope> But I set static once I debootstrapped.
3725[22:39:57] <lope> I thought I had a working server to start on, but noooooo ratrace, you've inspired me to level up and get on the dropbear bandwagon
3740[22:42:51] <lope> It's also not a particularly productive job. I spend a lot of time figuring out new things, testing things, setting things up, reverting things and not much time actually getting things done.
3741[22:43:09] <lope> Vs programming, where you know the language and frameworks etc, and are just productive
3744[22:44:18] <ratrace> ironically, "devops guys" don't really go this low level ; they're mostly only consumers of public clouds or have us sysadmins set them up with private clouds, so they can deploy their gitlabs, jenkinses, CIs, gits and k8s and dockers and ...
3745[22:44:51] *** Quits: n-iCe (~androirc@replaced-ip) (Remote host closed the connection)
3748[22:46:04] <ratrace> if they do it themselves, then they do perversions like docker inside k8s inside snaps inside lxc inside snaps inside azure -- kid you not, saw someone here on irc mention doing exactly that the other day; unless they were lying through their teeth
3814[23:21:42] <ratrace> lope: well, dunno about azure but on ubuntu k8s is a snap, so if you install it in lxd, which is a snap too, you have three, four levels of containerception right there