From Gilbert.Detillieux at umanitoba.ca  Wed Feb 1 11:22:30 2023
From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux)
Date: Wed, 1 Feb 2023 11:22:30 -0600
Subject: [RndTbl] MUUG Online Meeting, Tuesday, Feb 7, 7:30pm (Date Change) -- CheckMK
Message-ID: <69ae1a38-c29d-4455-b8ab-b6d2a6b56106@umanitoba.ca>

The Manitoba UNIX User Group (MUUG) will be holding its next monthly meeting online, on Tuesday, February 7th, at 7:30pm. Yes, that's a week early, i.e. the FIRST Tuesday of the month...

CheckMK

Alberto Abrao will present CheckMK, a great platform for monitoring various IT infrastructure components. It has powerful tools to monitor all the different kinds of devices that comprise a regular enterprise IT environment. Agents are available for Linux, *nix (AIX, Solaris), *BSD, Windows, VMware, AWS, Azure, Kubernetes, Docker, among others. These can be easily enhanced with plug-ins for custom functionality. It also allows for the monitoring of devices that support SNMP. Easy to get started with, but packed with features and infinitely customizable, CheckMK is an excellent choice for monitoring any IT environment.

*Date Change*

Please note the change in meeting date for this month, and for the rest of the current year (at least until the July/August break). We are now meeting on the first Tuesday of each month.

This will once again be an online meeting. Stay tuned to our muug.ca home page for the official URL, which will be made available about a half hour before the meeting starts. (Reload the page if you don't see the link, or if there are issues with connecting.)

The group now holds its meetings at 7:30pm on the *first* Tuesday of every month from September to June. (There are no meetings in July and August.) Meetings are open to the general public; you don't have to be a MUUG member to attend. For more information about MUUG, and its monthly meetings, check out their web server: https://muug.ca/

Help us promote this month's meeting by putting this poster up on your workplace bulletin board or other suitable public message board, or linking to it on social media: https://muug.ca/meetings/MUUGmeeting.pdf

-- 
Gilbert E. Detillieux    E-mail: 
Manitoba UNIX User Group    Web: http://muug.ca/

From trevor at tecnopolis.ca  Fri Feb 3 21:36:58 2023
From: trevor at tecnopolis.ca (Trevor Cordes)
Date: Fri, 3 Feb 2023 21:36:58 -0600
Subject: [RndTbl] mysql update delays when no rows match, when backup running
Message-ID: 

Question:
Mysql (MariaDB actually, fairly recent version) issue. Innodb.

Full db backup (table by table) runs each night at 5am.

There's a massive (few GB) table that takes a couple of minutes to back up.

When this table is getting backed up, updates to the table pause until the backup is done. Selects don't seem to pause(?). However, even updates that will match zero rows seem to pause! Shouldn't the engine be doing a select (within a transaction internally) and then quitting the query? It seems like it's wanting to get some sort of lock before doing anything for the update, even the select step.

Maybe this makes sense? I suppose if I was doing the locking with transactions in my code (I'm not), I would do a select FOR UPDATE and then update if I had to? Would the "select for update" also pause on the select step?

I was thinking of changing my db library to make all update calls a select-for-update and then the update only if needed, but if my hunch is correct, it won't fix anything, and will slow things down a touch because I'm doing the internal work myself?
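Roughly the pattern I have in mind, sketched with made-up database, table and column names (the real code would only issue the UPDATE when the SELECT actually returns rows):

mysql --batch mydb <<'EOF'
START TRANSACTION;
-- lock only the rows we might touch (mydb, big_table, status are made-up names)
SELECT id FROM big_table WHERE status = 'stale' FOR UPDATE;
-- ...application logic would decide here whether an UPDATE is needed at all...
UPDATE big_table SET status = 'fresh' WHERE status = 'stale';
COMMIT;
EOF

The idea being that when the SELECT matches nothing, no UPDATE gets issued at all.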
And I can't blanket replace all updates with select/update without a "for update" because there could be race conditions between the select/update? Maybe the correct approach is on an instance-by-instance basis where I know I don't care at all about races (like this case) I could replace the update with select (no "for update") plus an update. If I make that an option in my db library, and use it liberally, will I be slowing down (much) the common case of updates running outside of backup time, because then both I and the engine are doing a select? There's no way I'm going to change it to "if backup running, do select first, if not do update by itself" :-) At least, I hope I'm not going to do that!! It's really only 1 table of mine that is multi-GB and has a long backup time, otherwise this would be a non-issue. I was kind of hoping inno took care of all this stuff for me... From scott at 100percenthelpdesk.com Sat Feb 4 07:22:07 2023 From: scott at 100percenthelpdesk.com (Scott Toderash) Date: Sat, 04 Feb 2023 07:22:07 -0600 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: References: Message-ID: <8e602ffda2547d36bc286ddadd52582c@100percenthelpdesk.com> Assuming you use mysqldump. It does lock tables. If it didn't then data could change while the backup of that table is happening, producing unpredictable results. You could set up MySQL replication to another machine and run your backup on that one instead. That scenario would avoid the delay you are seeing but seems like a lot of trouble for this unless you need constantly consistent performance. On 2023-02-03 21:36, Trevor Cordes wrote: > Question: > Mysql (MariaDB actually, fairly recent version) issue. Innodb. > > Full db backup (table by table) runs each night 5am. > > There's a massive (few GB) table that takes a couple of mins to backup. > > When this table is getting backed-up, updates to the table pause until > the > backup is done. Selects don't seem to pause(?). However, even updates > that will match zero rows seem to pause! Shouldn't the engine be doing > a > select (within a transaction internally) and then quitting the query? > It > seems like it's wanting to get some sort of lock before doing anything > for > the update, even the select step. > > Maybe this makes sense? I suppose if I was doing the locking with > transactions in my code (I'm not), I would do a select FOR UPDATE and > then > update if I had to? Would the "select for update" also pause on the > select step? > > I was thinking change my db library to make all update calls a > select-for-update then the update only if needed, but if my hunch is > correct, it won't fix anything, and slow things down a touch because > I'm > doing the internal work myself? > > And I can't blanket replace all updates with select/update without a > "for > update" because there could be race conditions between the > select/update? > > Maybe the correct approach is on an instance-by-instance basis where I > know I don't care at all about races (like this case) I could replace > the > update with select (no "for update") plus an update. > > If I make that an option in my db library, and use it liberally, will I > be > slowing down (much) the common case of updates running outside of > backup time, because then both I and the engine are doing a select? > There's no way I'm going to change it to "if backup running, do select > first, if not do update by itself" :-) At least, I hope I'm not > going > to do that!! 
> > It's really only 1 table of mine that is multi-GB and has a long backup > time, otherwise this would be a non-issue. I was kind of hoping inno > took > care of all this stuff for me... > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From cj.audet at gmail.com Sat Feb 4 11:36:05 2023 From: cj.audet at gmail.com (Chris Audet) Date: Sat, 4 Feb 2023 11:36:05 -0600 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> Message-ID: @Scott Haven't worked much with Win 2000, thankfully. Tried to reproduce your problem in my lab by installing Win 2000 Professional on ESXI, adding a bunch of virtual HDDs using FAT or NTFS with basic or dynamic disks (trying to see if some combination caused it to fail), downloading the VMDK files, converting to qcow2, and importing into Proxmox. However wasn't able to make it past the converting VMDK to qcow2 step. I must've messed up somewhere because after importing the disks into Proxmox I just get a Windows "boot drive inaccessible" error. Sadly I didn't take many notes while experimenting with all this, but if you end up finding a solution I'd be very curious to learn it. The only part that stuck out to me while doing this is that when initializing new disks on Win 2000 it seemed to default to dynamic disks, and was trying to build a software raid by default. I'm not sure if this would be a factor with your disk conversion - but I can see how if the disks are configured as a raid but "qemu-img convert" is handling the disks one at a time it could have strange results. bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk Chris_Win2000.qcow2 (100.00/100%) bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk Chris_Win2000_1.qcow2 (100.00/100%) bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk Chris_Win2000_2.qcow2 (100.00/100%) bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk Chris_Win2000_3.qcow2 (100.00/100%) root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm https://ostechnix.com/import-qcow2-into-proxmox/ https://docs.openstack.org/image-guide/convert-images.html On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash < scott at 100percenthelpdesk.com> wrote: > If the subject line doesn't scare you then maybe you can help me out > with this one. Windows 2000 is its own punishment at this point, but > it's part of the mission in this case. > > I have a working Windows 2000 server running on VMWare esxi. I > downloaded the image as a vmdk file and attempted to import it into my > Linux system as a KVM image. It mostly works. > > I did this successfully with 2 other machines that were Windows 2012R2 > from the same source and same destination. Those work. > > W2k is having issues because it has C: E: and F: but only C: is > recognized when I boot it up in the new environment. It shows that E: > exists but it believes that it is corrupted. > > There are probably some parameters that I don't know about that I need > to pass in order to make this work. 
> > The general process was download the vmdk image, use "qemu-img convert" > to make a raw file and then try to boot that. I tried using virt-install > and I tried some other manual config methods but this is as far as I > have been able to get. > > Does anyone here have experience with this sort of scenario? > > > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsankar at foretell.ca Sat Feb 4 12:40:29 2023 From: vsankar at foretell.ca (vsankar at foretell.ca) Date: Sat, 4 Feb 2023 12:40:29 -0600 Subject: [RndTbl] How can one execute a script just before shutdown on Ubuntu Message-ID: <752A4F7A-462B-450C-87FD-2ADD85CE99E8@foretell.ca> Sorry to bother you all with this question ? can you please point me to a helpful document or give me some instructions on how to execute a script just before shutting down an Ubuntu system (Linux vijay-iMac 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux)? I created a simple script -rwxr-xr-x 1 root root 51 Jan 30 11:46 /etc/init.d/before-shutdown Then did a link so that I had lrwxrwxrwx 1 root root 25 Jan 30 11:47 K04before-shutdown -> ../init.d/before-shutdown Nothing happened!! So I wasted a lot of time reading up on systemd etc., and set up unit files to execute before shutdown.target, network.target, power off.target, as well as halt.target. Since these did not work either and life is too short, I went back to what I thought was the old way of doing things and did a lrwxrwxrwx 1 root root 25 Feb 4 11:58 /etc/rc6.d/K99before-shutdown -> ../init.d/before-shutdown This was a complete fail as well. Would really appreciate any suggestions on how to make this work. Thanks very much, Vijay Vijay Sankar ForeTell Technologies Limited vsankar at foretell.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From athompso at athompso.net Sat Feb 4 13:11:02 2023 From: athompso at athompso.net (Adam Thompson) Date: Sat, 4 Feb 2023 19:11:02 +0000 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: References: Message-ID: Unfortunately, you're describing normal, documented, InnoDB behaviour that's at least partially "for historical reasons". Backups take a lock on tables by default, and because the READ LOCAL lock mysqldump uses is a MyISAM-only thing, the lock gets promoted to a write lock automatically in InnoDB. MySQL (et al.) UPDATE statements take a write lock at the *beginning* of query execution - NOT at the moment they want to write something. Ditto for DELETEs, AFAIK. Normally not a problem for transactional behaviour, but inconvenient here. There's one glimmer of hope, which I'll get to in a sec... mysqldump(1) documents that --opt includes --lock-tables, and also that --opt is turned on by default, so locking is the default. You can use --skip-lock-tables, at the risk of potentially getting an even more inconsistent (non-isolated) view of the data during backups. Note that you are NOT guaranteed full transaction isolation / consistency during a mysqldump anyway, as --lock-tables says: >> [...] The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB, --single-transaction is a much better option than --lock-tables because it does not need to lock the tables at all. 
>> Because --lock-tables locks tables for each database separately, this option does not guarantee that the tables in the dump file are logically consistent between databases. Tables in different databases may be dumped in completely different states. I would try --single-transaction and test; I don't have any convenient way of testing it right now. If you absolutely need 100% *perfect* self-consistent backups while the underlying tables are still being written to, you need a different RDBMS. For example, PostgreSQL does transaction-isolated backups correctly, out of the box. I think --single-transaction should get you most of the way there without switching products. -Adam -----Original Message----- From: Roundtable On Behalf Of Trevor Cordes Sent: Friday, February 3, 2023 9:37 PM To: MUUG RndTbl Subject: [RndTbl] mysql update delays when no rows match, when backup running Question: Mysql (MariaDB actually, fairly recent version) issue. Innodb. Full db backup (table by table) runs each night 5am. There's a massive (few GB) table that takes a couple of mins to backup. When this table is getting backed-up, updates to the table pause until the backup is done. Selects don't seem to pause(?). However, even updates that will match zero rows seem to pause! Shouldn't the engine be doing a select (within a transaction internally) and then quitting the query? It seems like it's wanting to get some sort of lock before doing anything for the update, even the select step. Maybe this makes sense? I suppose if I was doing the locking with transactions in my code (I'm not), I would do a select FOR UPDATE and then update if I had to? Would the "select for update" also pause on the select step? I was thinking change my db library to make all update calls a select-for-update then the update only if needed, but if my hunch is correct, it won't fix anything, and slow things down a touch because I'm doing the internal work myself? And I can't blanket replace all updates with select/update without a "for update" because there could be race conditions between the select/update? Maybe the correct approach is on an instance-by-instance basis where I know I don't care at all about races (like this case) I could replace the update with select (no "for update") plus an update. If I make that an option in my db library, and use it liberally, will I be slowing down (much) the common case of updates running outside of backup time, because then both I and the engine are doing a select? There's no way I'm going to change it to "if backup running, do select first, if not do update by itself" :-) At least, I hope I'm not going to do that!! It's really only 1 table of mine that is multi-GB and has a long backup time, otherwise this would be a non-issue. I was kind of hoping inno took care of all this stuff for me... _______________________________________________ Roundtable mailing list Roundtable at muug.ca https://muug.ca/mailman/listinfo/roundtable From athompso at athompso.net Sat Feb 4 13:31:54 2023 From: athompso at athompso.net (Adam Thompson) Date: Sat, 4 Feb 2023 19:31:54 +0000 Subject: [RndTbl] How can one execute a script just before shutdown on Ubuntu In-Reply-To: <752A4F7A-462B-450C-87FD-2ADD85CE99E8@foretell.ca> References: <752A4F7A-462B-450C-87FD-2ADD85CE99E8@foretell.ca> Message-ID: You pretty much have to do it through a SystemD unit nowadays. I?ve found the backwards-compatibility bits to be mildly unreliable, to put it, er, mildly? 
Repeating stuff you probably already know, but saying it out loud for everyone?s benefit. You need two pieces ? a script, and a systemd unit file. The unit file goes in /etc/systemd/system/ and looks like any other unit, pointing to a script that from systemd?s POV is just another executable. For example: /etc/systemd/system/last-gasp.service [Unit] Description=script that does stuff DefaultDependencies=no Before=shutdown.target [Service] Type=oneshot ExecStart=/some/where/lastgasp.sh TimeoutStartSec=0 [Install] WantedBy=shutdown.target (Unit file syntax reference docs: systemd.service (www.freedesktop.org) and systemd.unit (www.freedesktop.org)) Make sure the script is executable or this will fail (more or less) silently. Run ?systemctl daemon-reload? to make it notice your new unit file. Run ?systemctl enable last-gasp? to ask systemd to, y?know, actually *do* something with it at shutdown. Test with ?reboot?. If the script takes long enough to run (hint: add ?sleep 30? to make it run long enough!) you?ll see systemd print something on screen about waiting for the task to complete. I think the /etc/systemd/system/ path is correct on pretty much any Linux, but the proof is in the pudding ? if the ?enable? step works, then the unit file is in the correct location, or at least a correct-enough location. Note that systemd may have unmounted the filesystem where your script is located before it tries to run the script; put the script somewhere on the root filesystem to work around this, if needed. In theory you can add ?RequiresMountsFor=/some/where? to the [Unit] section of the unit to ensure the filesystem doesn?t get unmounted until the script exits ? never tried it. Also, even if the filesystem is still mounted, it may be mounted read-only at this point, so you may not be able to write anything to disk. This appears to be undefined, I can?t find anything that documents whether RequiresMountsFor leaves you with read-write or read-only mounts. IN THEORY you can also just drop your script into /usr/lib/systemd/system-shutdown/ and magic will happen, but? dunno, I?ve never tried that. Let us know if it actually works? (See systemd-poweroff.service (www.freedesktop.org).) This approach likely happens too late in the process to be useful, but read the docs and assess for yourself. -Adam From: Roundtable On Behalf Of vsankar at foretell.ca Sent: Saturday, February 4, 2023 12:40 PM To: roundtable at muug.ca Subject: [RndTbl] How can one execute a script just before shutdown on Ubuntu Sorry to bother you all with this question ? can you please point me to a helpful document or give me some instructions on how to execute a script just before shutting down an Ubuntu system (Linux vijay-iMac 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux)? I created a simple script -rwxr-xr-x 1 root root 51 Jan 30 11:46 /etc/init.d/before-shutdown Then did a link so that I had lrwxrwxrwx 1 root root 25 Jan 30 11:47 K04before-shutdown -> ../init.d/before-shutdown Nothing happened!! So I wasted a lot of time reading up on systemd etc., and set up unit files to execute before shutdown.target, network.target, power off.target, as well as halt.target. Since these did not work either and life is too short, I went back to what I thought was the old way of doing things and did a lrwxrwxrwx 1 root root 25 Feb 4 11:58 /etc/rc6.d/K99before-shutdown -> ../init.d/before-shutdown This was a complete fail as well. Would really appreciate any suggestions on how to make this work. 
Thanks very much, Vijay Vijay Sankar ForeTell Technologies Limited vsankar at foretell.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsankar at foretell.ca Sat Feb 4 14:15:05 2023 From: vsankar at foretell.ca (Vijay Sankar) Date: Sat, 4 Feb 2023 14:15:05 -0600 Subject: [RndTbl] How can one execute a script just before shutdown on Ubuntu In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From brian2 at groupbcl.ca Sun Feb 5 01:47:42 2023 From: brian2 at groupbcl.ca (Brian Lowe) Date: Sun, 05 Feb 2023 01:47:42 -0600 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: References: Message-ID: <3314098.zj3CrXkjeS@haremya.renyamon.net> On Friday, February 3, 2023 9:36:58 P.M. CST Trevor Cordes wrote: > Question: > Mysql (MariaDB actually, fairly recent version) issue. Innodb. > > Full db backup (table by table) runs each night 5am. > > There's a massive (few GB) table that takes a couple of mins to backup. > > When this table is getting backed-up, updates to the table pause until the > backup is done. Selects don't seem to pause(?). However, even updates > that will match zero rows seem to pause! Shouldn't the engine be doing a > select (within a transaction internally) and then quitting the query? It > seems like it's wanting to get some sort of lock before doing anything for > the update, even the select step. > > Maybe this makes sense? I suppose if I was doing the locking with > transactions in my code (I'm not), I would do a select FOR UPDATE and then > update if I had to? Would the "select for update" also pause on the > select step? This is probably unhelpful, but I'll throw it in. If the process that updates the database can be stopped momentarily without inconveniencing users, and your the system is set up with logical volume management and a few spare GB in the volume group, you can stop the process, shut down MariaDB, create a snapshot volume (which is very quick), then restart MariaDB and the process that uses it. You now have a clean copy of all the MariaDB files on the snapshot volume. You can copy them to backup storage and dismiss the snapshot. The downside is the MariaDB files take considerably more space than the SQL required to create them, even if they're compressed. The upside is a restore is as fast as copying/ decompressing the files from the backup medium--no need to go through a lengthy SQL reload. If you want a SQL file, you can mount the snapshot volume, start a second MariaDB process to connect to the database on that volume, and perform a mysqldump. Of course, all this assumes the application in question can be shut down for 30 seconds to a minute. Most of that time is spent in MariaDB shutting down cleanly and restarting. Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott at 100percenthelpdesk.com Sun Feb 5 06:44:17 2023 From: scott at 100percenthelpdesk.com (Scott Toderash) Date: Sun, 05 Feb 2023 06:44:17 -0600 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> Message-ID: <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> I had done steps very similar to yours. 
qemu-img convert -O raw S1\ -\ Production-0.vmdk /dev/vg_vmhost9/kvm140_img virt-install --name kvm140 --memory 4096 --vcpus 2 --disk /dev/vg_vmhost9/kvm140_img,bus=ide --import --network default --os-variant win2k Initially I had tried using bus=sata and that was not bootable. IDE made the C: accessible but I wonder if I need more parameters to map out the other drives. The virt-install was helpful to help me generate a decent XML file and then I tweaked it a bit from there. On first boot it went through the old "Windows found new hardware" thing, which I had completely forgot about. It couldn't find any drivers of course, so oh well. It's possible that without having installed virtio drivers before I got my snapshot it isn't going to work but I"m not sure about that. Then I thought I should be able to fire up vmware player and boot the vmdk image. Then I discovered that running player on a remote headless machine is a real hassle. It seems possible but I haven't actually got it working yet. On 2023-02-04 11:36, Chris Audet wrote: > @Scott Haven't worked much with Win 2000, thankfully. > > Tried to reproduce your problem in my lab by installing Win 2000 > Professional on ESXI, adding a bunch of virtual HDDs using FAT or NTFS > with basic or dynamic disks (trying to see if some combination caused > it to fail), downloading the VMDK files, converting to qcow2, and > importing into Proxmox. > > However wasn't able to make it past the converting VMDK to qcow2 step. > I must've messed up somewhere because after importing the disks into > Proxmox I just get a Windows "boot drive inaccessible" error. > > Sadly I didn't take many notes while experimenting with all this, but > if you end up finding a solution I'd be very curious to learn it. > > The only part that stuck out to me while doing this is that when > initializing new disks on Win 2000 it seemed to default to dynamic > disks, and was trying to build a software raid by default. I'm not > sure if this would be a factor with your disk conversion - but I can > see how if the disks are configured as a raid but "qemu-img convert" > is handling the disks one at a time it could have strange results. > > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk > Chris_Win2000.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk > Chris_Win2000_1.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk > Chris_Win2000_2.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk > Chris_Win2000_3.qcow2 > (100.00/100%) > root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm > https://ostechnix.com/import-qcow2-into-proxmox/ > https://docs.openstack.org/image-guide/convert-images.html > > On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash > wrote: > >> If the subject line doesn't scare you then maybe you can help me out >> >> with this one. Windows 2000 is its own punishment at this point, but >> >> it's part of the mission in this case. >> >> I have a working Windows 2000 server running on VMWare esxi. I >> downloaded the image as a vmdk file and attempted to import it into >> my >> Linux system as a KVM image. It mostly works. 
>> >> I did this successfully with 2 other machines that were Windows >> 2012R2 >> from the same source and same destination. Those work. >> >> W2k is having issues because it has C: E: and F: but only C: is >> recognized when I boot it up in the new environment. It shows that >> E: >> exists but it believes that it is corrupted. >> >> There are probably some parameters that I don't know about that I >> need >> to pass in order to make this work. >> >> The general process was download the vmdk image, use "qemu-img >> convert" >> to make a raw file and then try to boot that. I tried using >> virt-install >> and I tried some other manual config methods but this is as far as I >> >> have been able to get. >> >> Does anyone here have experience with this sort of scenario? >> >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From athompso at athompso.net Sun Feb 5 07:50:54 2023 From: athompso at athompso.net (Adam Thompson) Date: Sun, 5 Feb 2023 13:50:54 +0000 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> Message-ID: Kinda orthogonal to the original problem, but if you want to run KVM VMs on a remote headless machine, I quite strongly recommend using a canned system for doing that such as ProxmoxVE (PVE) or similar, and not relying on the traditional libvirt CLI tooling. If you don't like PVE, there are quite a few other projects that accomplish much the same ends. -Adam Get Outlook for Android ________________________________ From: Roundtable on behalf of Scott Toderash Sent: Sunday, February 5, 2023 6:44:17 AM To: Continuation of Round Table discussion Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 I had done steps very similar to yours. qemu-img convert -O raw S1\ -\ Production-0.vmdk /dev/vg_vmhost9/kvm140_img virt-install --name kvm140 --memory 4096 --vcpus 2 --disk /dev/vg_vmhost9/kvm140_img,bus=ide --import --network default --os-variant win2k Initially I had tried using bus=sata and that was not bootable. IDE made the C: accessible but I wonder if I need more parameters to map out the other drives. The virt-install was helpful to help me generate a decent XML file and then I tweaked it a bit from there. On first boot it went through the old "Windows found new hardware" thing, which I had completely forgot about. It couldn't find any drivers of course, so oh well. It's possible that without having installed virtio drivers before I got my snapshot it isn't going to work but I"m not sure about that. Then I thought I should be able to fire up vmware player and boot the vmdk image. Then I discovered that running player on a remote headless machine is a real hassle. It seems possible but I haven't actually got it working yet. On 2023-02-04 11:36, Chris Audet wrote: > @Scott Haven't worked much with Win 2000, thankfully. > > Tried to reproduce your problem in my lab by installing Win 2000 > Professional on ESXI, adding a bunch of virtual HDDs using FAT or NTFS > with basic or dynamic disks (trying to see if some combination caused > it to fail), downloading the VMDK files, converting to qcow2, and > importing into Proxmox. 
> > However wasn't able to make it past the converting VMDK to qcow2 step. > I must've messed up somewhere because after importing the disks into > Proxmox I just get a Windows "boot drive inaccessible" error. > > Sadly I didn't take many notes while experimenting with all this, but > if you end up finding a solution I'd be very curious to learn it. > > The only part that stuck out to me while doing this is that when > initializing new disks on Win 2000 it seemed to default to dynamic > disks, and was trying to build a software raid by default. I'm not > sure if this would be a factor with your disk conversion - but I can > see how if the disks are configured as a raid but "qemu-img convert" > is handling the disks one at a time it could have strange results. > > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk > Chris_Win2000.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk > Chris_Win2000_1.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk > Chris_Win2000_2.qcow2 > (100.00/100%) > bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk > Chris_Win2000_3.qcow2 > (100.00/100%) > root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm > root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm > https://ostechnix.com/import-qcow2-into-proxmox/ > https://docs.openstack.org/image-guide/convert-images.html > > On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash > wrote: > >> If the subject line doesn't scare you then maybe you can help me out >> >> with this one. Windows 2000 is its own punishment at this point, but >> >> it's part of the mission in this case. >> >> I have a working Windows 2000 server running on VMWare esxi. I >> downloaded the image as a vmdk file and attempted to import it into >> my >> Linux system as a KVM image. It mostly works. >> >> I did this successfully with 2 other machines that were Windows >> 2012R2 >> from the same source and same destination. Those work. >> >> W2k is having issues because it has C: E: and F: but only C: is >> recognized when I boot it up in the new environment. It shows that >> E: >> exists but it believes that it is corrupted. >> >> There are probably some parameters that I don't know about that I >> need >> to pass in order to make this work. >> >> The general process was download the vmdk image, use "qemu-img >> convert" >> to make a raw file and then try to boot that. I tried using >> virt-install >> and I tried some other manual config methods but this is as far as I >> >> have been able to get. >> >> Does anyone here have experience with this sort of scenario? >> >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable _______________________________________________ Roundtable mailing list Roundtable at muug.ca https://muug.ca/mailman/listinfo/roundtable -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From athompso at athompso.net Sun Feb 5 14:27:25 2023 From: athompso at athompso.net (Adam Thompson) Date: Sun, 5 Feb 2023 20:27:25 +0000 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: <128bc3d63af5e532f5ee2b594e87ed7b@100percenthelpdesk.com> References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> <128bc3d63af5e532f5ee2b594e87ed7b@100percenthelpdesk.com> Message-ID: Cool - I've never heard of that one! Would you recommend it? Get Outlook for Android ________________________________ From: Scott Toderash Sent: Sunday, February 5, 2023 2:26:16 PM To: Continuation of Round Table discussion Cc: Adam Thompson Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 I've been using SolusVM for a while. Originally I picked it because it integrates with WHMCS but have not been leveraging that. It helped a lot in that it does the dirty work and then I can look under the hood at the XML etc and learn more about how to use libvirt CLI. The result is I can do a few things in libvirt and then import into SolusVM and have a properly managed VM. (In most cases.) On 2023-02-05 07:50, Adam Thompson wrote: > Kinda orthogonal to the original problem, but if you want to run KVM > VMs on a remote headless machine, I quite strongly recommend using a > canned system for doing that such as ProxmoxVE (PVE) or similar, and > not relying on the traditional libvirt CLI tooling. If you don't like > PVE, there are quite a few other projects that accomplish much the > same ends. > -Adam > > Get Outlook for Android [1] > ------------------------- > > From: Roundtable on behalf of Scott > Toderash > Sent: Sunday, February 5, 2023 6:44:17 AM > To: Continuation of Round Table discussion > Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 > > I had done steps very similar to yours. > > qemu-img convert -O raw S1\ -\ Production-0.vmdk > /dev/vg_vmhost9/kvm140_img > > virt-install --name kvm140 --memory 4096 --vcpus 2 --disk > /dev/vg_vmhost9/kvm140_img,bus=ide --import --network default > --os-variant win2k > > Initially I had tried using bus=sata and that was not bootable. IDE > made > the C: accessible but I wonder if I need more parameters to map out > the > other drives. > > The virt-install was helpful to help me generate a decent XML file and > > then I tweaked it a bit from there. > > On first boot it went through the old "Windows found new hardware" > thing, which I had completely forgot about. It couldn't find any > drivers > of course, so oh well. It's possible that without having installed > virtio drivers before I got my snapshot it isn't going to work but I"m > > not sure about that. > > Then I thought I should be able to fire up vmware player and boot the > vmdk image. Then I discovered that running player on a remote headless > > machine is a real hassle. It seems possible but I haven't actually got > > it working yet. > > On 2023-02-04 11:36, Chris Audet wrote: >> @Scott Haven't worked much with Win 2000, thankfully. >> >> Tried to reproduce your problem in my lab by installing Win 2000 >> Professional on ESXI, adding a bunch of virtual HDDs using FAT or > NTFS >> with basic or dynamic disks (trying to see if some combination > caused >> it to fail), downloading the VMDK files, converting to qcow2, and >> importing into Proxmox. >> >> However wasn't able to make it past the converting VMDK to qcow2 > step. 
>> I must've messed up somewhere because after importing the disks > into >> Proxmox I just get a Windows "boot drive inaccessible" error. >> >> Sadly I didn't take many notes while experimenting with all this, > but >> if you end up finding a solution I'd be very curious to learn it. >> >> The only part that stuck out to me while doing this is that when >> initializing new disks on Win 2000 it seemed to default to dynamic >> disks, and was trying to build a software raid by default. I'm not >> sure if this would be a factor with your disk conversion - but I can >> see how if the disks are configured as a raid but "qemu-img convert" >> is handling the disks one at a time it could have strange results. >> >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk >> Chris_Win2000.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk >> Chris_Win2000_1.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk >> Chris_Win2000_2.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk >> Chris_Win2000_3.qcow2 >> (100.00/100%) >> root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm >> https://ostechnix.com/import-qcow2-into-proxmox/ >> https://docs.openstack.org/image-guide/convert-images.html >> >> On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash >> wrote: >> >>> If the subject line doesn't scare you then maybe you can help me > out >>> >>> with this one. Windows 2000 is its own punishment at this point, > but >>> >>> it's part of the mission in this case. >>> >>> I have a working Windows 2000 server running on VMWare esxi. I >>> downloaded the image as a vmdk file and attempted to import it into >>> my >>> Linux system as a KVM image. It mostly works. >>> >>> I did this successfully with 2 other machines that were Windows >>> 2012R2 >>> from the same source and same destination. Those work. >>> >>> W2k is having issues because it has C: E: and F: but only C: is >>> recognized when I boot it up in the new environment. It shows that >>> E: >>> exists but it believes that it is corrupted. >>> >>> There are probably some parameters that I don't know about that I >>> need >>> to pass in order to make this work. >>> >>> The general process was download the vmdk image, use "qemu-img >>> convert" >>> to make a raw file and then try to boot that. I tried using >>> virt-install >>> and I tried some other manual config methods but this is as far as > I >>> >>> have been able to get. >>> >>> Does anyone here have experience with this sort of scenario? 
>>> >>> _______________________________________________ >>> Roundtable mailing list >>> Roundtable at muug.ca >>> https://muug.ca/mailman/listinfo/roundtable >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable > > > Links: > ------ > [1] https://aka.ms/AAb9ysg > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott at 100percenthelpdesk.com Sun Feb 5 14:26:16 2023 From: scott at 100percenthelpdesk.com (Scott Toderash) Date: Sun, 05 Feb 2023 14:26:16 -0600 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> Message-ID: <128bc3d63af5e532f5ee2b594e87ed7b@100percenthelpdesk.com> I've been using SolusVM for a while. Originally I picked it because it integrates with WHMCS but have not been leveraging that. It helped a lot in that it does the dirty work and then I can look under the hood at the XML etc and learn more about how to use libvirt CLI. The result is I can do a few things in libvirt and then import into SolusVM and have a properly managed VM. (In most cases.) On 2023-02-05 07:50, Adam Thompson wrote: > Kinda orthogonal to the original problem, but if you want to run KVM > VMs on a remote headless machine, I quite strongly recommend using a > canned system for doing that such as ProxmoxVE (PVE) or similar, and > not relying on the traditional libvirt CLI tooling. If you don't like > PVE, there are quite a few other projects that accomplish much the > same ends. > -Adam > > Get Outlook for Android [1] > ------------------------- > > From: Roundtable on behalf of Scott > Toderash > Sent: Sunday, February 5, 2023 6:44:17 AM > To: Continuation of Round Table discussion > Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 > > I had done steps very similar to yours. > > qemu-img convert -O raw S1\ -\ Production-0.vmdk > /dev/vg_vmhost9/kvm140_img > > virt-install --name kvm140 --memory 4096 --vcpus 2 --disk > /dev/vg_vmhost9/kvm140_img,bus=ide --import --network default > --os-variant win2k > > Initially I had tried using bus=sata and that was not bootable. IDE > made > the C: accessible but I wonder if I need more parameters to map out > the > other drives. > > The virt-install was helpful to help me generate a decent XML file and > > then I tweaked it a bit from there. > > On first boot it went through the old "Windows found new hardware" > thing, which I had completely forgot about. It couldn't find any > drivers > of course, so oh well. It's possible that without having installed > virtio drivers before I got my snapshot it isn't going to work but I"m > > not sure about that. > > Then I thought I should be able to fire up vmware player and boot the > vmdk image. Then I discovered that running player on a remote headless > > machine is a real hassle. It seems possible but I haven't actually got > > it working yet. > > On 2023-02-04 11:36, Chris Audet wrote: >> @Scott Haven't worked much with Win 2000, thankfully. 
>> >> Tried to reproduce your problem in my lab by installing Win 2000 >> Professional on ESXI, adding a bunch of virtual HDDs using FAT or > NTFS >> with basic or dynamic disks (trying to see if some combination > caused >> it to fail), downloading the VMDK files, converting to qcow2, and >> importing into Proxmox. >> >> However wasn't able to make it past the converting VMDK to qcow2 > step. >> I must've messed up somewhere because after importing the disks > into >> Proxmox I just get a Windows "boot drive inaccessible" error. >> >> Sadly I didn't take many notes while experimenting with all this, > but >> if you end up finding a solution I'd be very curious to learn it. >> >> The only part that stuck out to me while doing this is that when >> initializing new disks on Win 2000 it seemed to default to dynamic >> disks, and was trying to build a software raid by default. I'm not >> sure if this would be a factor with your disk conversion - but I can >> see how if the disks are configured as a raid but "qemu-img convert" >> is handling the disks one at a time it could have strange results. >> >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk >> Chris_Win2000.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk >> Chris_Win2000_1.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk >> Chris_Win2000_2.qcow2 >> (100.00/100%) >> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk >> Chris_Win2000_3.qcow2 >> (100.00/100%) >> root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm >> root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm >> https://ostechnix.com/import-qcow2-into-proxmox/ >> https://docs.openstack.org/image-guide/convert-images.html >> >> On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash >> wrote: >> >>> If the subject line doesn't scare you then maybe you can help me > out >>> >>> with this one. Windows 2000 is its own punishment at this point, > but >>> >>> it's part of the mission in this case. >>> >>> I have a working Windows 2000 server running on VMWare esxi. I >>> downloaded the image as a vmdk file and attempted to import it into >>> my >>> Linux system as a KVM image. It mostly works. >>> >>> I did this successfully with 2 other machines that were Windows >>> 2012R2 >>> from the same source and same destination. Those work. >>> >>> W2k is having issues because it has C: E: and F: but only C: is >>> recognized when I boot it up in the new environment. It shows that >>> E: >>> exists but it believes that it is corrupted. >>> >>> There are probably some parameters that I don't know about that I >>> need >>> to pass in order to make this work. >>> >>> The general process was download the vmdk image, use "qemu-img >>> convert" >>> to make a raw file and then try to boot that. I tried using >>> virt-install >>> and I tried some other manual config methods but this is as far as > I >>> >>> have been able to get. >>> >>> Does anyone here have experience with this sort of scenario? 
>>> >>> _______________________________________________ >>> Roundtable mailing list >>> Roundtable at muug.ca >>> https://muug.ca/mailman/listinfo/roundtable >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable > > > Links: > ------ > [1] https://aka.ms/AAb9ysg > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From cj.audet at gmail.com Sun Feb 5 14:56:04 2023 From: cj.audet at gmail.com (Chris Audet) Date: Sun, 5 Feb 2023 14:56:04 -0600 Subject: [RndTbl] Best ways to find where disk space is being used? Message-ID: I've got a fairly long lived CentOS server that stubbornly stopped installing updates because the HDD is full. Can someone share their favourite way to determine where disk space is being used up on a system? For example, on Windows I'd use Wiztree/Treesize/Windirstat. On Linux desktop I've been using Gnome Disk Usage Analyzer (aka Baobab) . But I'm not sure what the best solutions are in cases where there's no GUI available. I could always mount / over SSH and use Baobab to crawl the remote filesystem, but that seems less than optimal ? [root at dogmeat ~]# yum update Loaded plugins: fastestmirror, versionlock Loading mirror speeds from cached hostfile * base: mirror.csclub.uwaterloo.ca * epel: ftp.cse.buffalo.edu * extras: mirror.xenyth.net * updates: mirror.csclub.uwaterloo.ca Excluding 5 updates due to versionlock (use "yum versionlock status" to show them) Resolving Dependencies --> Running transaction check ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package dkms.noarch 0:3.0.9-2.el7 will be updated ---> Package dkms.noarch 0:3.0.10-1.el7 will be an update ---> Package httpd.x86_64 0:2.4.6-97.el7.centos.5 will be updated ---> Package httpd.x86_64 0:2.4.6-98.el7.centos.6 will be an update ---> Package httpd-tools.x86_64 0:2.4.6-97.el7.centos.5 will be updated ---> Package httpd-tools.x86_64 0:2.4.6-98.el7.centos.6 will be an update ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.352.b08-2.el7_9 will be updated ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.362.b08-1.el7_9 will be an update ---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.352.b08-2.el7_9 will be updated ---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.362.b08-1.el7_9 will be an update ---> Package kernel.x86_64 0:3.10.0-1160.83.1.el7 will be installed ---> Package kernel-devel.x86_64 0:3.10.0-1160.83.1.el7 will be installed ---> Package kernel-headers.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-headers.x86_64 0:3.10.0-1160.83.1.el7 will be 
an update ---> Package kernel-tools.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-tools.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package python-perf.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package python-perf.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package sudo.x86_64 0:1.8.23-10.el7_9.2 will be updated ---> Package sudo.x86_64 0:1.8.23-10.el7_9.3 will be an update ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-19.el7_9 will be updated ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-21.el7_9 will be an update ---> Package xorg-x11-server-common.x86_64 0:1.20.4-19.el7_9 will be updated ---> Package xorg-x11-server-common.x86_64 0:1.20.4-21.el7_9 will be an update --> Finished Dependency Resolution --> Running transaction check ---> Package kernel.x86_64 0:3.10.0-1160.45.1.el7 will be erased ---> Package kernel-devel.x86_64 0:3.10.0-1160.45.1.el7 will be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: kernel x86_64 3.10.0-1160.83.1.el7 updates 52 M kernel-devel x86_64 3.10.0-1160.83.1.el7 updates 18 M Updating: bind-export-libs x86_64 32:9.11.4-26.P2.el7_9.13 updates 1.1 M bind-libs x86_64 32:9.11.4-26.P2.el7_9.13 updates 158 k bind-libs-lite x86_64 32:9.11.4-26.P2.el7_9.13 updates 1.1 M bind-license noarch 32:9.11.4-26.P2.el7_9.13 updates 92 k bind-utils x86_64 32:9.11.4-26.P2.el7_9.13 updates 262 k dkms noarch 3.0.10-1.el7 epel 85 k httpd x86_64 2.4.6-98.el7.centos.6 updates 2.7 M httpd-tools x86_64 2.4.6-98.el7.centos.6 updates 94 k java-1.8.0-openjdk x86_64 1:1.8.0.362.b08-1.el7_9 updates 317 k java-1.8.0-openjdk-headless x86_64 1:1.8.0.362.b08-1.el7_9 updates 33 M kernel-headers x86_64 3.10.0-1160.83.1.el7 updates 9.1 M kernel-tools x86_64 3.10.0-1160.83.1.el7 updates 8.2 M kernel-tools-libs x86_64 3.10.0-1160.83.1.el7 updates 8.1 M python-perf x86_64 3.10.0-1160.83.1.el7 updates 8.2 M sudo x86_64 1.8.23-10.el7_9.3 updates 844 k xorg-x11-server-Xvfb x86_64 1.20.4-21.el7_9 updates 857 k xorg-x11-server-common x86_64 1.20.4-21.el7_9 updates 57 k Removing: kernel x86_64 3.10.0-1160.45.1.el7 @updates 64 M kernel-devel x86_64 3.10.0-1160.45.1.el7 @updates 38 M Transaction Summary ================================================================================ Install 2 Packages Upgrade 17 Packages Remove 2 Packages Total size: 144 M Is this ok [y/d/N]: y Downloading packages: Running transaction check Running transaction test Transaction check error: installing package python-perf-3.10.0-1160.83.1.el7.x86_64 needs 23MB on the / filesystem installing package sudo-1.8.23-10.el7_9.3.x86_64 needs 26MB on the / filesystem installing package kernel-3.10.0-1160.83.1.el7.x86_64 needs 106MB on the / filesystem installing package bind-export-libs-32:9.11.4-26.P2.el7_9.13.x86_64 needs 109MB on the / filesystem *Error Summary-------------Disk Requirements: At least 109MB more space needed on the / filesystem.* [root at dogmeat ~]# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 3.8G 0 3.8G 0% /dev tmpfs 3.9G 148K 3.9G 1% /dev/shm tmpfs 3.9G 11M 3.8G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup */dev/mapper/centos_ba--bog--v-root 41G 40G 355M 100% /* /dev/sda1 
497M 346M 151M 70% /boot /dev/mapper/centos_ba--bog--v-home 20G 99M 20G 1% /home tmpfs 779M 0 779M 0% /run/user/0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From athompso at athompso.net Sun Feb 5 15:05:25 2023 From: athompso at athompso.net (Adam Thompson) Date: Sun, 5 Feb 2023 21:05:25 +0000 Subject: [RndTbl] Best ways to find where disk space is being used? In-Reply-To: References: Message-ID: Ncdu(1). It's in EPEL among other places, and IIRC it's not too hard to compile if you absolutely must. Its defaults are sane, but check out the options especially "-x". -Adam Get Outlook for Android ________________________________ From: Roundtable on behalf of Chris Audet Sent: Sunday, February 5, 2023 2:56:04 PM To: roundtable at muug.ca Subject: [RndTbl] Best ways to find where disk space is being used? I've got a fairly long lived CentOS server that stubbornly stopped installing updates because the HDD is full. Can someone share their favourite way to determine where disk space is being used up on a system? For example, on Windows I'd use Wiztree/Treesize/Windirstat. On Linux desktop I've been using Gnome Disk Usage Analyzer (aka Baobab). But I'm not sure what the best solutions are in cases where there's no GUI available. I could always mount / over SSH and use Baobab to crawl the remote filesystem, but that seems less than optimal ? [root at dogmeat ~]# yum update Loaded plugins: fastestmirror, versionlock Loading mirror speeds from cached hostfile * base: mirror.csclub.uwaterloo.ca * epel: ftp.cse.buffalo.edu * extras: mirror.xenyth.net * updates: mirror.csclub.uwaterloo.ca Excluding 5 updates due to versionlock (use "yum versionlock status" to show them) Resolving Dependencies --> Running transaction check ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.13 will be an update ---> Package dkms.noarch 0:3.0.9-2.el7 will be updated ---> Package dkms.noarch 0:3.0.10-1.el7 will be an update ---> Package httpd.x86_64 0:2.4.6-97.el7.centos.5 will be updated ---> Package httpd.x86_64 0:2.4.6-98.el7.centos.6 will be an update ---> Package httpd-tools.x86_64 0:2.4.6-97.el7.centos.5 will be updated ---> Package httpd-tools.x86_64 0:2.4.6-98.el7.centos.6 will be an update ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.352.b08-2.el7_9 will be updated ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.362.b08-1.el7_9 will be an update ---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.352.b08-2.el7_9 will be updated ---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.362.b08-1.el7_9 will be an update ---> Package kernel.x86_64 0:3.10.0-1160.83.1.el7 will be installed ---> Package kernel-devel.x86_64 0:3.10.0-1160.83.1.el7 will be installed ---> Package kernel-headers.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-headers.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> 
Package kernel-tools.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-tools.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package python-perf.x86_64 0:3.10.0-1160.81.1.el7 will be updated ---> Package python-perf.x86_64 0:3.10.0-1160.83.1.el7 will be an update ---> Package sudo.x86_64 0:1.8.23-10.el7_9.2 will be updated ---> Package sudo.x86_64 0:1.8.23-10.el7_9.3 will be an update ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-19.el7_9 will be updated ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-21.el7_9 will be an update ---> Package xorg-x11-server-common.x86_64 0:1.20.4-19.el7_9 will be updated ---> Package xorg-x11-server-common.x86_64 0:1.20.4-21.el7_9 will be an update --> Finished Dependency Resolution --> Running transaction check ---> Package kernel.x86_64 0:3.10.0-1160.45.1.el7 will be erased ---> Package kernel-devel.x86_64 0:3.10.0-1160.45.1.el7 will be erased --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: kernel x86_64 3.10.0-1160.83.1.el7 updates 52 M kernel-devel x86_64 3.10.0-1160.83.1.el7 updates 18 M Updating: bind-export-libs x86_64 32:9.11.4-26.P2.el7_9.13 updates 1.1 M bind-libs x86_64 32:9.11.4-26.P2.el7_9.13 updates 158 k bind-libs-lite x86_64 32:9.11.4-26.P2.el7_9.13 updates 1.1 M bind-license noarch 32:9.11.4-26.P2.el7_9.13 updates 92 k bind-utils x86_64 32:9.11.4-26.P2.el7_9.13 updates 262 k dkms noarch 3.0.10-1.el7 epel 85 k httpd x86_64 2.4.6-98.el7.centos.6 updates 2.7 M httpd-tools x86_64 2.4.6-98.el7.centos.6 updates 94 k java-1.8.0-openjdk x86_64 1:1.8.0.362.b08-1.el7_9 updates 317 k java-1.8.0-openjdk-headless x86_64 1:1.8.0.362.b08-1.el7_9 updates 33 M kernel-headers x86_64 3.10.0-1160.83.1.el7 updates 9.1 M kernel-tools x86_64 3.10.0-1160.83.1.el7 updates 8.2 M kernel-tools-libs x86_64 3.10.0-1160.83.1.el7 updates 8.1 M python-perf x86_64 3.10.0-1160.83.1.el7 updates 8.2 M sudo x86_64 1.8.23-10.el7_9.3 updates 844 k xorg-x11-server-Xvfb x86_64 1.20.4-21.el7_9 updates 857 k xorg-x11-server-common x86_64 1.20.4-21.el7_9 updates 57 k Removing: kernel x86_64 3.10.0-1160.45.1.el7 @updates 64 M kernel-devel x86_64 3.10.0-1160.45.1.el7 @updates 38 M Transaction Summary ================================================================================ Install 2 Packages Upgrade 17 Packages Remove 2 Packages Total size: 144 M Is this ok [y/d/N]: y Downloading packages: Running transaction check Running transaction test Transaction check error: installing package python-perf-3.10.0-1160.83.1.el7.x86_64 needs 23MB on the / filesystem installing package sudo-1.8.23-10.el7_9.3.x86_64 needs 26MB on the / filesystem installing package kernel-3.10.0-1160.83.1.el7.x86_64 needs 106MB on the / filesystem installing package bind-export-libs-32:9.11.4-26.P2.el7_9.13.x86_64 needs 109MB on the / filesystem Error Summary ------------- Disk Requirements: At least 109MB more space needed on the / filesystem. 
[root at dogmeat ~]# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 3.8G 0 3.8G 0% /dev tmpfs 3.9G 148K 3.9G 1% /dev/shm tmpfs 3.9G 11M 3.8G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/centos_ba--bog--v-root 41G 40G 355M 100% / /dev/sda1 497M 346M 151M 70% /boot /dev/mapper/centos_ba--bog--v-home 20G 99M 20G 1% /home tmpfs 779M 0 779M 0% /run/user/0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott at 100percenthelpdesk.com Sun Feb 5 15:11:47 2023 From: scott at 100percenthelpdesk.com (Scott Toderash) Date: Sun, 05 Feb 2023 15:11:47 -0600 Subject: [RndTbl] libvirt, vmware, and Windows 2000 In-Reply-To: References: <3ad1c941-92a6-a09d-aaf1-8b8ce8298891@100percenthelpdesk.com> <61d81d0e4c9c987bf22a46b9403e8adf@100percent.ninja> <128bc3d63af5e532f5ee2b594e87ed7b@100percenthelpdesk.com> Message-ID: <26d1fa6fbd1de0466019ed4b14076827@100percenthelpdesk.com> Yes, it's pretty good. Was acquired by Plesk a while ago. (Meaning: proper maintenance happens.) Pricing model is quite reasonble. THB though I did not do a thorough comparison beforehand. On 2023-02-05 14:27, Adam Thompson wrote: > Cool - I've never heard of that one! Would you recommend it? > > Get Outlook for Android [1] > ------------------------- > > From: Scott Toderash > Sent: Sunday, February 5, 2023 2:26:16 PM > To: Continuation of Round Table discussion > Cc: Adam Thompson > Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 > > I've been using SolusVM for a while. Originally I picked it because it > > integrates with WHMCS but have not been leveraging that. It helped a > lot > in that it does the dirty work and then I can look under the hood at > the > XML etc and learn more about how to use libvirt CLI. The result is I > can > do a few things in libvirt and then import into SolusVM and have a > properly managed VM. (In most cases.) > > On 2023-02-05 07:50, Adam Thompson wrote: >> Kinda orthogonal to the original problem, but if you want to run KVM >> VMs on a remote headless machine, I quite strongly recommend using a >> canned system for doing that such as ProxmoxVE (PVE) or similar, and >> not relying on the traditional libvirt CLI tooling. If you don't > like >> PVE, there are quite a few other projects that accomplish much the >> same ends. >> -Adam >> >> Get Outlook for Android [1] >> ------------------------- >> >> From: Roundtable on behalf of Scott >> Toderash >> Sent: Sunday, February 5, 2023 6:44:17 AM >> To: Continuation of Round Table discussion >> Subject: Re: [RndTbl] libvirt, vmware, and Windows 2000 >> >> I had done steps very similar to yours. >> >> qemu-img convert -O raw S1\ -\ Production-0.vmdk >> /dev/vg_vmhost9/kvm140_img >> >> virt-install --name kvm140 --memory 4096 --vcpus 2 --disk >> /dev/vg_vmhost9/kvm140_img,bus=ide --import --network default >> --os-variant win2k >> >> Initially I had tried using bus=sata and that was not bootable. IDE >> made >> the C: accessible but I wonder if I need more parameters to map out >> the >> other drives. >> >> The virt-install was helpful to help me generate a decent XML file > and >> >> then I tweaked it a bit from there. >> >> On first boot it went through the old "Windows found new hardware" >> thing, which I had completely forgot about. It couldn't find any >> drivers >> of course, so oh well. It's possible that without having installed >> virtio drivers before I got my snapshot it isn't going to work but > I"m >> >> not sure about that. 
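One possible way to map out the other drives, sketched only from the commands quoted above: convert each remaining VMDK the same way and give virt-install one --disk argument per volume. The "-1"/"-2" file names and the _e/_f logical volume names below are made-up placeholders (and the target LVs would have to be created first), not the actual setup:

qemu-img convert -O raw "S1 - Production-1.vmdk" /dev/vg_vmhost9/kvm140_img_e
qemu-img convert -O raw "S1 - Production-2.vmdk" /dev/vg_vmhost9/kvm140_img_f

virt-install --name kvm140 --memory 4096 --vcpus 2 \
  --disk /dev/vg_vmhost9/kvm140_img,bus=ide \
  --disk /dev/vg_vmhost9/kvm140_img_e,bus=ide \
  --disk /dev/vg_vmhost9/kvm140_img_f,bus=ide \
  --import --network default --os-variant win2k

virt-install accepts --disk any number of times; a QEMU IDE controller typically tops out at four devices, which still leaves room for C:, E: and F: here.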
>> >> Then I thought I should be able to fire up vmware player and boot > the >> vmdk image. Then I discovered that running player on a remote > headless >> >> machine is a real hassle. It seems possible but I haven't actually > got >> >> it working yet. >> >> On 2023-02-04 11:36, Chris Audet wrote: >>> @Scott Haven't worked much with Win 2000, thankfully. >>> >>> Tried to reproduce your problem in my lab by installing Win 2000 >>> Professional on ESXI, adding a bunch of virtual HDDs using FAT or >> NTFS >>> with basic or dynamic disks (trying to see if some combination >> caused >>> it to fail), downloading the VMDK files, converting to qcow2, and >>> importing into Proxmox. >>> >>> However wasn't able to make it past the converting VMDK to qcow2 >> step. >>> I must've messed up somewhere because after importing the disks >> into >>> Proxmox I just get a Windows "boot drive inaccessible" error. >>> >>> Sadly I didn't take many notes while experimenting with all this, >> but >>> if you end up finding a solution I'd be very curious to learn it. >>> >>> The only part that stuck out to me while doing this is that when >>> initializing new disks on Win 2000 it seemed to default to dynamic >>> disks, and was trying to build a software raid by default. I'm not >>> sure if this would be a factor with your disk conversion - but I > can >>> see how if the disks are configured as a raid but "qemu-img > convert" >>> is handling the disks one at a time it could have strange results. >>> >>> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000.vmdk >>> Chris_Win2000.qcow2 >>> (100.00/100%) >>> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_1.vmdk >>> Chris_Win2000_1.qcow2 >>> (100.00/100%) >>> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_2.vmdk >>> Chris_Win2000_2.qcow2 >>> (100.00/100%) >>> bash-5.1$ qemu-img convert -p -f vmdk -O qcow2 Chris_Win2000_3.vmdk >>> Chris_Win2000_3.qcow2 >>> (100.00/100%) >>> root at MGV7091:~# qm importdisk 102 Chris_Win2000.qcow2 local-lvm >>> root at MGV7091:~# qm importdisk 102 Chris_Win2000_1.qcow2 local-lvm >>> root at MGV7091:~# qm importdisk 102 Chris_Win2000_2.qcow2 local-lvm >>> root at MGV7091:~# qm importdisk 102 Chris_Win2000_3.qcow2 local-lvm >>> https://ostechnix.com/import-qcow2-into-proxmox/ >>> https://docs.openstack.org/image-guide/convert-images.html >>> >>> On Tue, Jan 31, 2023 at 12:47 PM Scott Toderash >>> wrote: >>> >>>> If the subject line doesn't scare you then maybe you can help me >> out >>>> >>>> with this one. Windows 2000 is its own punishment at this point, >> but >>>> >>>> it's part of the mission in this case. >>>> >>>> I have a working Windows 2000 server running on VMWare esxi. I >>>> downloaded the image as a vmdk file and attempted to import it > into >>>> my >>>> Linux system as a KVM image. It mostly works. >>>> >>>> I did this successfully with 2 other machines that were Windows >>>> 2012R2 >>>> from the same source and same destination. Those work. >>>> >>>> W2k is having issues because it has C: E: and F: but only C: is >>>> recognized when I boot it up in the new environment. It shows that >>>> E: >>>> exists but it believes that it is corrupted. >>>> >>>> There are probably some parameters that I don't know about that I >>>> need >>>> to pass in order to make this work. >>>> >>>> The general process was download the vmdk image, use "qemu-img >>>> convert" >>>> to make a raw file and then try to boot that. 
I tried using >>>> virt-install >>>> and I tried some other manual config methods but this is as far as >> I >>>> >>>> have been able to get. >>>> >>>> Does anyone here have experience with this sort of scenario? >>>> >>>> _______________________________________________ >>>> Roundtable mailing list >>>> Roundtable at muug.ca >>>> https://muug.ca/mailman/listinfo/roundtable >>> _______________________________________________ >>> Roundtable mailing list >>> Roundtable at muug.ca >>> https://muug.ca/mailman/listinfo/roundtable >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable >> >> >> Links: >> ------ >> [1] https://aka.ms/AAb9ysg >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable > > > Links: > ------ > [1] https://aka.ms/AAb9ysg From scott at 100percenthelpdesk.com Sun Feb 5 15:14:27 2023 From: scott at 100percenthelpdesk.com (Scott Toderash) Date: Sun, 05 Feb 2023 15:14:27 -0600 Subject: [RndTbl] Best ways to find where disk space is being used? In-Reply-To: References: Message-ID: <29f45173d4a5b6b479ec14ba0a3422d0@100percenthelpdesk.com> I never really progressed beyond du -ms *|sort -nr|head -25 But that works wonders in these situations. On 2023-02-05 14:56, Chris Audet wrote: > I've got a fairly long lived CentOS server that stubbornly stopped > installing updates because the HDD is full. > > Can someone share their favourite way to determine where disk space is > being used up on a system? > > For example, on Windows I'd use Wiztree/Treesize/Windirstat. On Linux > desktop I've been using Gnome Disk Usage Analyzer (aka Baobab) [1]. > > But I'm not sure what the best solutions are in cases where there's no > GUI available. I could always mount / over SSH and use Baobab to > crawl the remote filesystem, but that seems less than optimal ? 
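A slightly expanded form of that du one-liner, as a sketch assuming GNU du: the -x flag keeps it from crossing into other mounted filesystems, which matters when / is the full one:

du -xms * | sort -nr | head -25
du -xm --max-depth=1 / 2>/dev/null | sort -nr | head -25

The second form starts at / itself and stays on the root filesystem, so /proc, /sys and a separately mounted /home don't distort the totals.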
> > [root at dogmeat ~]# yum update > Loaded plugins: fastestmirror, versionlock > Loading mirror speeds from cached hostfile > * base: mirror.csclub.uwaterloo.ca [2] > * epel: ftp.cse.buffalo.edu [3] > * extras: mirror.xenyth.net [4] > * updates: mirror.csclub.uwaterloo.ca [2] > Excluding 5 updates due to versionlock (use "yum versionlock status" > to show them) > Resolving Dependencies > --> Running transaction check > ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be > updated > ---> Package bind-export-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be > an update > ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.10 will be updated > ---> Package bind-libs.x86_64 32:9.11.4-26.P2.el7_9.13 will be an > update > ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.10 will be > updated > ---> Package bind-libs-lite.x86_64 32:9.11.4-26.P2.el7_9.13 will be an > update > ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.10 will be > updated > ---> Package bind-license.noarch 32:9.11.4-26.P2.el7_9.13 will be an > update > ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.10 will be > updated > ---> Package bind-utils.x86_64 32:9.11.4-26.P2.el7_9.13 will be an > update > ---> Package dkms.noarch 0:3.0.9-2.el7 will be updated > ---> Package dkms.noarch 0:3.0.10-1.el7 will be an update > ---> Package httpd.x86_64 0:2.4.6-97.el7.centos.5 will be updated > ---> Package httpd.x86_64 0:2.4.6-98.el7.centos.6 will be an update > ---> Package httpd-tools.x86_64 0:2.4.6-97.el7.centos.5 will be > updated > ---> Package httpd-tools.x86_64 0:2.4.6-98.el7.centos.6 will be an > update > ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.352.b08-2.el7_9 will be > updated > ---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.362.b08-1.el7_9 will be > an update > ---> Package java-1.8.0-openjdk-headless.x86_64 > 1:1.8.0.352.b08-2.el7_9 will be updated > ---> Package java-1.8.0-openjdk-headless.x86_64 > 1:1.8.0.362.b08-1.el7_9 will be an update > ---> Package kernel.x86_64 0:3.10.0-1160.83.1.el7 will be installed > ---> Package kernel-devel.x86_64 0:3.10.0-1160.83.1.el7 will be > installed > ---> Package kernel-headers.x86_64 0:3.10.0-1160.81.1.el7 will be > updated > ---> Package kernel-headers.x86_64 0:3.10.0-1160.83.1.el7 will be an > update > ---> Package kernel-tools.x86_64 0:3.10.0-1160.81.1.el7 will be > updated > ---> Package kernel-tools.x86_64 0:3.10.0-1160.83.1.el7 will be an > update > ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.81.1.el7 will be > updated > ---> Package kernel-tools-libs.x86_64 0:3.10.0-1160.83.1.el7 will be > an update > ---> Package python-perf.x86_64 0:3.10.0-1160.81.1.el7 will be updated > ---> Package python-perf.x86_64 0:3.10.0-1160.83.1.el7 will be an > update > ---> Package sudo.x86_64 0:1.8.23-10.el7_9.2 will be updated > ---> Package sudo.x86_64 0:1.8.23-10.el7_9.3 will be an update > ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-19.el7_9 will be > updated > ---> Package xorg-x11-server-Xvfb.x86_64 0:1.20.4-21.el7_9 will be an > update > ---> Package xorg-x11-server-common.x86_64 0:1.20.4-19.el7_9 will be > updated > ---> Package xorg-x11-server-common.x86_64 0:1.20.4-21.el7_9 will be > an update > --> Finished Dependency Resolution > --> Running transaction check > ---> Package kernel.x86_64 0:3.10.0-1160.45.1.el7 will be erased > ---> Package kernel-devel.x86_64 0:3.10.0-1160.45.1.el7 will be erased > --> Finished Dependency Resolution > > Dependencies Resolved > > 
================================================================================ > Package Arch Version > Repository > > Size > ================================================================================ > Installing: > kernel x86_64 3.10.0-1160.83.1.el7 > updates 52 M > kernel-devel x86_64 3.10.0-1160.83.1.el7 > updates 18 M > Updating: > bind-export-libs x86_64 32:9.11.4-26.P2.el7_9.13 > updates 1.1 M > bind-libs x86_64 32:9.11.4-26.P2.el7_9.13 > updates 158 k > bind-libs-lite x86_64 32:9.11.4-26.P2.el7_9.13 > updates 1.1 M > bind-license noarch 32:9.11.4-26.P2.el7_9.13 > updates 92 k > bind-utils x86_64 32:9.11.4-26.P2.el7_9.13 > updates 262 k > dkms noarch 3.0.10-1.el7 epel > 85 k > httpd x86_64 2.4.6-98.el7.centos.6 > updates 2.7 M > httpd-tools x86_64 2.4.6-98.el7.centos.6 > updates 94 k > java-1.8.0-openjdk x86_64 1:1.8.0.362.b08-1.el7_9 > updates 317 k > java-1.8.0-openjdk-headless x86_64 1:1.8.0.362.b08-1.el7_9 > updates 33 M > kernel-headers x86_64 3.10.0-1160.83.1.el7 > updates 9.1 M > kernel-tools x86_64 3.10.0-1160.83.1.el7 > updates 8.2 M > kernel-tools-libs x86_64 3.10.0-1160.83.1.el7 > updates 8.1 M > python-perf x86_64 3.10.0-1160.83.1.el7 > updates 8.2 M > sudo x86_64 1.8.23-10.el7_9.3 > updates 844 k > xorg-x11-server-Xvfb x86_64 1.20.4-21.el7_9 > updates 857 k > xorg-x11-server-common x86_64 1.20.4-21.el7_9 > updates 57 k > Removing: > kernel x86_64 3.10.0-1160.45.1.el7 > @updates 64 M > kernel-devel x86_64 3.10.0-1160.45.1.el7 > @updates 38 M > > Transaction Summary > ================================================================================ > Install 2 Packages > Upgrade 17 Packages > Remove 2 Packages > > Total size: 144 M > Is this ok [y/d/N]: y > Downloading packages: > Running transaction check > Running transaction test > > Transaction check error: > installing package python-perf-3.10.0-1160.83.1.el7.x86_64 needs > 23MB on the / filesystem > installing package sudo-1.8.23-10.el7_9.3.x86_64 needs 26MB on the / > filesystem > installing package kernel-3.10.0-1160.83.1.el7.x86_64 needs 106MB on > the / filesystem > installing package bind-export-libs-32:9.11.4-26.P2.el7_9.13.x86_64 > needs 109MB on the / filesystem > > Error Summary > ------------- > Disk Requirements: > At least 109MB more space needed on the / filesystem. > > [root at dogmeat ~]# df -h > Filesystem Size Used Avail Use% Mounted > on > devtmpfs 3.8G 0 3.8G 0% /dev > tmpfs 3.9G 148K 3.9G 1% /dev/shm > tmpfs 3.9G 11M 3.8G 1% /run > tmpfs 3.9G 0 3.9G 0% > /sys/fs/cgroup > /dev/mapper/centos_ba--bog--v-root 41G 40G 355M 100% / > /dev/sda1 497M 346M 151M 70% /boot > /dev/mapper/centos_ba--bog--v-home 20G 99M 20G 1% /home > tmpfs 779M 0 779M 0% > /run/user/0 > > > Links: > ------ > [1] https://wiki.gnome.org/Apps/DiskUsageAnalyzer > [2] http://mirror.csclub.uwaterloo.ca > [3] http://ftp.cse.buffalo.edu > [4] http://mirror.xenyth.net > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From brian2 at groupbcl.ca Mon Feb 6 00:36:53 2023 From: brian2 at groupbcl.ca (Brian Lowe) Date: Mon, 06 Feb 2023 00:36:53 -0600 Subject: [RndTbl] Best ways to find where disk space is being used? In-Reply-To: References: Message-ID: <8868623.9zINWIitvN@haremya.renyamon.net> On Sunday, February 5, 2023 2:56:04 P.M. CST Chris Audet wrote: > I've got a fairly long lived CentOS server that stubbornly stopped > installing updates because the HDD is full. 
> > Can someone share their favourite way to determine where disk space is > being used up on a system? > > For example, on Windows I'd use Wiztree/Treesize/Windirstat. On Linux > desktop I've been using Gnome Disk Usage Analyzer (aka Baobab) > . > > But I'm not sure what the best solutions are in cases where there's no GUI > available. I could always mount / over SSH and use Baobab to crawl the > remote filesystem, but that seems less than optimal ? I use this method from the command line. As root, `cd /` and issue the following command: find . -maxdepth 1 -type d 2>&1 -print0 | grep -zv '^\.$' | xargs -0 du -sm | sort -rn | more The directory with the greatest usage appears first. `cd` into it and issue the above command again. Clean directories and large files, then repeat as needed. Two things to note: 1. Some programs in Linux do a trick where they allocate a file and then delete it while keeping the file open. The inode remains busy and the space isn't freed up until the process terminates. The advantage to this is other processes can't open the file to look into its contents. The disadvantage is you can't see the file using 'ls'. However, such files show up in 'lsof' with the tag '(deleted)'. 2. A file system can report "full" if it runs out of inodes. This used to be a problem on old, small systems, but probably isn't any more because file systems today tend to be very large and have loads of spare inodes. The command "df -i" shows the inode counts. Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From athompso at athompso.net Mon Feb 6 06:15:34 2023 From: athompso at athompso.net (Adam Thompson) Date: Mon, 6 Feb 2023 12:15:34 +0000 Subject: [RndTbl] Best ways to find where disk space is being used? In-Reply-To: <8868623.9zINWIitvN@haremya.renyamon.net> References: <8868623.9zINWIitvN@haremya.renyamon.net> Message-ID: There's a shell trick I stumbled upon years ago to simplify that: filename globbing will only include directories if you have a trailing slash. So "du -sm */ | sort ..." usually gives the same result as the find pipeline. I have run into situations where it didn't, and this is one 'feature' I've never felt the need to fully explore & understand, so very much YMMV. I also have a vague recollection that the shellglob way also has some corner cases with spaces or control characters in the directory names, that the find (1) approach handles better. Quick'n'dirty, not necessarily "best", but sometimes useful. -Adam Get Outlook for Android ________________________________ From: Roundtable on behalf of Brian Lowe Sent: Monday, February 6, 2023, 00:37 To: Continuation of Round Table discussion Subject: Re: [RndTbl] Best ways to find where disk space is being used? On Sunday, February 5, 2023 2:56:04 P.M. CST Chris Audet wrote: > I've got a fairly long lived CentOS server that stubbornly stopped > installing updates because the HDD is full. > > Can someone share their favourite way to determine where disk space is > being used up on a system? > > For example, on Windows I'd use Wiztree/Treesize/Windirstat. On Linux > desktop I've been using Gnome Disk Usage Analyzer (aka Baobab) > . > > But I'm not sure what the best solutions are in cases where there's no GUI > available. I could always mount / over SSH and use Baobab to crawl the > remote filesystem, but that seems less than optimal ? I use this method from the command line. As root, `cd /` and issue the following command: find . 
-maxdepth 1 -type d 2>&1 -print0 | grep -zv '^\.$' | xargs -0 du -sm | sort -rn | more The directory with the greatest usage appears first. `cd` into it and issue the above command again. Clean directories and large files, then repeat as needed. Two things to note: 1. Some programs in Linux do a trick where they allocate a file and then delete it while keeping the file open. The inode remains busy and the space isn't freed up until the process terminates. The advantage to this is other processes can't open the file to look into its contents. The disadvantage is you can't see the file using 'ls'. However, such files show up in 'lsof' with the tag '(deleted)'. 2. A file system can report "full" if it runs out of inodes. This used to be a problem on old, small systems, but probably isn't any more because file systems today tend to be very large and have loads of spare inodes. The command "df -i" shows the inode counts. Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Mon Feb 6 18:12:34 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Mon, 6 Feb 2023 18:12:34 -0600 Subject: [RndTbl] Uh oh, Spectre redux Message-ID: <20230206181234.24ac11e1@pog.tecnopolis.ca> https://gruss.cc/files/prefetch.pdf CVE-2023-0597 The CVE's are empty (reserved) until people install the fixes. Fedora already has a fix, as I'm sure many other distros do. This looks like a bad one. Spectre-like in its scope. Another fundamental flaw in the design of modern CPUs in terms of side-channel attacks. But this one is on address-space knowledge, allowing the defeat of ASLR/SMAP. So in that sense it is not a direct attack vector, but one that could be leveraged by other attacks that can benefit from address space knowledge. (I think? Thoughts?) Yet another fix that is going to slow down our systems. The authors claim "only" up to 5% slowdown. All of these 5% slowdowns from the last 3 years are starting to add up... It's like the atomic bomb: at times one might wish no one had discovered it... :-/ From trevor at tecnopolis.ca Mon Feb 6 18:31:55 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Mon, 6 Feb 2023 18:31:55 -0600 Subject: [RndTbl] Best ways to find where disk space is being used? In-Reply-To: References: Message-ID: <20230206183155.1b79c924@pog.tecnopolis.ca> On 2023-02-05 Chris Audet wrote: > I've got a fairly long lived CentOS server that stubbornly stopped > installing updates because the HDD is full. All great ideas from others here. Essentially all amount to the same thing. I've written my own "du-dirs" ages ago that basically does the same thing. When disk is full I du-dirs in a likely-culprit dir (usually /var or /home) and start there. Then pick the biggest one and cd/du-dirs in that one; repeat. /var/logs is a great cheap/quick place to immediately start hosing stuff when in "we're 100% disk on production" panic mode. ll -Srh is your friend once you're in the dir. Oh ya, and my personal favorite (might not work on super ancient rpm): rpm -qa --queryformat="%10{SIZE}\t%{NAME}\n" | sort -k1,1n followed by dnf remove of the biggest one that doesn't look critical (though never include -y until you see the deps!!). What I mainly wanted to add, though, is an idea to address part of the root cause of your problem: your box probably has enough space for the updates to be installed, just not enough space for that plus the temp space for the rpm files themselves. 
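Going back to the du-dirs idea for a moment: the actual script isn't shown here, so this is only a rough guess at what a "du-dirs"-style helper could look like for anyone who wants the same convenience:

du-dirs() {
    # biggest subdirectories of the given (or current) directory,
    # sizes in MB, largest first, staying on one filesystem
    du -xms "${1:-.}"/*/ 2>/dev/null | sort -nr | head -20
}

Used as "du-dirs /var", then "du-dirs /var/log", and so on down into the biggest offender.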
I finally ran into a situation for a dist-upgrade type of thing with dnf where it was going to be impossible clear up enough space on / to house both all the rpms and the installed files... at least as how dnf was calculating it. Luckily dnf (and maybe yum??) have a temp-dir option where you can redirect all of the rpms to be placed on another fs where you have lots of space. This cuts down the space required for any updates (esp system upgrades) drastically. dnf system-upgrade download --releasever=36 --allowerasing --downloaddir=/some/other/fs/updates-tmp Conceivably one could modify their update-cron-thing to make sure that option was always present (maybe they let you put it in the .conf). Of course, that doesn't solve the root issue of other things slowly filling up your disk, but it would probably postpone the "argh" moment by a few days/weeks. Note: there appears to still be a bug in this option: you must make & specify a subdir of your temp fs area because some guy said --downloaddir=/home and dnf proceeded to update his whole system and then rm -rf /home. Hahahaha. From trevor at tecnopolis.ca Mon Feb 6 18:50:41 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Mon, 6 Feb 2023 18:50:41 -0600 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: References: Message-ID: <20230206185041.1c9be4b9@pog.tecnopolis.ca> On 2023-02-04 Adam Thompson wrote: > > I would try --single-transaction and test; I don't have any > convenient way of testing it right now. Oooh... --single-transaction is perfect!!! I didn't know about that! All my tables are inno. I definitely won't do any create/drop/rename /truncate while backing up. Seriously, the perfect solution and one that actually makes sense. It's just the same as saying START TRANSACTION before a bunch of queries in my code. That transaction gets a view of data frozen in time so you have consistencies even across tables... WITHOUT holding up the rest of the system from doing its normal inserts/updates/selects. You just saved me writing some insert/update workaround asynchronous queue! > If you absolutely need 100% *perfect* self-consistent backups while > the underlying tables are still being written to, you need a In this instance we don't have to be perfectly consistent across tables, though it is always desirable. I'm pretty sure, though, that --single-transaction will provide this perfection. No? I can't see a downside. On 2023-02-05 Brian Lowe wrote: > If the process that updates the database can be stopped momentarily > without inconveniencing users, and your the system is set up with > logical volume management and a few spare GB in the volume group, you > can stop the process, shut down MariaDB, create a snapshot volume > (which is very quick), then restart MariaDB and the process that uses > it. You now have a clean copy of all the MariaDB files on the > snapshot volume. You can copy them to backup storage and dismiss the > snapshot. This is also a very good idea. Turns out the default rackspace RHEL install does use LVM, so we could use this as an option. Shutting down the system for 60s at 05:00 would actually be feasible, at least at present. I'll start with --single-transaction and if that is utopia I'll thank my lucky stars. If not, I'll investigate LVM. Still better than writing an async update queue for this big contentious table. 
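For reference, with that flag the nightly dump ends up looking something like this (a sketch only: whole-database form for brevity, placeholder database name and output path, credentials assumed to come from an option file such as ~/.my.cnf):

mysqldump --single-transaction --quick mydb > /backup/mydb-$(date +%F).sql

--single-transaction opens one consistent InnoDB snapshot at the start of the dump instead of locking each table, and --quick streams rows out instead of buffering whole tables in memory, which matters for the multi-GB table.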
I thought of a possible third option, but would have too big a cost: run a second instance of mysql probably on another box with mysql replication turned on from first box to new box. Then run the backup on box 2. I'm pretty sure the comms between box 1 & 2 are queue-like (in the "binary log") so that box 1 never waits for box 2 to confirm a transaction. So box 2 can lock that table for 2 mins whilst box 1 continues like nothing is going on. Purely theoretical purely based on my not so pure understanding of the theory of mysql mirroring which I've never toyed with in practice. I knew there was a reason I'm a MUUG member! :-) MUUG members for the win. From athompso at athompso.net Mon Feb 6 18:59:55 2023 From: athompso at athompso.net (Adam Thompson) Date: Tue, 7 Feb 2023 00:59:55 +0000 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: <20230206185041.1c9be4b9@pog.tecnopolis.ca> References: <20230206185041.1c9be4b9@pog.tecnopolis.ca> Message-ID: Your third option would work quite nicely, albeit at the cost of quite some complexity. Running two copies of MySQL on the same system is fully supported, and they can run in a replication topology. If you use ZFS or RH's VDO deduping fses, you could do it without taking up nearly as much disk space as you imagine. The secondary copy of MySQL doesn't need many resources, either. And it's trivial to even shut it down completely if you want fs backups instead. Yes, replication is sort of async, but I think you have the option of sync 2-phase commits if you really want them. -Adam Get Outlook for Android ________________________________ From: Trevor Cordes Sent: Monday, February 6, 2023 6:50:41 PM To: Adam Thompson Cc: Continuation of Round Table discussion Subject: Re: [RndTbl] mysql update delays when no rows match, when backup running On 2023-02-04 Adam Thompson wrote: > > I would try --single-transaction and test; I don't have any > convenient way of testing it right now. Oooh... --single-transaction is perfect!!! I didn't know about that! All my tables are inno. I definitely won't do any create/drop/rename /truncate while backing up. Seriously, the perfect solution and one that actually makes sense. It's just the same as saying START TRANSACTION before a bunch of queries in my code. That transaction gets a view of data frozen in time so you have consistencies even across tables... WITHOUT holding up the rest of the system from doing its normal inserts/updates/selects. You just saved me writing some insert/update workaround asynchronous queue! > If you absolutely need 100% *perfect* self-consistent backups while > the underlying tables are still being written to, you need a In this instance we don't have to be perfectly consistent across tables, though it is always desirable. I'm pretty sure, though, that --single-transaction will provide this perfection. No? I can't see a downside. On 2023-02-05 Brian Lowe wrote: > If the process that updates the database can be stopped momentarily > without inconveniencing users, and your the system is set up with > logical volume management and a few spare GB in the volume group, you > can stop the process, shut down MariaDB, create a snapshot volume > (which is very quick), then restart MariaDB and the process that uses > it. You now have a clean copy of all the MariaDB files on the > snapshot volume. You can copy them to backup storage and dismiss the > snapshot. This is also a very good idea. 
Turns out the default rackspace RHEL install does use LVM, so we could use this as an option. Shutting down the system for 60s at 05:00 would actually be feasible, at least at present. I'll start with --single-transaction and if that is utopia I'll thank my lucky stars. If not, I'll investigate LVM. Still better than writing an async update queue for this big contentious table. I thought of a possible third option, but would have too big a cost: run a second instance of mysql probably on another box with mysql replication turned on from first box to new box. Then run the backup on box 2. I'm pretty sure the comms between box 1 & 2 are queue-like (in the "binary log") so that box 1 never waits for box 2 to confirm a transaction. So box 2 can lock that table for 2 mins whilst box 1 continues like nothing is going on. Purely theoretical purely based on my not so pure understanding of the theory of mysql mirroring which I've never toyed with in practice. I knew there was a reason I'm a MUUG member! :-) MUUG members for the win. -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Tue Feb 7 04:02:31 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Tue, 7 Feb 2023 04:02:31 -0600 Subject: [RndTbl] mysql update delays when no rows match, when backup running In-Reply-To: <8e602ffda2547d36bc286ddadd52582c@100percenthelpdesk.com> References: <8e602ffda2547d36bc286ddadd52582c@100percenthelpdesk.com> Message-ID: <20230207040231.2590e8fd@pog.tecnopolis.ca> On 2023-02-04 Scott Toderash wrote: > Assuming you use mysqldump. It does lock tables. If it didn't then > data could change while the backup of that table is happening, > producing unpredictable results. > > You could set up MySQL replication to another machine and run your > backup on that one instead. That scenario would avoid the delay you > are seeing but seems like a lot of trouble for this unless you need > constantly consistent performance. Doh! I didn't see your reply as for some reason it wasn't in the thread. I need to eyeball things more carefully. Glad you thought up the same replication idea that dawned on me in the interim. And yes, I needed to also consider the implications on not doing a full db-wide lock. I think the new transactional method Adam found will achieve the same thing as a db-wide lock without actually doing a db-wide lock: it looks like the xaction will be initiated db-wide (i.e. once for the whole db). From Gilbert.Detillieux at umanitoba.ca Tue Feb 7 15:21:10 2023 From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux) Date: Tue, 7 Feb 2023 15:21:10 -0600 Subject: [RndTbl] Reminder: MUUG Online Meeting, Tonight, Feb 7, 7:30pm (Date Change) -- CheckMK Message-ID: A reminder that the MUUG online meeting will be on BBB this evening... The Manitoba UNIX User Group (MUUG) will be holding its next monthly meeting online, on Tuesday, February 7th, at 7:30pm. Yes, that's a week early, i.e. the FIRST Tuesday of the month... CheckMK Alberto Abrao will present CheckMK, a great platform for monitoring various IT infrastructure components. It has powerful tools to monitor all different kinds of devices that comprises a regular Enterprise IT environment. Agents are available for Linux, *nix (AIX, Solaris), *BSD, Windows, VMware, AWS, Azure, Kubernetes, Docker, among others. These can be easily enhanced with plug-ins for custom functionality. It also allows for the monitoring of devices that support SNMP. 
Easy to get started with, but packed with features and infinitely customizable, CheckMK is an excellent choice for the monitoring of any IT environment. *Date Change* Please note the change in meeting date for this month, and for the rest of the current year (at least until the July/August break). We are now meeting on the first Tuesday of each month. This will once again be an online meeting. Stay tuned to our muug.ca home page for the official URL, which will be made available about a half hour before the meeting starts. (Reload the page if you don't see the link, or if there are issues with connecting.) The group now holds its meetings at 7:30pm on the *first* Tuesday of every month from September to June. (There are no meetings in July and August.) Meetings are open to the general public; you don't have to be a MUUG member to attend. For more information about MUUG, and its monthly meetings, check out their web server: https://muug.ca/ Help us promote this month's meeting, by putting this poster up on your workplace bulletin board or other suitable public message board, or linking to it on social media: https://muug.ca/meetings/MUUGmeeting.pdf -- Gilbert E. Detillieux E-mail: Manitoba UNIX User Group Web: http://muug.ca/ From eh at eduardhiebert.com Thu Feb 9 21:32:15 2023 From: eh at eduardhiebert.com (eh at eduardhiebert.com) Date: Thu, 09 Feb 2023 19:32:15 -0800 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: <20230128030901.3c141ebb@pog.tecnopolis.ca> References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> Message-ID: <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> Who would not be better served with more back-ups including offsite redundancies? For good reason many of us have our own vehicles, housing, computers...? However, but a casual read of these exchanges on having one's own NAS is clearly not for the faint hearted. i would also imagine there many others who like Trevor are not found of so-called third party cloud services with all the risks he articulated. Wondering what interest there might be in doing another variant on a Muug type project but specific to creating a Muug type back-up service and mirrored for increased saftey. One system with huge memory would be but a tiny fraction of many of us doing our own. I'm not suggesting our people with know how do this as a labour of love but as a kind of community minded coop. Looking ahead to security needs, I would however suggest data entry security be as tight as access, cause one would not want a rip-van-Winkle poison pill virus to be added for potential nefarious wake-up if and when ever the back-up was used as a restore and have a trogan nested within. Eduard On 28/01/2023 3:09 AM, Trevor Cordes wrote: > People who know me know I *hate* cloud. If this is the cheapest cloud > out there, then it's still too much money *if you are a techie who > knows > what they are doing and likes futzing with hardware*. (For average > Joes, ya ok.) > > Your 5TB use case would appear to equal $25/mo at this place. You can > often find 5TB USB3 drives on blowout for under $100. So in 4 months > you're paying more than the DIY backup solution. The cloud solution > will cost you that fee *forever*. 
> > And with the cloud, all your base belong to them (and NSA, and other > .gov with or without a warrant). And you must trust they are having > the appropriate level of redundancy/protection, and it's not just some > dude with their own external USB3 drive on a Pi!! Will you bet your > life on the data being there when you need it? What's your recourse? > Almost all places will just refund the fee you paid (if that)... can't > sue for the value of lost data. > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From athompso at athompso.net Thu Feb 9 22:17:54 2023 From: athompso at athompso.net (Adam Thompson) Date: Fri, 10 Feb 2023 04:17:54 +0000 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> Message-ID: While I'm no longer on the board, this - and similar ideas - have been floated several times in the past. The capsule summary is, that with MUUG running such a service even on a strict cost-recovery-only, volunteer-only basis, we would not be able to meet, never mind beat, any of the commercial providers on cost or quality. Economies of scale are a massive part of the business model, and we would have all the invariant costs, with none of the incremental revenue to amortize them. The last time I ran the numbers for my own reasons (last year), what I pay Backblaze ~$10/mo for would cost roughly ~$200/month for a local operation, if operated on a "hobby" scale. That's not a typo: scale allows them to reduce total end-user costs by 20-fold and still (presumably) make a profit. Even more extreme ratios apply to public or private cloud hosting. If encrypting your data locally and only then uploading it doesn't meet your privacy requirements, you probably shouldn't be connected to the internet at all... Also, see Tarsnap, an online backup service for the EXTREMELY privacy-conscious. -Adam Get Outlook for Android ________________________________ From: Roundtable on behalf of eh at eduardhiebert.com Sent: Thursday, February 9, 2023 9:32:15 PM To: Continuation of Round Table discussion Subject: Re: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS Who would not be better served with more back-ups including offsite redundancies? For good reason many of us have our own vehicles, housing, computers...? However, but a casual read of these exchanges on having one's own NAS is clearly not for the faint hearted. i would also imagine there many others who like Trevor are not found of so-called third party cloud services with all the risks he articulated. Wondering what interest there might be in doing another variant on a Muug type project but specific to creating a Muug type back-up service and mirrored for increased saftey. One system with huge memory would be but a tiny fraction of many of us doing our own. I'm not suggesting our people with know how do this as a labour of love but as a kind of community minded coop. 
Looking ahead to security needs, I would however suggest data entry security be as tight as access, cause one would not want a rip-van-Winkle poison pill virus to be added for potential nefarious wake-up if and when ever the back-up was used as a restore and have a trogan nested within. Eduard On 28/01/2023 3:09 AM, Trevor Cordes wrote: > People who know me know I *hate* cloud. If this is the cheapest cloud > out there, then it's still too much money *if you are a techie who > knows > what they are doing and likes futzing with hardware*. (For average > Joes, ya ok.) > > Your 5TB use case would appear to equal $25/mo at this place. You can > often find 5TB USB3 drives on blowout for under $100. So in 4 months > you're paying more than the DIY backup solution. The cloud solution > will cost you that fee *forever*. > > And with the cloud, all your base belong to them (and NSA, and other > .gov with or without a warrant). And you must trust they are having > the appropriate level of redundancy/protection, and it's not just some > dude with their own external USB3 drive on a Pi!! Will you bet your > life on the data being there when you need it? What's your recourse? > Almost all places will just refund the fee you paid (if that)... can't > sue for the value of lost data. > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable _______________________________________________ Roundtable mailing list Roundtable at muug.ca https://muug.ca/mailman/listinfo/roundtable -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Fri Feb 10 00:08:08 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Fri, 10 Feb 2023 00:08:08 -0600 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> Message-ID: <20230210000808.678783ca@pog.tecnopolis.ca> On 2023-02-09 eh at eduardhiebert.com wrote: > Wondering what interest there might be in doing another variant on a > Muug type project but specific to creating a Muug type back-up > service and mirrored for increased saftey. One system with huge > memory would be but a tiny fraction of many of us doing our own. I'm > not suggesting our people with know how do this as a labour of love > but as a kind of community minded coop. It's not a bad idea, but there are a couple of sticking points that make it a bit tough for MUUG: 1. Liability for what members put on it. What if they put naughty stuff? Not sure tiny MUUG would get the same protection the big providers do. Even if ultimately protected, any legal fight alone would be disaster. 2. Liability if we lose the data. 3. Liability if the server gets hacked and data is leaked. 4. Trust and need-to-know access levels on the server for board members (the admins). 5. Hard to police that only members are using it. (Pw sharing, etc.) 6. Don't want to irk Les. On the pro side, we do have decent space left still, and it would make a nice benefit for members. My guess is it's not going to happen, unless lawyers and govs cease to exist tomorrow. 
:-) But we will certainly discuss it at a board meeting as a member suggestion. On 2023-02-10 Adam Thompson wrote: > The last time I ran the numbers for my own reasons (last year), what > I pay Backblaze ~$10/mo for would cost roughly ~$200/month for a > local operation, if operated on a "hobby" scale. That's not a typo: > scale allows them to reduce total end-user costs by 20-fold and still > (presumably) make a profit. Even more extreme ratios apply to public > or private cloud hosting. Well, my personal business has run a backup service for local people / businesses for 22 years. And I do it for gobs under $200/mo. But yes, it's over $10 mo: then again it's a service, not a DIY self-managed cloud thing, and includes a free managed firewall/router setup (minus hardware)! My point is, there are affordable options that still keep your data out of Silicon Valley/NSA, even if you don't want to DIY. If someone wanted to do the same idea as a co-op thing, maybe that could work. It might better fit a venue like Skullspace though? > If encrypting your data locally and only then uploading it doesn't > meet your privacy requirements, you probably shouldn't be connected > to the internet at all... Also, see Tarsnap, an online backup > service for the EXTREMELY privacy-conscious. Ah, you're assuming SSL and don't all have NSA backdoors. You're not thinking like a paranoid, Adam! ;-) Personally, my own data is on RAID-6 and is backed-up periodically to encrypted optical media and stored off-site using my own custom software. I consider my data very safe, as it would take a volcano sprouting at Portage & Main or a nuke of all of the Wpg area to hose my data. :-) From eh at eduardhiebert.com Sun Feb 12 20:23:43 2023 From: eh at eduardhiebert.com (eh at eduardhiebert.com) Date: Sun, 12 Feb 2023 18:23:43 -0800 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: <20230210000808.678783ca@pog.tecnopolis.ca> References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> <20230210000808.678783ca@pog.tecnopolis.ca> Message-ID: <30c0f8041b25e17992cbda7095746aae@eduardhiebert.com> Although with fingers crossed I was hoping for better. Nevertheless the facts are always friendly and I thank each of you for your detailed reply. Thanks! The idea of using encryption as part of backups with third parties was a novel new safety application for me but raises concerns like the guy who bought cryptocurrency but lost a huge sum because you know what he forgot? :) Eduard On 2023-02-09 22:08, Trevor Cordes wrote: > On 2023-02-09 eh at eduardhiebert.com wrote: >> Wondering what interest there might be in doing another variant on a >> Muug type project but specific to creating a Muug type back-up >> service and mirrored for increased saftey. One system with huge >> memory would be but a tiny fraction of many of us doing our own. I'm >> not suggesting our people with know how do this as a labour of love >> but as a kind of community minded coop. > > It's not a bad idea, but there are a couple of sticking points that > make it a bit tough for MUUG: > > 1. Liability for what members put on it. What if they put naughty > stuff? Not sure tiny MUUG would get the same protection the big > providers do. 
Even if ultimately protected, any legal fight alone > would > be disaster. > > 2. Liability if we lose the data. > > 3. Liability if the server gets hacked and data is leaked. > > 4. Trust and need-to-know access levels on the server for board members > (the admins). > > 5. Hard to police that only members are using it. (Pw sharing, etc.) > > 6. Don't want to irk Les. > > On the pro side, we do have decent space left still, and it would make > a nice benefit for members. My guess is it's not going to happen, > unless lawyers and govs cease to exist tomorrow. :-) But we will > certainly discuss it at a board meeting as a member suggestion. > > On 2023-02-10 Adam Thompson wrote: >> The last time I ran the numbers for my own reasons (last year), what >> I pay Backblaze ~$10/mo for would cost roughly ~$200/month for a >> local operation, if operated on a "hobby" scale. That's not a typo: >> scale allows them to reduce total end-user costs by 20-fold and still >> (presumably) make a profit. Even more extreme ratios apply to public >> or private cloud hosting. > > Well, my personal business has run a backup service for local people / > businesses for 22 years. And I do it for gobs under $200/mo. But yes, > it's over $10 mo: then again it's a service, not a DIY self-managed > cloud thing, and includes a free managed firewall/router setup (minus > hardware)! My point is, there are affordable options that still keep > your data out of Silicon Valley/NSA, even if you don't want to DIY. > > If someone wanted to do the same idea as a co-op thing, maybe that > could work. It might better fit a venue like Skullspace though? > >> If encrypting your data locally and only then uploading it doesn't >> meet your privacy requirements, you probably shouldn't be connected >> to the internet at all... Also, see Tarsnap, an online backup >> service for the EXTREMELY privacy-conscious. > > Ah, you're assuming SSL and > don't all have NSA backdoors. You're not thinking like a paranoid, > Adam! ;-) > > Personally, my own data is on RAID-6 and is backed-up periodically to > encrypted optical media and stored off-site using my own custom > software. I consider my data very safe, as it would take a volcano > sprouting at Portage & Main or a nuke of all of the Wpg area to hose my > data. 
:-) From cj.audet at gmail.com Mon Feb 13 12:34:18 2023 From: cj.audet at gmail.com (Chris Audet) Date: Mon, 13 Feb 2023 12:34:18 -0600 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: <30c0f8041b25e17992cbda7095746aae@eduardhiebert.com> References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> <20230210000808.678783ca@pog.tecnopolis.ca> <30c0f8041b25e17992cbda7095746aae@eduardhiebert.com> Message-ID: @Eduard In cases like this I think it's helpful to refer back to the MUUG mission statement: https://muug.ca/pub/bylaws/bylaws-nov2011.pdf The objectives of the group shall be (Paraphrased) - Promote free exchange of information and practice of open systems tech in MB - Enhance professional efficiency and effectiveness of its members - Encourage cooperation among industry, gov, edu, and other special interest groups The way I've always interpreted the mission is that the primary objectives are community building, and to support local users by providing a platform to share learnings, and receive feedback on whatever they might be working on. I'm not opposed to expanding the scope of services offered at some point, but since the server admin team are all volunteers I'd be concerned about keeping the amount of ongoing maintenance at a sustainable level. Going down the service provider route isn't impossible, but I'd consider it a significant change of direction for the club ? *Quick aside*: I've been using *Deja-Dup * since switching to desktop Linux mid-2022. This app rocks, it's very simple to set up, and thanks to using *Duplicity * under the hood it allows you to encrypt all backups using a password. Even though my backups are uploaded to Microsoft Onedrive, the actual contents of the backups are not visible. Would 100% recommend giving it a look if you're evaluating cheap offsite backup solutions. *Quick aside 2*: look up tilde communities for an example of a Linux group that's more focused on providing services than community building. Very interesting topic, but a little out of the scope of this discussion so I'll leave it at that for now. On Sun, Feb 12, 2023 at 8:24 PM wrote: > Although with fingers crossed I was hoping for better. Nevertheless the > facts are always friendly and I thank each of you for your detailed > reply. Thanks! > > The idea of using encryption as part of backups with third parties was a > novel new safety application for me but raises concerns like the guy who > bought cryptocurrency but lost a huge sum because you know what he > forgot? :) > > Eduard > > On 2023-02-09 22:08, Trevor Cordes wrote: > > On 2023-02-09 eh at eduardhiebert.com wrote: > >> Wondering what interest there might be in doing another variant on a > >> Muug type project but specific to creating a Muug type back-up > >> service and mirrored for increased saftey. One system with huge > >> memory would be but a tiny fraction of many of us doing our own. I'm > >> not suggesting our people with know how do this as a labour of love > >> but as a kind of community minded coop. > > > > It's not a bad idea, but there are a couple of sticking points that > > make it a bit tough for MUUG: > > > > 1. Liability for what members put on it. What if they put naughty > > stuff? 
Not sure tiny MUUG would get the same protection the big > > providers do. Even if ultimately protected, any legal fight alone > > would > > be disaster. > > > > 2. Liability if we lose the data. > > > > 3. Liability if the server gets hacked and data is leaked. > > > > 4. Trust and need-to-know access levels on the server for board members > > (the admins). > > > > 5. Hard to police that only members are using it. (Pw sharing, etc.) > > > > 6. Don't want to irk Les. > > > > On the pro side, we do have decent space left still, and it would make > > a nice benefit for members. My guess is it's not going to happen, > > unless lawyers and govs cease to exist tomorrow. :-) But we will > > certainly discuss it at a board meeting as a member suggestion. > > > > On 2023-02-10 Adam Thompson wrote: > >> The last time I ran the numbers for my own reasons (last year), what > >> I pay Backblaze ~$10/mo for would cost roughly ~$200/month for a > >> local operation, if operated on a "hobby" scale. That's not a typo: > >> scale allows them to reduce total end-user costs by 20-fold and still > >> (presumably) make a profit. Even more extreme ratios apply to public > >> or private cloud hosting. > > > > Well, my personal business has run a backup service for local people / > > businesses for 22 years. And I do it for gobs under $200/mo. But yes, > > it's over $10 mo: then again it's a service, not a DIY self-managed > > cloud thing, and includes a free managed firewall/router setup (minus > > hardware)! My point is, there are affordable options that still keep > > your data out of Silicon Valley/NSA, even if you don't want to DIY. > > > > If someone wanted to do the same idea as a co-op thing, maybe that > > could work. It might better fit a venue like Skullspace though? > > > >> If encrypting your data locally and only then uploading it doesn't > >> meet your privacy requirements, you probably shouldn't be connected > >> to the internet at all... Also, see Tarsnap, an online backup > >> service for the EXTREMELY privacy-conscious. > > > > Ah, you're assuming SSL and > > don't all have NSA backdoors. You're not thinking like a paranoid, > > Adam! ;-) > > > > Personally, my own data is on RAID-6 and is backed-up periodically to > > encrypted optical media and stored off-site using my own custom > > software. I consider my data very safe, as it would take a volcano > > sprouting at Portage & Main or a nuke of all of the Wpg area to hose my > > data. :-) > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.a.mcgregor at gmail.com Mon Feb 13 13:12:58 2023 From: kevin.a.mcgregor at gmail.com (Kevin McGregor) Date: Mon, 13 Feb 2023 13:12:58 -0600 Subject: [RndTbl] Is there an Muug like community based appetite for alternative to USB3 to Ethernet, to make NAS In-Reply-To: References: <60645df8-9463-4970-a393-78018f1d8104@app.fastmail.com> <1849265B-1391-4BD5-8EFE-82533637B76C@foretell.ca> <5a6402a7-3cbe-4023-bf63-c2dfd9850448@app.fastmail.com> <20230128030901.3c141ebb@pog.tecnopolis.ca> <0d6f2b8b7c47a9a6e5353c0ec56114b8@eduardhiebert.com> <20230210000808.678783ca@pog.tecnopolis.ca> <30c0f8041b25e17992cbda7095746aae@eduardhiebert.com> Message-ID: Another option is to find someone who will host your backup server at their home. You could do the same for them. With encrypted backups, that might cover most personal use cases. 
With many internet plans these days, there is bandwidth to spare. FYI I'm using TrueNAS Core (previously known as FreeNAS) at both ends. On Mon, Feb 13, 2023 at 12:35 PM Chris Audet wrote: > @Eduard In cases like this I think it's helpful to refer back to the MUUG > mission statement: > https://muug.ca/pub/bylaws/bylaws-nov2011.pdf > > The objectives of the group shall be (Paraphrased) > > - Promote free exchange of information and practice of open systems > tech in MB > - Enhance professional efficiency and effectiveness of its members > - Encourage cooperation among industry, gov, edu, and other special > interest groups > > > The way I've always interpreted the mission is that the primary objectives > are community building, and to support local users by providing a platform > to share learnings, and receive feedback on whatever they might be working > on. > > I'm not opposed to expanding the scope of services offered at some point, > but since the server admin team are all volunteers I'd be concerned about > keeping the amount of ongoing maintenance at a sustainable level. Going > down the service provider route isn't impossible, but I'd consider it a > significant change of direction for the club ? > > *Quick aside*: I've been using *Deja-Dup > * since switching to desktop Linux > mid-2022. This app rocks, it's very simple to set up, and thanks to using *Duplicity > * under the hood it > allows you to encrypt all backups using a password. Even though my backups > are uploaded to Microsoft Onedrive, the actual contents of the backups are > not visible. Would 100% recommend giving it a look if you're evaluating > cheap offsite backup solutions. > > *Quick aside 2*: look up tilde communities for > an example of a Linux group that's more focused on providing services than > community building. Very interesting topic, but a little out of the scope > of this discussion so I'll leave it at that for now. > > > On Sun, Feb 12, 2023 at 8:24 PM wrote: > >> Although with fingers crossed I was hoping for better. Nevertheless the >> facts are always friendly and I thank each of you for your detailed >> reply. Thanks! >> >> The idea of using encryption as part of backups with third parties was a >> novel new safety application for me but raises concerns like the guy who >> bought cryptocurrency but lost a huge sum because you know what he >> forgot? :) >> >> Eduard >> >> On 2023-02-09 22:08, Trevor Cordes wrote: >> > On 2023-02-09 eh at eduardhiebert.com wrote: >> >> Wondering what interest there might be in doing another variant on a >> >> Muug type project but specific to creating a Muug type back-up >> >> service and mirrored for increased saftey. One system with huge >> >> memory would be but a tiny fraction of many of us doing our own. I'm >> >> not suggesting our people with know how do this as a labour of love >> >> but as a kind of community minded coop. >> > >> > It's not a bad idea, but there are a couple of sticking points that >> > make it a bit tough for MUUG: >> > >> > 1. Liability for what members put on it. What if they put naughty >> > stuff? Not sure tiny MUUG would get the same protection the big >> > providers do. Even if ultimately protected, any legal fight alone >> > would >> > be disaster. >> > >> > 2. Liability if we lose the data. >> > >> > 3. Liability if the server gets hacked and data is leaked. >> > >> > 4. Trust and need-to-know access levels on the server for board members >> > (the admins). >> > >> > 5. Hard to police that only members are using it. 
(Pw sharing, etc.) >> > >> > 6. Don't want to irk Les. >> > >> > On the pro side, we do have decent space left still, and it would make >> > a nice benefit for members. My guess is it's not going to happen, >> > unless lawyers and govs cease to exist tomorrow. :-) But we will >> > certainly discuss it at a board meeting as a member suggestion. >> > >> > On 2023-02-10 Adam Thompson wrote: >> >> The last time I ran the numbers for my own reasons (last year), what >> >> I pay Backblaze ~$10/mo for would cost roughly ~$200/month for a >> >> local operation, if operated on a "hobby" scale. That's not a typo: >> >> scale allows them to reduce total end-user costs by 20-fold and still >> >> (presumably) make a profit. Even more extreme ratios apply to public >> >> or private cloud hosting. >> > >> > Well, my personal business has run a backup service for local people / >> > businesses for 22 years. And I do it for gobs under $200/mo. But yes, >> > it's over $10 mo: then again it's a service, not a DIY self-managed >> > cloud thing, and includes a free managed firewall/router setup (minus >> > hardware)! My point is, there are affordable options that still keep >> > your data out of Silicon Valley/NSA, even if you don't want to DIY. >> > >> > If someone wanted to do the same idea as a co-op thing, maybe that >> > could work. It might better fit a venue like Skullspace though? >> > >> >> If encrypting your data locally and only then uploading it doesn't >> >> meet your privacy requirements, you probably shouldn't be connected >> >> to the internet at all... Also, see Tarsnap, an online backup >> >> service for the EXTREMELY privacy-conscious. >> > >> > Ah, you're assuming SSL and >> > don't all have NSA backdoors. You're not thinking like a paranoid, >> > Adam! ;-) >> > >> > Personally, my own data is on RAID-6 and is backed-up periodically to >> > encrypted optical media and stored off-site using my own custom >> > software. I consider my data very safe, as it would take a volcano >> > sprouting at Portage & Main or a nuke of all of the Wpg area to hose my >> > data. :-) >> _______________________________________________ >> Roundtable mailing list >> Roundtable at muug.ca >> https://muug.ca/mailman/listinfo/roundtable >> > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cj.audet at gmail.com Wed Feb 15 18:13:59 2023 From: cj.audet at gmail.com (Chris Audet) Date: Wed, 15 Feb 2023 18:13:59 -0600 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 Message-ID: @Alberto just a quick follow up from last meeting, you had suggested installing the official Intel driver to get the most out of this CPU. I installed as per this kb , everything seems OK (it shows up installed because I ran it again as an example, forgot to copy the output first time): [root at what ~]# dnf install intel-media-driver Last metadata expiration check: 2:11:34 ago on Wed 15 Feb 2023 03:49:21 PM. Package intel-media-driver-21.1.3-1.el9.x86_64 is already installed. Dependencies resolved. Nothing to do. Complete! Only thing I was wondering was how to verify that the OS is actually using the newly installed driver. After reboot, the Gnome settings app is reporting "Mesa Intel? Xe Graphics (TGL GT2)", I'm not 100% sure but I think that's what it was displaying before the install as well. 
In any case I'll keep using it and will report back if anything exciting happens. Thanks for the tip ? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2023-02-15 18-07-29-min.png Type: image/png Size: 68878 bytes Desc: not available URL: From trevor at tecnopolis.ca Wed Feb 15 21:38:51 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Wed, 15 Feb 2023 21:38:51 -0600 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 In-Reply-To: References: Message-ID: <20230215213851.7bc5691b@pog.tecnopolis.ca> On 2023-02-15 Chris Audet wrote: > [root at what ~]# dnf install intel-media-driver > Last metadata expiration check: 2:11:34 ago on Wed 15 Feb 2023 > 03:49:21 PM. Package intel-media-driver-21.1.3-1.el9.x86_64 is rpm -ql intel-media-driver-21.1.3-1.el9.x86_64 |grep ko.xz take the string between the last / and the ko.xz and do: lsmod |grep inteldrivermodname (change inteldrivermodname to the string) If you get output, it's loaded, and likely being used. If you want to see if it's really being used and have some fun (read: crash your video) then try rmmod inteldrivermodname (ugly but it shoudn't foobar your fs's or anything... can likely ssh into the box to reboot properly, or use magic-sys-rq keys) Forget GUI tools, GUIs suck! :-) From cj.audet at gmail.com Sat Feb 18 09:33:17 2023 From: cj.audet at gmail.com (Chris Audet) Date: Sat, 18 Feb 2023 09:33:17 -0600 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 In-Reply-To: <20230215213851.7bc5691b@pog.tecnopolis.ca> References: <20230215213851.7bc5691b@pog.tecnopolis.ca> Message-ID: >Forget GUI tools, GUIs suck Based ? Judging on the results from this kb the DRM driver appears to be loaded: bash-5.1$ lspci -k | grep -EA3 'VGA|3D|Display' 0000:00:02.0 VGA compatible controller: Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01) Subsystem: Lenovo Device 3f19 Kernel driver in use: i915 Kernel modules: i915 For completeness, here are the results from the suggested commands. I think I'm missing some of the expected output (possibly because this PC uses Wayland)? I tried to grep for 46, intel, and 2c2141cd33dfa6e4d8f1ad9fdc6cd2d1d380c7. bash-5.1$ rpm -ql intel-media-driver-21.1.3-1.el9.x86_64 /usr/lib/.build-id /usr/lib/.build-id/46 /usr/lib/.build-id/46/2c2141cd33dfa6e4d8f1ad9fdc6cd2d1d380c7 /usr/lib64/dri/iHD_drv_video.so /usr/share/doc/intel-media-driver /usr/share/doc/intel-media-driver/README.md /usr/share/licenses/intel-media-driver /usr/share/licenses/intel-media-driver/LICENSE.md /usr/share/metainfo/intel-media-driver.metainfo.xml bash-5.1$ lsmod | grep i915 i915 3321856 21 i2c_algo_bit 16384 1 i915 intel_gtt 24576 1 i915 drm_buddy 20480 1 i915 drm_dp_helper 159744 1 i915 drm_kms_helper 200704 2 drm_dp_helper,i915 cec 53248 2 drm_dp_helper,i915 ttm 86016 1 i915 drm 622592 14 drm_dp_helper,drm_kms_helper,drm_buddy,i915,ttm video 57344 2 ideapad_laptop,i915 On Wed, Feb 15, 2023 at 9:38 PM Trevor Cordes wrote: > On 2023-02-15 Chris Audet wrote: > > [root at what ~]# dnf install intel-media-driver > > Last metadata expiration check: 2:11:34 ago on Wed 15 Feb 2023 > > 03:49:21 PM. 
Package intel-media-driver-21.1.3-1.el9.x86_64 is > > rpm -ql intel-media-driver-21.1.3-1.el9.x86_64 |grep ko.xz > > take the string between the last / and the ko.xz and do: > > lsmod |grep inteldrivermodname > > (change inteldrivermodname to the string) > > If you get output, it's loaded, and likely being used. > > If you want to see if it's really being used and have some fun (read: > crash your video) then try > > rmmod inteldrivermodname > > (ugly but it shoudn't foobar your fs's or anything... can likely ssh > into the box to reboot properly, or use magic-sys-rq keys) > > Forget GUI tools, GUIs suck! :-) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From athompso at athompso.net Sat Feb 18 10:27:14 2023 From: athompso at athompso.net (Adam Thompson) Date: Sat, 18 Feb 2023 16:27:14 +0000 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 In-Reply-To: References: <20230215213851.7bc5691b@pog.tecnopolis.ca> Message-ID: You?re not seeing any results because that?s not a kernel module. It?s also not an Intel video driver at all ? see below. There is no newer/better intel video driver than what your distro provides ? Intel is 100% explicitly clear about that. The official driver is whatever comes with the kernel, period. See How to Identify & Find Graphics Drivers for Linux* (intel.com). The only supported way to get an updated Intel video device driver is to upgrade to a newer Linux kernel. The device driver is bundled with the kernel, and is version-locked to the kernel. iHD_drv_video is not a newer/better device driver for Intel video cards, it?s the Intel VAAPI driver, only used for video encoding and decoding and a handful of other acceleration tasks. (GitHub - intel/media-driver) There?s a similar driver for (I think) older chips, GitHub - intel/intel-vaapi-driver: VA-API user mode driver for Intel GEN Graphics family. The distro-provided, kernel-bundled, i915 Kernel DRI driver will continue to be used for everything that isn?t a VAAPI function. (There are plenty of other acceleration tasks the i915 driver handles on its, own, I don?t really understand why VAAPI has to be a separate thing that can?t be bundled into i915.ko, maybe it?s a licensing issue?) The test to see if the VAAPI driver is now functioning is to run the vainfo(1) tool. (On Debian, it?s contained in the ?vainfo? package.) While Ubuntu-specific, this page describes checking for VAAPI support in more details: Enable Hardware Video Acceleration (VA-API) For Firefox in Ubuntu 20.04 / 18.04 & Higher | UbuntuHandbook. More VAAPI links are at vaapi (www.freedesktop.org). Anything with an inefficient or slow H.264 implementation (how would this even happen in 2023???) could benefit from VAAPI acceleration, whether encoding, decoding, or transcoding. I?m not sure what else it could accomplish for you. Also remember that Intel accelerated video functions can sometimes be, even when they?re slightly faster, more power-hungry and heat-producing than the equivalent optimized CPU code ? so in laptops or fanless builds, you?re probably better off not using VAAPI at all. Not very much uses VAAPI ? mostly just video tools, including video players, and they have to be built with VAAPI support. One would hope that there?s a decent amount of widget re-use by now, and that e.g. your system-default video player could make use of this capability without any special configuration, but that?s far from guaranteed. 
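To actually run that vainfo check, something along these lines should do it (rough sketch; on Fedora/EL-family systems vainfo usually lives in the libva-utils package, on Debian it's the vainfo package -- package names are from memory):

  $ sudo dnf install libva-utils
  $ vainfo 2>&1 | grep -Ei 'driver version|vaprofile' | head
  # if the intel-media-driver is actually being picked up, the driver version line should mention iHD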
VLC, naturally, supports VAAPI, but they note that using it can be worse than not using it (VLC VAAPI - VideoLAN Wiki); other playback tools may not have the same problem. VAAPI just uses a different set of transistors to accomplish exactly the same task your CPU would normally do in ?software?. If you don?t do H.264 playback or recording, VAAPI is probably pointless for you. The other instructions at RPMFusion?s Multimedia page may be somewhat useful, they add support for a variety of other codecs that your system (excluding VLC, as always?) might not currently support. If you don?t have a legacy collection of files that you can?t currently play, following these instructions isn?t going to change anything for you. I take issue with the last item on that page: randomly installing firmware packages from untrusted 3rd-party repos might improve some aspect of your system, or might break it completely instead ? make sure you know how to back out that change using an emergency boot disk before running that command! I hate that I have to say this, but you probably don?t want to enable RPMFusion on any corporate system without checking with your company?s lawyer(s), or at least VP-level responsible people, first. Their entire existence is founded on shipping RPMs that Red Hat & the Fedora Project (mostly) *can?t* legally or technically include, not *don?t want to* - IMHO that?s a very serious misrepresentation by the RPMFusion project. As a personal user, I loved that there was a source of precompiled packages that followed RMS? ?software wants to be free? ideology, but as a corporate user, it?s? not necessarily toxic, but for sure problematic/questionable. This is totally a YMMV area ? you may work for a small org. that has no chance of ever being on anyone?s target list and therefore doesn?t care, or you might work for some high-profile org. that needs to remain utterly beyond reproach at all times in all things: YMMV, as I said. -Adam From: Roundtable On Behalf Of Chris Audet Sent: Saturday, February 18, 2023 9:33 AM To: Trevor Cordes Cc: Continuation of Round Table discussion Subject: Re: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 >Forget GUI tools, GUIs suck Based ? Judging on the results from this kb the DRM driver appears to be loaded: bash-5.1$ lspci -k | grep -EA3 'VGA|3D|Display' 0000:00:02.0 VGA compatible controller: Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01) Subsystem: Lenovo Device 3f19 Kernel driver in use: i915 Kernel modules: i915 For completeness, here are the results from the suggested commands. I think I'm missing some of the expected output (possibly because this PC uses Wayland)? I tried to grep for 46, intel, and 2c2141cd33dfa6e4d8f1ad9fdc6cd2d1d380c7. 
bash-5.1$ rpm -ql intel-media-driver-21.1.3-1.el9.x86_64 /usr/lib/.build-id /usr/lib/.build-id/46 /usr/lib/.build-id/46/2c2141cd33dfa6e4d8f1ad9fdc6cd2d1d380c7 /usr/lib64/dri/iHD_drv_video.so /usr/share/doc/intel-media-driver /usr/share/doc/intel-media-driver/README.md /usr/share/licenses/intel-media-driver /usr/share/licenses/intel-media-driver/LICENSE.md /usr/share/metainfo/intel-media-driver.metainfo.xml bash-5.1$ lsmod | grep i915 i915 3321856 21 i2c_algo_bit 16384 1 i915 intel_gtt 24576 1 i915 drm_buddy 20480 1 i915 drm_dp_helper 159744 1 i915 drm_kms_helper 200704 2 drm_dp_helper,i915 cec 53248 2 drm_dp_helper,i915 ttm 86016 1 i915 drm 622592 14 drm_dp_helper,drm_kms_helper,drm_buddy,i915,ttm video 57344 2 ideapad_laptop,i915 On Wed, Feb 15, 2023 at 9:38 PM Trevor Cordes > wrote: On 2023-02-15 Chris Audet wrote: > [root at what ~]# dnf install intel-media-driver > Last metadata expiration check: 2:11:34 ago on Wed 15 Feb 2023 > 03:49:21 PM. Package intel-media-driver-21.1.3-1.el9.x86_64 is rpm -ql intel-media-driver-21.1.3-1.el9.x86_64 |grep ko.xz take the string between the last / and the ko.xz and do: lsmod |grep inteldrivermodname (change inteldrivermodname to the string) If you get output, it's loaded, and likely being used. If you want to see if it's really being used and have some fun (read: crash your video) then try rmmod inteldrivermodname (ugly but it shoudn't foobar your fs's or anything... can likely ssh into the box to reboot properly, or use magic-sys-rq keys) Forget GUI tools, GUIs suck! :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Sat Feb 18 23:18:14 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Sat, 18 Feb 2023 23:18:14 -0600 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 In-Reply-To: References: <20230215213851.7bc5691b@pog.tecnopolis.ca> Message-ID: <20230218231814.09a7ff2b@pog.tecnopolis.ca> On 2023-02-18 Adam Thompson wrote: > You?re not seeing any results because that?s not a kernel module. Ya, no .ko.xz, no kernel module in the rpm. Adam nailed it. But we should go back to Chris' main pain point, which you might have missed because he brought it up at the last meeting: (I think) he wants the hardware/software switching that some laptops do between discrete / onboard video auto-switching to work in linux. But now that we're reading all this detail, I'm a bit baffled because Intel doesn't really do a discrete / onboard thing at all, do they? Now, maybe Chris has a Intel-onboard / Nvidia-discrete laptop, which were somewhat common on the high-end in the past? And AFAIK no one ever got those to work with Linux (in X) without having a reboot in between and doing a bunch of driver disabling/etc. They were more a Windows thing. And I think (from other conversations) he doing this because games. Maybe it would be helpful if Chris told us his laptop brand / model, and confirm what the base issue is. P.S. Nice that Intel is trying to do linux drivers "right" (sans the auto-switching). I fight with the nvidia binary akmod issues every few years, and it's a real pain they don't just integrate it all into the kernel. 
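P.P.S. If it helps narrow things down, a rough way to check whether there really are two GPUs and which one is doing the rendering (glxinfo comes from glx-utils on Fedora/EL, mesa-utils on Debian; the DRI_PRIME offload test only means anything if a second GPU actually shows up):

  $ lspci -nn | grep -Ei 'vga|3d|display'           # two hits here = hybrid graphics
  $ glxinfo | grep 'OpenGL renderer'                # whatever renders by default
  $ DRI_PRIME=1 glxinfo | grep 'OpenGL renderer'    # ask Mesa to offload to the other GPU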
From athompso at athompso.net Sat Feb 18 23:32:13 2023 From: athompso at athompso.net (Adam Thompson) Date: Sun, 19 Feb 2023 05:32:13 +0000 Subject: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 In-Reply-To: <20230218231814.09a7ff2b@pog.tecnopolis.ca> References: <20230215213851.7bc5691b@pog.tecnopolis.ca> <20230218231814.09a7ff2b@pog.tecnopolis.ca> Message-ID: Ah, yes, much detail was omitted in the OP :-). Both nVidia and AMD have solutions for that under Linux, but IIRC they barely worked. I think nVidia's was called Optimus, can't recall the AMD name. The VAAPI driver may or may not help his system, but it sure won't do GPU switching! The nVidia binary drivers may work with Wayland, but as of ~12mos ago, the consensus was "just run X", and I can't find anything that says it's officially supported at all. The AMD story is barely even documented... typical. :-/ It was relatively rare in the wild to find switchable AMD GPUs in the first place. As usual, ArchLinux has top-notch documentation on the subject: https://wiki.archlinux.org/title/hybrid_graphics Looks like if you want PRIME under Waytland, you may be pulling a Panasonic: just slightly ahead of your time! -Adam Get Outlook for Android ________________________________ From: Trevor Cordes Sent: Saturday, February 18, 2023 11:18:14 PM To: Adam Thompson Cc: Continuation of Round Table discussion Subject: Re: [RndTbl] intel-media-driver with Intel 1165G7 and Rocky 9 On 2023-02-18 Adam Thompson wrote: > You?re not seeing any results because that?s not a kernel module. Ya, no .ko.xz, no kernel module in the rpm. Adam nailed it. But we should go back to Chris' main pain point, which you might have missed because he brought it up at the last meeting: (I think) he wants the hardware/software switching that some laptops do between discrete / onboard video auto-switching to work in linux. But now that we're reading all this detail, I'm a bit baffled because Intel doesn't really do a discrete / onboard thing at all, do they? Now, maybe Chris has a Intel-onboard / Nvidia-discrete laptop, which were somewhat common on the high-end in the past? And AFAIK no one ever got those to work with Linux (in X) without having a reboot in between and doing a bunch of driver disabling/etc. They were more a Windows thing. And I think (from other conversations) he doing this because games. Maybe it would be helpful if Chris told us his laptop brand / model, and confirm what the base issue is. P.S. Nice that Intel is trying to do linux drivers "right" (sans the auto-switching). I fight with the nvidia binary akmod issues every few years, and it's a real pain they don't just integrate it all into the kernel. -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Wed Feb 22 13:51:07 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Wed, 22 Feb 2023 13:51:07 -0600 Subject: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 Message-ID: <20230222135107.19ee9933@pog.tecnopolis.ca> Oh joy, "password timing" attacks come to SSL. e.g. CVE-2022-4304 Published 2023-02-08T20:15:00 A timing based side channel exists in the OpenSSL RSA Decryption implementation which could be sufficient to recover a plaintext across a network in a Bleichenbacher style attack. 
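Quick way to see whether your box has picked up the fix yet (assuming a Fedora/EL-style distro that records CVEs in the RPM changelog, like the notice below does):

  $ rpm -q openssl                                        # 3.0.8 or newer?
  $ rpm -q --changelog openssl | grep -m1 CVE-2022-4304   # no output likely means the update isn't in yet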
Begin forwarded message: Date: Wed, 22 Feb 2023 11:09:09 +0000 (GMT) From: updates at fedoraproject.org To: package-announce at lists.fedoraproject.org Subject: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 -------------------------------------------------------------------------------- Fedora Update Notification FEDORA-2023-a5564c0a3f 2023-02-22 11:06:32.699863 -------------------------------------------------------------------------------- Name : openssl Product : Fedora 36 Version : 3.0.8 Release : 1.fc36 * Thu Feb 9 2023 Dmitry Belyavskiy - 1:3.0.8-1 - Rebase to upstream version 3.0.8 Resolves: CVE-2022-4203 Resolves: CVE-2022-4304 Resolves: CVE-2022-4450 Resolves: CVE-2023-0215 Resolves: CVE-2023-0216 Resolves: CVE-2023-0217 Resolves: CVE-2023-0286 Resolves: CVE-2023-0401 From Gilbert.Detillieux at umanitoba.ca Wed Feb 22 14:17:53 2023 From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux) Date: Wed, 22 Feb 2023 14:17:53 -0600 Subject: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 In-Reply-To: <20230222135107.19ee9933@pog.tecnopolis.ca> References: <20230222135107.19ee9933@pog.tecnopolis.ca> Message-ID: As if we didn't already have enough issues with OpenSSL, what with buffer overrun vulnerabilities in new/recent code*, and more direct coding flaws (pointer free/dereference and such) that were recently announced**. You'd think with the combined wealth and resources of Alphabet/Google, Apple, and Microsoft, they'd find it in their best collective self-interest to fund a project to replace this garbage with some, you know, actually secure code. Sigh! Gilbert * https://nsfocusglobal.com/openssl-multiple-buffer-overflow-vulnerability-notice/ ** https://www.openssl.org/news/secadv/20230207.txt https://linuxsecurity.com/features/urgent-openssl-security-advisory https://www.lansweeper.com/vulnerability/8-vulnerabilities-in-openssl-could-lead-to-system-crashes/ https://www.ibm.com/support/pages/security-bulletin-multiple-vulnerabilities-openssl-affect-aix (Many of the above do mention the side-channel attack too.) On 2023-02-22 1:51 p.m., Trevor Cordes wrote: > Oh joy, "password timing" attacks come to SSL. > > e.g. CVE-2022-4304 Published 2023-02-08T20:15:00 > A timing based side channel exists in the OpenSSL RSA Decryption > implementation which could be sufficient to recover a plaintext across > a network in a Bleichenbacher style attack. > > > Begin forwarded message: > > Date: Wed, 22 Feb 2023 11:09:09 +0000 (GMT) > From: updates at fedoraproject.org > To: package-announce at lists.fedoraproject.org > Subject: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 > > -------------------------------------------------------------------------------- > Fedora Update Notification > FEDORA-2023-a5564c0a3f > 2023-02-22 11:06:32.699863 > -------------------------------------------------------------------------------- > > Name : openssl > Product : Fedora 36 > Version : 3.0.8 > Release : 1.fc36 > > * Thu Feb 9 2023 Dmitry Belyavskiy - 1:3.0.8-1 > - Rebase to upstream version 3.0.8 > Resolves: CVE-2022-4203 > Resolves: CVE-2022-4304 > Resolves: CVE-2022-4450 > Resolves: CVE-2023-0215 > Resolves: CVE-2023-0216 > Resolves: CVE-2023-0217 > Resolves: CVE-2023-0286 > Resolves: CVE-2023-0401 -- Gilbert Detillieux E-mail: Gilbert.Detillieux at umanitoba.ca Computer Science Web: http://www.cs.umanitoba.ca/~gedetil/ University of Manitoba Phone: 204-474-8161 Winnipeg MB CANADA R3T 2N2 For best CS dept. service, contact . 
From athompso at athompso.net Wed Feb 22 15:12:30 2023 From: athompso at athompso.net (Adam Thompson) Date: Wed, 22 Feb 2023 21:12:30 +0000 Subject: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 In-Reply-To: References: <20230222135107.19ee9933@pog.tecnopolis.ca> Message-ID: Bob Beck et al. from the OpenBSD project already "secured" OpenSSL, with the result being called LibreSSL. It's drop-in compatible for many applications, but does require recompiling. That team did a number of presentations on it, and apparently you can still hear the swearing echoing late at night when it's quiet... The OpenSSL team, however, appear to be rather resistant to help. Serious NIH syndrome. Also they're more focused on preserving backwards compatibility than correctness or security. And also don't respond well to criticism, from what I've seen. All the large orgs you mentioned already have their own OpenSSL-replacement projects in-house, some of them public. None of those are even remotely drop-in replacements, they're re-imagninings of what a secure-connection library should be. -Adam ________________________________ From: Roundtable on behalf of Gilbert Detillieux Sent: February 22, 2023 2:17 PM To: Continuation of Round Table discussion Subject: Re: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 As if we didn't already have enough issues with OpenSSL, what with buffer overrun vulnerabilities in new/recent code*, and more direct coding flaws (pointer free/dereference and such) that were recently announced**. You'd think with the combined wealth and resources of Alphabet/Google, Apple, and Microsoft, they'd find it in their best collective self-interest to fund a project to replace this garbage with some, you know, actually secure code. Sigh! Gilbert * https://nsfocusglobal.com/openssl-multiple-buffer-overflow-vulnerability-notice/ ** https://www.openssl.org/news/secadv/20230207.txt https://linuxsecurity.com/features/urgent-openssl-security-advisory https://www.lansweeper.com/vulnerability/8-vulnerabilities-in-openssl-could-lead-to-system-crashes/ https://www.ibm.com/support/pages/security-bulletin-multiple-vulnerabilities-openssl-affect-aix (Many of the above do mention the side-channel attack too.) On 2023-02-22 1:51 p.m., Trevor Cordes wrote: > Oh joy, "password timing" attacks come to SSL. > > e.g. CVE-2022-4304 Published 2023-02-08T20:15:00 > A timing based side channel exists in the OpenSSL RSA Decryption > implementation which could be sufficient to recover a plaintext across > a network in a Bleichenbacher style attack. 
> > > Begin forwarded message: > > Date: Wed, 22 Feb 2023 11:09:09 +0000 (GMT) > From: updates at fedoraproject.org > To: package-announce at lists.fedoraproject.org > Subject: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 > > -------------------------------------------------------------------------------- > Fedora Update Notification > FEDORA-2023-a5564c0a3f > 2023-02-22 11:06:32.699863 > -------------------------------------------------------------------------------- > > Name : openssl > Product : Fedora 36 > Version : 3.0.8 > Release : 1.fc36 > > * Thu Feb 9 2023 Dmitry Belyavskiy - 1:3.0.8-1 > - Rebase to upstream version 3.0.8 > Resolves: CVE-2022-4203 > Resolves: CVE-2022-4304 > Resolves: CVE-2022-4450 > Resolves: CVE-2023-0215 > Resolves: CVE-2023-0216 > Resolves: CVE-2023-0217 > Resolves: CVE-2023-0286 > Resolves: CVE-2023-0401 -- Gilbert Detillieux E-mail: Gilbert.Detillieux at umanitoba.ca Computer Science Web: http://www.cs.umanitoba.ca/~gedetil/ University of Manitoba Phone: 204-474-8161 Winnipeg MB CANADA R3T 2N2 For best CS dept. service, contact . _______________________________________________ Roundtable mailing list Roundtable at muug.ca https://muug.ca/mailman/listinfo/roundtable -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gilbert.Detillieux at umanitoba.ca Wed Feb 22 15:37:47 2023 From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux) Date: Wed, 22 Feb 2023 15:37:47 -0600 Subject: [RndTbl] [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 In-Reply-To: References: <20230222135107.19ee9933@pog.tecnopolis.ca> Message-ID: <3e575e54-7475-f3c2-d675-3bb4f3beebaf@umanitoba.ca> Thanks for the update on LibreSSL. Last time I looked at it, they were still in version 1.x, and still only supported on BSD-based systems. I see they're at version 3.x now. I wonder how much of this is still the case about support on Linux?... https://lwn.net/Articles/841664/ There's also an overwhelming level of "this-is-fine"-ism in the industry, so as long as OpenSSL isn't a complete dumpster fire at the moment, people aren't willing to invest in alternatives, regardless of how drop-in-ready they may be (which is apparently still a debatable point with LibreSSL on Linux, or at least still was 2 years ago, when the above article was written). Longer term, maybe a complete re-imagining is what the industry will need to move forward. Most companies and developers are motivated more by new features than by correctness or security, sadly. Gilbert On 2023-02-22 3:12 p.m., Adam Thompson wrote: > Bob Beck et al. from the OpenBSD project already "secured" OpenSSL, with > the result being called LibreSSL.? It's drop-in compatible for many > applications, but does require recompiling.? That team did a number of > presentations on it, and apparently you can still hear the swearing > echoing late at night when it's quiet... > > The OpenSSL team, however, appear to be rather resistant to help. > Serious NIH syndrome.? Also they're more focused on preserving backwards > compatibility than correctness or security.? And also don't respond well > to criticism, from what I've seen. > > All the large orgs you mentioned already have their own > OpenSSL-replacement projects in-house, some of them public.? None of > those are even remotely drop-in replacements, they're re-imagninings of > what a secure-connection library should be. 
> > -Adam > ------------------------------------------------------------------------ > *From:* Roundtable on behalf of Gilbert > Detillieux > *Sent:* February 22, 2023 2:17 PM > *To:* Continuation of Round Table discussion > *Subject:* Re: [RndTbl] Fw: [SECURITY] Fedora 36 Update: > openssl-3.0.8-1.fc36 > As if we didn't already have enough issues with OpenSSL, what with > buffer overrun vulnerabilities in new/recent code*, and more direct > coding flaws (pointer free/dereference and such) that were recently > announced**. > > You'd think with the combined wealth and resources of Alphabet/Google, > Apple, and Microsoft, they'd find it in their best collective > self-interest to fund a project to replace this garbage with some, you > know, actually secure code. > > Sigh! > > Gilbert > > * > https://nsfocusglobal.com/openssl-multiple-buffer-overflow-vulnerability-notice/ > > ** https://www.openssl.org/news/secadv/20230207.txt > > https://linuxsecurity.com/features/urgent-openssl-security-advisory > > > https://www.lansweeper.com/vulnerability/8-vulnerabilities-in-openssl-could-lead-to-system-crashes/ > > https://www.ibm.com/support/pages/security-bulletin-multiple-vulnerabilities-openssl-affect-aix > ??? (Many of the above do mention the side-channel attack too.) > > On 2023-02-22 1:51 p.m., Trevor Cordes wrote: >> Oh joy, "password timing" attacks come to SSL. >> >> e.g. CVE-2022-4304? Published 2023-02-08T20:15:00 >> A timing based side channel exists in the OpenSSL RSA Decryption >> implementation which could be sufficient to recover a plaintext across >> a network in a Bleichenbacher style attack. >> >> >> Begin forwarded message: >> >> Date: Wed, 22 Feb 2023 11:09:09 +0000 (GMT) >> From: updates at fedoraproject.org >> To: package-announce at lists.fedoraproject.org >> Subject: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 >> >> -------------------------------------------------------------------------------- >> Fedora Update Notification >> FEDORA-2023-a5564c0a3f >> 2023-02-22 11:06:32.699863 >> -------------------------------------------------------------------------------- >> >> Name??????? : openssl >> Product???? : Fedora 36 >> Version???? : 3.0.8 >> Release???? : 1.fc36 >> >> * Thu Feb? 9 2023 Dmitry Belyavskiy - 1:3.0.8-1 >> - Rebase to upstream version 3.0.8 >>??? Resolves: CVE-2022-4203 >>??? Resolves: CVE-2022-4304 >>??? Resolves: CVE-2022-4450 >>??? Resolves: CVE-2023-0215 >>??? Resolves: CVE-2023-0216 >>??? Resolves: CVE-2023-0217 >>??? Resolves: CVE-2023-0286 >>??? Resolves: CVE-2023-0401 -- Gilbert Detillieux E-mail: Gilbert.Detillieux at umanitoba.ca Computer Science Web: http://www.cs.umanitoba.ca/~gedetil/ University of Manitoba Phone: 204-474-8161 Winnipeg MB CANADA R3T 2N2 From trevor at tecnopolis.ca Wed Feb 22 17:02:43 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Wed, 22 Feb 2023 17:02:43 -0600 Subject: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 In-Reply-To: References: <20230222135107.19ee9933@pog.tecnopolis.ca> Message-ID: <20230222170243.2eb57277@pog.tecnopolis.ca> On 2023-02-22 Gilbert Detillieux wrote: > As if we didn't already have enough issues with OpenSSL, what with > buffer overrun vulnerabilities in new/recent code*, and more direct > coding flaws (pointer free/dereference and such) that were recently > announced**. To be fair, "password timing" attacks are a relatively new class of attack vectors. And by new I mean maybe 10-15 years old? 
Many projects are still finding buffer overrun and null-pointer deref bugs 40 years after that class was identified. And the tools to combat timing attacks are still (relatively) in their infancy, in terms of language support and standardized libraries. So programmers have (had) little help. Many will just put their heads in the sand. Even worse, you can find these vulnerabilities in places that aren't readily apparent (like SSL). We all thought "password" when really it's comparing any strings in an auth (or even encryption?) scenario. I remember a few years back when PHP was starting to address this that to solve it immediately in my own projects I had to write custom password comparison code, because it was going to be years before the PHP tools showed up on our production boxes. It was one of the most challenging, and fun, projects I've ever worked on, though I hated the fact I had to waste time on mitigating the minds of autist hackers. The disturbing thing I see in the industry these days is that it's not new bugs people are finding, it's entirely new classes of bugs. Ones that no one really thought of before (a blessing?). Like the Spectre- class gift that will forever keep on giving. And password timing attacks. As we fix those, shudder to think what new class has yet to be discovered... From alberto at abrao.net Wed Feb 22 22:52:46 2023 From: alberto at abrao.net (Alberto Abrao) Date: Wed, 22 Feb 2023 22:52:46 -0600 Subject: [RndTbl] Fw: [SECURITY] Fedora 36 Update: openssl-3.0.8-1.fc36 In-Reply-To: References: <20230222135107.19ee9933@pog.tecnopolis.ca> Message-ID: <91490de5-5b1a-c40e-5d23-1fc2c5dedca9@abrao.net> On 2023-02-22 14:17, Gilbert Detillieux wrote: > You'd think with the combined wealth and resources of Alphabet/Google, > Apple, and Microsoft, they'd find it in their best collective > self-interest to fund a project to replace this garbage with some, you > know, actually secure code. 1) not having to pay for it; and 2) having a scapegoat for stuff that goes sideways. Both sound awful to me, but I am not a CEO for a reason... On 2023-02-22 15:12, Adam Thompson wrote: > The OpenSSL team, however, appear to be rather resistant to help. > Serious NIH syndrome.? Also they're more focused on preserving > backwards compatibility than correctness or security.? And also don't > respond well to criticism, from what I've seen. Amusing, isn't it? Every once in a while someone shows up smearing the OpenBSD developers? for *reasons*, but as far as I can tell they strike a good balance between stability - avoiding changes for the sake of it -? while regularly dropping the dead weight to make things secure and to move forward. A reasonable compromise, if you will. On 2023-02-22 15:37, Gilbert Detillieux wrote: > Longer term, maybe a complete re-imagining is what the industry will > need to move forward.? Most companies and developers are motivated > more by new features than by correctness or security, sadly. Let me present the Schr?dinger's SysAdmin: - If things break, well, it's your fault. You shouldn't have messed with anything, it was working before. Don't fix what isn't broken. - If things work, are you even doing anything? If nothing is breaking, you must be useless. It's hard to sell something that, when done, won't change anything as far as most people are concerned. No new apparent features; instead, potential for disruption and costs. All of that to protect from the *threat* - as in something that may or may not happen - of an attack. 
At best, you can try and argue that something /could have happened/, but didn't. Even if you can prove it, more often than not, someone could easily think you're exaggerating to prove a point. The optimist wishes executives could see the light.? The realist knows that, as long as someone other than themselves can be blamed, more often than not they won't let you do what you must.... until the moment where you /should have done it. /Then, it's /your fault./ Or, instead, they just buy insurance for it, pretend no one could ever have seen it coming, and move along. Yes, some places do see past all the cynicism, and have some accountability. But we would not have landed where we are if that weren't the exception to the rule, so it is what it is. Let's hope it finds a way to go that does not involve a huge BANG. -- Kind regards, Alberto Abrao -------------- next part -------------- An HTML attachment was scrubbed... URL: From trevor at tecnopolis.ca Fri Feb 24 21:26:19 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Fri, 24 Feb 2023 21:26:19 -0600 Subject: [RndTbl] DOCTYPE holy smokes Message-ID: File this under wacky !#%* bug that takes 1 hour to solve and you're #*%@# lucky you solved it... I have some javascript that takes a textarea and turns it into arrays of GSM-7 chars for SMS texting purposes. Based on the nice (FLOSS) online SMS calculator: https://twiliodeved.github.io/message-segment-calculator/ If I input a space (unicode hex code 0x20, same as ascii) in the above page, it'd show up as one GSM-7 codepoint 0x0020. But if I input it in my version of a similar calculator it would always come out as 2 spaces! 0x0020 0x0020 Was I inputting a NBS by accident (which "smart" encoding changes to 2 spaces)? No. Was the js library at fault? Debugging an hour later, probably not. While watching the js debug box in FF I noticed it moaning every page load about the page being loaded in "quirks mode" because of my DOCTYPE first line of all my HTML. With nothing else left to blame it on, I went and changed my DOCTYPE (which is from the first days of my program: 2010ish) to what FF suggests when you click on the quirk (basically eliminating the "transitional" stuff). Boom. Bug goes away. One space is now one space. Doh. Now my curiosity is wondering why a quirk mode needs to double all the spaces to maintain compatibility with something somewhere... That's probably a long story by itself. I'm lucky I even solved this one, as that seemed an unlikely culprit as I've literally never had any issue using my ancient DOCTYPE and IIABDFI. On the downside, I probably just broke my program for all ancient-browser users. Oh well. P.S. GSM-7 char encoding is really really @#*%&ed up. Whoever came up with that travesty should be flogged. From trevor at tecnopolis.ca Sat Feb 25 01:37:09 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Sat, 25 Feb 2023 01:37:09 -0600 Subject: [RndTbl] DOCTYPE holy smokes In-Reply-To: References: Message-ID: <20230225013709.27048e68@pog.tecnopolis.ca> I may have spoken too soon. Thinking I won, I removed all my debugging code and the bug came back. Even with the updated DOCTYPE. Doh. So egg on my face, but for posterity I thought I'd post an update so someone doesn't curse this non-fix 5 years down the road. It appeared the bug was solved because my test input must have had an emoji or something in it. The bug doesn't appear when you have un-smart-able (long story) unicode. At least that's my only guess. I solved it (again?) 
by checking the js library code and it appears they are straight up changing space to two spaces. Uh, ok. That code area is supposed to change non-ascii (i.e. non-0x20) unicode spaces to an ascii space (for a short space) or 2 (or more) ascii spaces (for a long space). But somehow it got a 0x20 in the rule. Or at least FF ^F search " " matched the character? I changed the 0x20 in the rule to 0xa0 and now everything is fixed. And why a single 0xa0 should ever be turned to 2 spaces in the first place is beyond me. Looks like something messed up the js source and changed a 0xa0 to a 0x20. This may be FF... maybe when I saved the js source? If I cut a NBSP from a unicode sample web page and paste it into my form textarea, it always seems to turn into a 0x20! If I type it in place with CTRL-SHIFT then it properly shows up as a 0xa0. If I paste it in from a nano editor where I know for sure it's 0xa0 then it also works. I found some ancient bz's about FF doing bad things with NBSP's when c&p'ing... maybe there's still some bugs in there. Anyhow, it's not a DOCTYPE problem: it's the wrong unicode char in the source file. And since it's just a bloody empty space character you can't really see it when debugging without spitting out hex codes somehow. Fun! P.S. Quirks mode being off is screwing up some of my tables cosmetically... so I guess it really does something after all... From athompso at athompso.net Sun Feb 26 18:07:39 2023 From: athompso at athompso.net (Adam Thompson) Date: Mon, 27 Feb 2023 00:07:39 +0000 Subject: [RndTbl] shell quoting inside $( )? Message-ID: I?m trying to figure out if there?s a way to do this more compactly (for reasons?): T=$(openssl x509 -noout -text -in "$1") EXPD=$(date -d"$(echo "$T" | sed -n 's/^.*Not After : //p')" +%Y%b%d) SUBJ=$(echo "$T" | sed -n 's/^.*Subject: .*CN = //p') I am having two issues: 1. Bash doesn?t happily let me embed double quotes inside a subshell inside a subshell inside double-quotes. All of the double-quotes are required, AFAICT. So doing X=$(??$(??xxx?)?) just doesn?t work because bash doesn?t parse nested double-quotes. But both the inner var and the result can and will have spaces in them. 2. I can?t remember how in shell (bash, in this case) how to bifurcate stdout and have it run to two pipelines. I figure echo is a builtin, so should at least be lighter than re-running the monstrosity that is openssl over and over. I have a vague recollection that this is harder than it sounds, and may not be worth it? Any suggestions on how to solve either issue? I?m sure I?ve just forgotten some obvious technique, hopefully someone can jog my mind. -Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott at 100percent.ninja Sun Feb 26 19:09:38 2023 From: scott at 100percent.ninja (Scott Toderash) Date: Sun, 26 Feb 2023 19:09:38 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: Message-ID: <8858ff1cfef1636150472c3cf26745af@100percent.ninja> Not sure if this is it, but backticks. eg. 
EXPD=$(date -d"$(`$(openssl x509 -noout -text -in "$1")` | sed -n 's/^.*Not After : //p')" +%Y%b%d) On 2023-02-26 18:07, Adam Thompson wrote: > I?m trying to figure out if there?s a way to do this more > compactly (for reasons?): > > T=$(openssl x509 -noout -text -in "$1") > > EXPD=$(date -d"$(echo "$T" | sed -n 's/^.*Not After : //p')" +%Y%b%d) > > SUBJ=$(echo "$T" | sed -n 's/^.*Subject: .*CN = //p') > > I am having two issues: > > * Bash doesn?t happily let me embed double quotes inside a > subshell inside a subshell inside double-quotes. All of the > double-quotes are required, AFAICT. So doing > X=$(??$(??xxx?)?) just doesn?t work because bash > doesn?t parse nested double-quotes. But both the inner var and the > result can and will have spaces in them. > * I can?t remember how in shell (bash, in this case) how to > bifurcate stdout and have it run to two pipelines. I figure echo is a > builtin, so should at least be lighter than re-running the monstrosity > that is openssl over and over. I have a vague recollection that this > is harder than it sounds, and may not be worth it? > > Any suggestions on how to solve either issue? I?m sure I?ve just > forgotten some obvious technique, hopefully someone can jog my mind. > > -Adam > _______________________________________________ > Roundtable mailing list > Roundtable at muug.ca > https://muug.ca/mailman/listinfo/roundtable From trevor at tecnopolis.ca Sun Feb 26 20:23:08 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Sun, 26 Feb 2023 20:23:08 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: Message-ID: <20230226202308.69ee82dc@pog.tecnopolis.ca> I can't see any way to "bifurcate output" in bash without using tee. I'm pretty sure zsh (and probably fish) can do it, but not sure of the syntax there either: $ eval $((openssl x509 -noout -text -in /etc/pki/tls/certs/tecnopolis.ca.crt | tee >(echo -n EXPD="'"`date -d"$(sed -n 's/^.*Not After : //p')" +%Y%b%d`"'" ) >(echo SUBJ="'"`sed -n 's/^.*Subject: .*CN = //p'`"'" ) 1>&2 ) 2>/dev/null) $ echo $SUBJ tecnopolis.ca $ echo $EXPD 2024Feb22 If you can figure out a way to get the vars out of the subshell and into 2 different vars without using the eval $() then you're probably better off. Even though I protect the tainted inputs with '', someone could possibly plant a ' in the SUBJ and thus this is a sec hole. Of course you could eliminate the eval and assign to a var and then run a 2nd command to split them into SUBJ and EXPD, but I was going for a oneliner. (Get rid of the "'" adders then.) I'm going to toy with the idea of using {} subshells which might allow elimination of the eval. Of course this would be much cleaner in a perl oneliner... and if you're already bringing in sed, how much worse is perl? From athompso at athompso.net Sun Feb 26 21:00:56 2023 From: athompso at athompso.net (Adam Thompson) Date: Mon, 27 Feb 2023 03:00:56 +0000 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: <20230226202308.69ee82dc@pog.tecnopolis.ca> References: <20230226202308.69ee82dc@pog.tecnopolis.ca> Message-ID: Found my answer, sort of. Either use mkfiko and tee, or use bash/zsh/ksh88 process substitution a la "cmd >(subcmd1) >(subcmd2)", but I don't see any good way of getting the output from subcmd1/2 into variables as they run in subshells. It would be do-able by piping the whole thing into a "while read X" loop, but that's arguably getting into "the cure is worse than the disease" territory. 
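Another angle I haven't actually tried: keep the reads in the current shell by redirecting from a process substitution, so the variables survive (untested sketch; assumes bash, GNU date, and a new-ish openssl that prints "CN = " with spaces):

  while IFS= read -r line; do
      case "$line" in
          notAfter=*) EXPD=$(date -d "${line#notAfter=}" +%Y%b%d) ;;
          subject=*)  SUBJ="${line##*CN = }" ;;   # crude CN grab, good enough here
      esac
  done < <(openssl x509 -noout -enddate -subject -in "$1")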
> T=$(openssl x509 -noout -text -in "$1") > EXPD=$(date -d"$(echo "$T" | sed -n 's/^.*Not After : //p')" +%Y%b%d) > SUBJ=$(echo "$T" | sed -n 's/^.*Subject: .*CN = //p') Untested as yet, but should work: ( openssl x509 -noout -text -in "$1" >(sed -n 's/^.*Not After : /A /p' | xargs date +%Y%b%d -d) >(sed -n 's/^.*Subject: .*CN = /B /p') ) | while read X; do case $X in A) EXPD="$X" ;; B) SUBJ="$X" ;; esac ; Talk about unreadable, though! This does several things, as I understand it: 1. duplicate /dev/fd/1 (stdout) to, usually, /dev/fd/4 but it doesn't really matter which FD# 2. spawns the first subshell, and passes /dev/fd/1 as that subshell's stdin, and the subshell's stdout as the parent process's stdout 3. spawns the second subshell, and passes the dup'd FD (/dev/fd/4 or whatever it is) as that subshell's stdin, and the subshell's stdout as the parent process's stdout 4. collects all the output If there were an easy way to "promote" shell variables up out of their subshell namespaces without needing `` or $() or read, subshells would be a heck of a lot more useful... -Adam -----Original Message----- From: Trevor Cordes Sent: Sunday, February 26, 2023 8:23 PM To: Adam Thompson Cc: Continuation of Round Table discussion Subject: Re: [RndTbl] shell quoting inside $( )? I can't see any way to "bifurcate output" in bash without using tee. I'm pretty sure zsh (and probably fish) can do it, but not sure of the syntax there either: $ eval $((openssl x509 -noout -text -in /etc/pki/tls/certs/tecnopolis.ca.crt | tee >(echo -n EXPD="'"`date -d"$(sed -n 's/^.*Not After : //p')" +%Y%b%d`"'" ) >(echo SUBJ="'"`sed -n 's/^.*Subject: .*CN = //p'`"'" ) 1>&2 ) 2>/dev/null) $ echo $SUBJ tecnopolis.ca $ echo $EXPD 2024Feb22 If you can figure out a way to get the vars out of the subshell and into 2 different vars without using the eval $() then you're probably better off. Even though I protect the tainted inputs with '', someone could possibly plant a ' in the SUBJ and thus this is a sec hole. Of course you could eliminate the eval and assign to a var and then run a 2nd command to split them into SUBJ and EXPD, but I was going for a oneliner. (Get rid of the "'" adders then.) I'm going to toy with the idea of using {} subshells which might allow elimination of the eval. Of course this would be much cleaner in a perl oneliner... and if you're already bringing in sed, how much worse is perl? From trevor at tecnopolis.ca Sun Feb 26 21:56:54 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Sun, 26 Feb 2023 21:56:54 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: <20230226202308.69ee82dc@pog.tecnopolis.ca> Message-ID: <20230226215654.6c44f3ce@pog.tecnopolis.ca> On 2023-02-27 Adam Thompson wrote: > Found my answer, sort of. Either use mkfiko and tee, or use > bash/zsh/ksh88 process substitution a la "cmd >(subcmd1) >(subcmd2)", That's what I said half an hour ago! Doh I managed to improve mine to eliminate the eval: $ read SUBJ EXPD <<<$(echo $((openssl x509 -noout -text -in /etc/pki/tls/certs/tecnopolis.ca.crt |tee >(date -d"$(sed -n 's/^.*Not After : //p')" +%Y%b%d) >(sed -n 's/^.*Subject: .*CN = //p') 1>&2 ) 2>/dev/null)) $ echo "subj $SUBJ expd $EXPD" subj tecnopolis.ca expd 2024Feb22 > but I don't see any good way of getting the output from subcmd1/2 > into variables as they run in subshells. It would be do-able by > piping the whole thing into a "while read X" loop, but that's > arguably getting into "the cure is worse than the disease" territory. 
Yes, the subshell/command problem is the difficult factor here. My updated example above also uses read, but without a loop. You need the echo and multi-subshells to get the 2 outputs onto 1 line. Here's the funny part: the >() constructs start async ps's and you don't know whose output will come first! Yet no matter what I did the SUBJ always comes out first. I even put sleeps in the >() constructs to try to influence who outputs first, but it didn't matter! I wonder why the output order is the way it is (backwards) and always constant... > Untested as yet, but should work: > > ( openssl x509 -noout -text -in "$1" >(sed -n 's/^.*Not After : /A > /p' | xargs date +%Y%b%d -d) >(sed -n 's/^.*Subject: .*CN = /B /p') ) > | while read X; do case $X in A) EXPD="$X" ;; B) SUBJ="$X" ;; esac ; > Pretty sure you are *forced* to use tee or something like it. You can't just use >() with openssl. >() replaces itself with /dev/fd/X and sets up an async to read from it, which means nothing to openssl. You need tee to do the writing to that fd/X. Ran into that grief when I was working on it. Nice: you are getting around the order issue with A / B, which is smart, but then kind of forces you into the loop. Still one line though. And the loop is no more evil than my read <<< echo hack. > If there were an easy way to "promote" shell variables up out of > their subshell namespaces without needing `` or $() or read, > subshells would be a heck of a lot more useful... I was trying really hard to use my own fd's like 3 & 4 (not the ones used by >()) to get the output out of each >() construct and be able to differentiate them. But it wouldn't work... maybe because the >() is async, and I need to write to each fd in the construct and then read them out of the construct. Dunno. Use of fds 3 & 4 hurts my brain. Ideally the EXPD >() could write out to fd3 and SUBJ to fd4 and an outer shell could then read them. Or maybe that's impossible. From Gilbert.Detillieux at umanitoba.ca Mon Feb 27 16:36:25 2023 From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux) Date: Mon, 27 Feb 2023 16:36:25 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: Message-ID: Rethinking the problem a bit, I came up with the following, which may or may not be an improvement over what others have already suggested... T=$(openssl x509 -noout -text -in "$1" | sed -n 's/^.*Not After : /EXPD=/p;s/^.*Subject: .*CN *= */SUBJ=/p' | sed "s/=\(.*\)/='\1'/") eval $T EXPD=$(date -d"$EXPD" +%Y%b%d) The first command does most of the leg-work, stripping out the crud and leaving us with 2 mostly usable variable definitions. (I've added "*"'s after the spaces surrounding the "CN = " string, since those spaces may not always be there, and were missing for my test case.) The second command sets the two variables (and should be mostly safe, considering the constraints on those fields), and the third one then massages the EXPD variable into the desired date format. Hope this helps! Gilbert On 2023-02-26 6:07 p.m., Adam Thompson wrote: > I?m trying to figure out if there?s a way to do this more compactly (for > reasons?): > > T=$(openssl x509 -noout -text -in "$1") > > EXPD=$(date -d"$(echo "$T" | sed -n 's/^.*Not After : //p')" +%Y%b%d) > > SUBJ=$(echo "$T" | sed -n 's/^.*Subject: .*CN = //p') > > I am having two issues: > > 1. Bash doesn?t happily let me embed double quotes inside a subshell > inside a subshell inside double-quotes.? All of the double-quotes > are required, AFAICT.? So doing X=$(??$(??xxx?)?) 
just doesn?t work > because bash doesn?t parse nested double-quotes.? But both the inner > var and the result can and will have spaces in them. > 2. I can?t remember how in shell (bash, in this case) how to bifurcate > stdout and have it run to two pipelines. ?I figure echo is a > builtin, so should at least be lighter than re-running the > monstrosity that is openssl over and over.? I have a vague > recollection that this is harder than it sounds, and may not be > worth it? > > Any suggestions on how to solve either issue?? I?m sure I?ve just > forgotten some obvious technique, hopefully someone can jog my mind. > > -Adam -- Gilbert Detillieux E-mail: Gilbert.Detillieux at umanitoba.ca Computer Science Web: http://www.cs.umanitoba.ca/~gedetil/ University of Manitoba Phone: 204-474-8161 Winnipeg MB CANADA R3T 2N2 For best CS dept. service, contact . From trevor at tecnopolis.ca Mon Feb 27 23:43:33 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Mon, 27 Feb 2023 23:43:33 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: Message-ID: <20230227234333.56cf0389@pog.tecnopolis.ca> On 2023-02-27 Gilbert Detillieux wrote: > Rethinking the problem a bit, I came up with the following, which may > or may not be an improvement over what others have already > suggested... > > T=$(openssl x509 -noout -text -in "$1" | sed -n 's/^.*Not After : > /EXPD=/p;s/^.*Subject: .*CN *= */SUBJ=/p' | sed "s/=\(.*\)/='\1'/") > eval $T > EXPD=$(date -d"$EXPD" +%Y%b%d) Points lost for not-a-one-liner ;-) But you were smart to eliminate the multi-stream (tee >()) ideas, and thus the async-order issue. And save some forks. NOW, the near-ideal way is to do what you did but see if sed has a regex PCRE /e style exec option which you could then use to run the date from within the regex itself. Quoting might get hairy in that case, but it would allow it to go back to just 1 line! Perl could do it (with the eval, or pipe into read) and I might give that a try for fun. (In any case, you can save a line by putting the eval around line 1, no need for $T.) Does no one know how to get FD 3 & 4 used inside the >()'s and thus be able to pass it (I think) out of the <()'s and capture again in the root-shell context with a read thus eliminating the eval? Is this a pipe dream? something like (pseudocode): (openssl | tee >(sed foo|date >3) >(sed subj >4) ) | read -u3 expd | read -u4 subj ??? From trevor at tecnopolis.ca Tue Feb 28 00:42:55 2023 From: trevor at tecnopolis.ca (Trevor Cordes) Date: Tue, 28 Feb 2023 00:42:55 -0600 Subject: [RndTbl] shell quoting inside $( )? In-Reply-To: References: Message-ID: <20230228004255.5ded8796@pog.tecnopolis.ca> Perl version. Cleaner? No eval. No >(). One line. Relies on read to fill the bash vars. Uses Gilbert's just-one-filter-pass idea. Does the date transform at the very end in perl: would be a sec hole if $e is injected with bad things. Could easily fix with setting the $e regex from . to [-.a-zA-Z0-9]. Could also die in the END if !$e. $ read SUBJ EXPD <<<$(openssl x509 -noout -text -in /etc/pki/tls/certs/tecnopolis.ca.crt | perl -ne '($e)=/^.*Not After : (.*)/ if !$e; ($s)=/^.*Subject: .*CN = (.*)/ if !$s; END { print $s." ".`date -d"$e" +%Y%b%d`}') $ echo s=$SUBJ e=$EXPD s=tecnopolis.ca e=2024Feb22 I like the perl approach because it has the least # of forks, and really the sky is the limit for taint cleaning and sanity checks. Plus I find it more readable than bash, and perl is highly optimized for PCRE so should be pretty fast. 
From athompso at athompso.net  Tue Feb 28 07:14:04 2023
From: athompso at athompso.net (Adam Thompson)
Date: Tue, 28 Feb 2023 13:14:04 +0000
Subject: [RndTbl] shell quoting inside $( )?
In-Reply-To: <20230228004255.5ded8796@pog.tecnopolis.ca>
References: <20230228004255.5ded8796@pog.tecnopolis.ca>
Message-ID: 

Wow, I think I unwittingly invoked Cunningham's Law with my initial post...

Thank you to everyone for the hints, tips, and alternate approaches - I've
learned a few new things through this!

-Adam

From Gilbert.Detillieux at umanitoba.ca  Tue Feb 28 15:28:28 2023
From: Gilbert.Detillieux at umanitoba.ca (Gilbert Detillieux)
Date: Tue, 28 Feb 2023 15:28:28 -0600
Subject: [RndTbl] MUUG Meeting, Tuesday, Mar 7, 7:30pm (Date Change, In-person Meeting) -- KVM without a GUI
Message-ID: <61da1b1f-ca5d-f8ed-e4ed-137d11a753d2@umanitoba.ca>

The Manitoba UNIX User Group (MUUG) will be holding its next monthly meeting
IN PERSON, on Tuesday, March 7th, at 7:30pm.  Yes, that's a week early, i.e.
the FIRST Tuesday of the month...

KVM without a GUI - Have you lost your head?

Linux KVM (Kernel-based Virtual Machine) is a full virtualization solution
for Linux, using hardware-accelerated virtualization extensions to provide
near-physical performance.  New users of KVM may find the command-line
interface daunting and difficult to understand.  Even most KVM tutorials
online rely on the GUI to do the initial installation and setup of your
virtual machines, but what if we don't have a GUI?  As Linux admins, we are
accustomed to doing most of our work in a remote terminal; most of our
servers don't even have a keyboard or monitor plugged in, and are located in
remote data centers.  In this presentation, Wyatt Zacharias will cover the
basics of how to use the KVM command-line tools to create and set up a new
virtual machine from scratch.

Where to find the Meeting:

Fortress Software Inc., 350 Keewatin St -- Unit #2

We have a new in-person meeting location now!  Brad Vokey has graciously let
us use his work office for our next in-person meeting.  The meeting room will
be open by 7:00 pm, with the actual meeting starting at 7:30 pm.
If driving, enter the lot using the northeast-most entrance and drive around
to the southwest corner of the building (see the route in the map detail on
the poster linked below).  You can use any of the free, ample, and safe
parking spots that say "reserved" in front of units #1 through #4 before
entering unit #2.

Bus stops #30814 and #30880 (route 77) are only 150 meters away.  The last
bus leaves for Polo Park at 10:15 pm and for Garden City at 10:31 pm.  Logan
Ave. bus routes #19, #26, and #27 are a 600-meter (8-minute) walk to the
south.

For those unable or preferring not to attend in person, the meeting will also
be available online, using BBB as usual.  Stay tuned to our muug.ca home page
for the official URL, which will be made available about a half hour before
the meeting starts.  (Reload the page if you don't see the link, or if there
are issues with connecting.)

*Date Change*

Please note the change in meeting date for this month, and for the rest of
the current year (at least until the July/August break).  We are now meeting
on the first Tuesday of each month.

The group now holds its meetings at 7:30pm on the *first* Tuesday of every
month from September to June.  (There are no meetings in July and August.)
Meetings are open to the general public; you don't have to be a MUUG member
to attend.  For more information about MUUG, and its monthly meetings, check
out their web server:

https://muug.ca/

Help us promote this month's meeting, by putting this poster up on your
workplace bulletin board or other suitable public message board, or linking
to it on social media:

https://muug.ca/meetings/MUUGmeeting.pdf

-- 
Gilbert E. Detillieux      E-mail: 
Manitoba UNIX User Group   Web:    http://muug.ca/