tag:blogger.com,1999:blog-78002049918230048272024-03-28T21:13:13.218+00:00Random Bits from BoredomRayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.comBlogger123125tag:blogger.com,1999:blog-7800204991823004827.post-85656558338428583502024-02-15T20:20:00.005+00:002024-03-28T14:34:01.702+00:00Halide Linux Digital Workflow with RawTherapeePhone cameras have improved steadily over the last decade, but one issue for serious photographers has been obtaining full control of their camera settings and then a RAW image for improved processing.
The recent iPhone Pro Max models have Apple ProRAW, which can be enabled by default in the iOS camera, but if you have a non-Pro Max model you will need a specific camera app to make the most of the camera sensor.
The <a href=https://halide.cam>Halide MkII</a> app has been a very good camera app that gives us RAW images - coupled with the open-source RAW editor <a href=https://www.rawtherapee.com>RawTherapee</a>, we have, in my opinion, the basis for an acceptable iPhone digital workflow.
<a name='more'></a>
<h3>RawTherapee</h3>
RawTherapee is a RAW processor and, like many RAW processors, generates sidecar files containing the edits associated with each file; it uses <i>profiles</i> to control how a RAW file is rendered when first seen. There exist two types of profile: <i>auto-matched tone curve</i> (AMTC) and <i>standard film curve</i>, with variations for low/medium/high ISO equivalents - the former is noted to <i>automatically adjust [settings] to match the look of the embedded JPEG thumbnail</i>, which is a good starting point.<br>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie2AeSbFw_O2cFkA97_Y1ixNyst3h3dinSHmKtnX6F4ODbbBq2gwFlYx_Wdc6_C_4i5y1LEjD3skjR6CBeWqjozL4QRxf5h7A9nPWIDR44Xm9PI14H8vsM79rMqnToUYx9YkZvbYpLOtLs4tKIlYI_ykUE1sZA3cb4Tqg2pxN4yKZjh37ZlqHBVQ/s1600/flat-basic.jpg"><img width=95% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie2AeSbFw_O2cFkA97_Y1ixNyst3h3dinSHmKtnX6F4ODbbBq2gwFlYx_Wdc6_C_4i5y1LEjD3skjR6CBeWqjozL4QRxf5h7A9nPWIDR44Xm9PI14H8vsM79rMqnToUYx9YkZvbYpLOtLs4tKIlYI_ykUE1sZA3cb4Tqg2pxN4yKZjh37ZlqHBVQ/s1600/flat-basic.jpg"/></a>
<sup>Halide/iOS, WB corrected with AMTC only, before/after minor adjustments</sup><br>
For <a href=http://rawpedia.rawtherapee.com/Editor#Eek.21_My_Raw_Photo_Looks_Different_than_the_Camera_JPEG>Halide RAW files, even at low ISO, the images render fairly flat even with AMTC</a> - I find that <i>Exposure -> Saturation</i> needs to be set in the 15-30 range to provide a good basis for processing. For comparison, Nikon NEFs with AMTC require much smaller saturation boosts to get close to the out-of-camera (OoC) JPEG rendering.<br>
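For reference, these edits live in the per-image <code>.pp3</code> sidecar as plain INI-style entries - a minimal fragment (surrounding keys omitted, and exact names may vary between RawTherapee versions, so treat this as illustrative):

```
[Exposure]
Saturation=25
```

Bumping the value here by hand (or in a saved profile) is equivalent to moving the Saturation slider in the Exposure tool.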
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHPEnPgMV07XbITxP-MesvTl5NYFrzW66DJ15iEbACVM1ai7Xy2buSjVUFpT5tuahI1cLf5SpZT-gUGV-ilc3Dr9IsKNKGBZAKn3M3sRYjrp8gMp5sh7BD288fg0yRqSQ-ZVjmZ95rpkq0X91HimqdcUdUalclfEIXQDbolU3rjBMkmCSB15wwDA/s1600/nikon-raw.jpg"><img width=95% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHPEnPgMV07XbITxP-MesvTl5NYFrzW66DJ15iEbACVM1ai7Xy2buSjVUFpT5tuahI1cLf5SpZT-gUGV-ilc3Dr9IsKNKGBZAKn3M3sRYjrp8gMp5sh7BD288fg0yRqSQ-ZVjmZ95rpkq0X91HimqdcUdUalclfEIXQDbolU3rjBMkmCSB15wwDA/s1600/nikon-raw.jpg"/></a>
<sup>Nikon default rendering with AMTC adjustment only</sup><br>
<h4>Noise</h4>
Given the iPhone sensor size, anything above base ISO seems to have noticeable noise artifacts, but these can be cleaned up using the LMMSE or IGV <i>Demosaicing</i> algorithms and some minor noise reduction.<br>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxmDVnj5VIr6wg6vutC-wceMtO1aYsS7D0jRp92cYCt9vN8FarhaNwmf-HWW_kFaMQezMdkKPO-Tgn9E4OfP0AmQ4yqnhWqFLNODVVds56Z5QO7D6SQuBN3nv0tylzEoXIGTrd56CmX-DNDtX5pRLLUCs8GdhIfoNnDxvgLM_UQOgJyUyVxP7xmg/s1600/noise.jpg"><img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxmDVnj5VIr6wg6vutC-wceMtO1aYsS7D0jRp92cYCt9vN8FarhaNwmf-HWW_kFaMQezMdkKPO-Tgn9E4OfP0AmQ4yqnhWqFLNODVVds56Z5QO7D6SQuBN3nv0tylzEoXIGTrd56CmX-DNDtX5pRLLUCs8GdhIfoNnDxvgLM_UQOgJyUyVxP7xmg/s1600/noise.jpg"/></a>
<sup>Halide RAW ISO200, demosaic algorithm LMMSE with minor cleanup (top) - note the noise on the PCB on the right</sup>
<h4>Key Processing</h4>
The key processing elements that I find myself adjusting on each image - and which form the basis of most of my processing profiles, starting from the bundled AMTC profile - are:
<ul>
<li>Exposure
<ul>
<li>Saturation</li>
<li>Shadows/Highlights - Highlights</li>
</ul>
</li>
<li>Detail
<ul>
<li>Sharpening</li>
<li>Local Contrast - Darkness/Lightness levels: 30-40</li>
<li>Noise Reduction - Luminance: 30-70, Detail Recovery: 60-100</li>
</ul>
</li>
<li>Raw
<ul>
<li>Demosaicing - AMaZE (low ISO), LMMSE or IGV (med-high ISO)</li>
<li>Capture Sharpening - Contrast Threshold</li>
</ul>
</li>
</ul>
This can be saved as your own profile and selected as appropriate.
<h4>Dynamic Profiles</h4>
Obviously having to add all adjustments to each RAW file is suboptimal even with saved profiles. However, RawTherapee provides <i>dynamic profiles</i>, which apply specific profiles based on file metadata, such as camera model or lens. The default profile selection is controlled by: <i>Settings -> Image Processing -> For raw photos -> (Dynamic)</i>:<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi0vv0cDedfgOCbKboNs5NPJPxUUbmCaJvbOfeEor6efSylAnm-b2BSA_-KipLbvJFFY99tIV1xWkbfflLWssyVNOphgYPgiiGzb0afXlMiLN70fPBoAr1wTEriIJ4lWcKpEyiNZu87sQt87wFNxAHywdB3FZmQ8W9WaTp6hXdYykUMSGMEh8R6g/s1600/dynamic%20profiles.png"/>
Ensure that the <i>auto matched tone curve</i> button is reselected on each RAW image.
<h4>RAW generated images</h4>
Once the output files (jpeg etc) are generated, you may want to <a href=https://whatdoineed2do.blogspot.com/2011/02/embedded-in-raw.html>re-embed the processed files into the RAW files via <code>exiftool</code></a>. A useful script to update a set of corresponding <code>jpg</code> and <code>dng</code> files: <code>for i in *.jpg; do dng-preview-upd "${i%.jpg}.DNG"; done</code>
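Extension case can trip the one-liner up (Halide writes <code>.DNG</code> while other tools may lowercase it); a slightly more defensive sketch - the <code>dng-preview-upd</code> call is only echoed here so the pairing logic can be tested in isolation:

```shell
# demo files: a.jpg has a raw partner, b.jpg does not
touch a.jpg a.DNG b.jpg
for i in *.jpg; do
  dng="${i%.*}.DNG"
  [ -f "$dng" ] || dng="${i%.*}.dng"               # tolerate lowercase extension
  [ -f "$dng" ] || { echo "skip $i: no raw file"; continue; }
  echo "dng-preview-upd \"$dng\" \"$i\""           # replace echo with the real call
done
# -> dng-preview-upd "a.DNG" "a.jpg"
#    skip b.jpg: no raw file
```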
<div class="code">
cat > dng-preview-upd << 'EOF'
#!/bin/bash
ARG0=$(basename "$0")
function usage() {
echo "usage: $ARG0 <dng> <jpg preview>"
}
[ $# -ne 2 -a $# -ne 1 ] && usage && exit 1
EXIFTOOL=${DNG_PREVIEW_UPDATE_EXIFTOOL:-/usr/local/Image-ExifTool-12.70/exiftool}
[ ! -x "$EXIFTOOL" ] && echo "$ARG0: no such executable '$EXIFTOOL'" && exit 1
PREVIEW=$2
DNG=$1
[ ! -f "$DNG" ] && echo "$ARG0: no such file '$DNG'" && exit 1
# 12.70 and below cannot add a preview image if one doesn't exist
HAS_PREVIEW=$($EXIFTOOL -PreviewImageLength "$DNG")
[ -z "$HAS_PREVIEW" ] && echo "$ARG0: no current preview image, unable to process '$DNG' - try 'dnglab convert -d --dng-preview true INPUT.dng OUTPUT.dng' to rebuild" && exit 1
if [ -z "${PREVIEW}" ]; then
PREVIEW="${DNG%.*}.jpg"
if [ ! -f "$PREVIEW" ]; then
PREVIEW="${DNG%.*}.JPG"
if [ ! -f "$PREVIEW" ]; then
usage
exit 2;
fi
fi
fi
if [ "$(identify -quiet -format "%m\n" "${PREVIEW}")" != "JPEG" ]; then
echo "$ARG0: preview, ${PREVIEW}, is not JPEG"
exit 2
fi
DPU_VERBOSE="-q"
if [ ! -z "$DNG_PREVIEW_UPDATE_VERBOSE" ]; then
DPU_VERBOSE="-v2"
fi
DPU_OVERWRITE_ORIG="-overwrite_original"
if [ ! -z "$DNG_PREVIEW_UPDATE_KEEP" ]; then
DPU_OVERWRITE_ORIG=""
fi
# checks for broken DNGs from halide that put preview images in a different place
${EXIFTOOL} -if '$SubIFD1:SubFileType eq 1' -b -w1 ${DNG};
if [ $? -eq 0 ]; then
exec $EXIFTOOL \
${DPU_OVERWRITE_ORIG} ${DPU_VERBOSE} \
"-previewImage<=${PREVIEW}" \
-tagsfromfile "${PREVIEW}" \
"-subifd1:imagewidth<imagewidth" "-subifd1:imageheight<imageheight" "-subifd1:rowsperstrip<imageheight" \
"${DNG}"
else
exec $EXIFTOOL \
${DPU_OVERWRITE_ORIG} ${DPU_VERBOSE} \
"-previewImage<=${PREVIEW}" \
-tagsfromfile "${PREVIEW}" \
"-ifd0:imagewidth<imagewidth" "-ifd0:imageheight<imageheight" "-ifd0:rowsperstrip<imageheight" \
"${DNG}"
fi
EOF
chmod +x dng-preview-upd
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-88995084331958515552023-06-23T08:30:00.001+01:002023-06-23T13:56:29.353+01:00Valgrind: noise reductionEnsuring your delivered code is free of memory leaks is a standard concern for C/C++ developers, and <code>valgrind</code> is a great tool to assist. One problem we can run into is extra noise from system/non-project libraries masking our own issues. Refocussing is relatively straightforward:
<img src=https://valgrind.org/images/st-george-dragon.png>
<a name='more'></a>
<a href=https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress>https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress</a>
<div class="code">
$ valgrind <b>--gen-suppressions=all</b> ./audiotag ../tests/test.mp3
==17943== Invalid read of size 32
==17943== at 0x641D5D9: __wmemcmp_avx2_movbe (in /usr/lib64/libc.so.6)
==17943== by 0x48F1785: TagLib::String::operator<(TagLib::String const&) const (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x48E533B: ??? (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x48F7E81: TagLib::PropertyMap::operator[](TagLib::String const&) (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x492052E: TagLib::Tag::properties() const (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x40A5FA: _properties<TagLib::Tag> (Meta.h:366)
==17943== by 0x40A5FA: AudioTag::MetaMP3::properties(TagLib::Tag const&) const (Meta.cc:473)
==17943== by 0x41041F: AudioTag::MetaOutJson::out(std::ostream&, AudioTag::File const&) (MetaOut.cc:122)
==17943== by 0x410806: execute (Ops.h:32)
==17943== by 0x410806: AudioTag::Ops::execute(AudioTag::File&) const (Ops.cc:23)
==17943== by 0x40779E: main (audiotag.cc:570)
==17943== Address 0xf6ea210 is 0 bytes inside a block of size 28 alloc'd
==17943== at 0x4844FF5: operator new(unsigned long) (vg_replace_malloc.c:422)
==17943== by 0x48E9C79: TagLib::String::upper() const (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x48F7E69: TagLib::PropertyMap::operator[](TagLib::String const&) (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x49204DE: TagLib::Tag::properties() const (in /usr/lib64/libtag.so.1.18.0)
==17943== by 0x40A5FA: _properties<TagLib::Tag> (Meta.h:366)
==17943== by 0x40A5FA: AudioTag::MetaMP3::properties(TagLib::Tag const&) const (Meta.cc:473)
==17943== by 0x41041F: AudioTag::MetaOutJson::out(std::ostream&, AudioTag::File const&) (MetaOut.cc:122)
==17943== by 0x410806: execute (Ops.h:32)
==17943== by 0x410806: AudioTag::Ops::execute(AudioTag::File&) const (Ops.cc:23)
==17943== by 0x40779E: main (audiotag.cc:570)
==17943== <b>
{
&lt;insert_a_suppression_name_here&gt;
Memcheck:Addr32
fun:__wmemcmp_avx2_movbe
fun:_ZNK6TagLib6StringltERKS0_
obj:/usr/lib64/libtag.so.1.18.0
fun:_ZN6TagLib11PropertyMapixERKNS_6StringE
fun:_ZNK6TagLib3Tag10propertiesEv
fun:_properties<TagLib::Tag>
fun:_ZNK8AudioTag7MetaMP310propertiesERKN6TagLib3TagE
fun:_ZN8AudioTag11MetaOutJson3outERSoRKNS_4FileE
fun:execute
fun:_ZNK8AudioTag3Ops7executeERNS_4FileE
fun:main
}
</b>
...
</div>
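The generated stanza can usually be trimmed before use: valgrind suppression files accept <code>...</code> as a frame wildcard and shell-style <code>*</code> patterns in <code>obj:</code> lines, so a shorter, library-version-agnostic rule (a sketch based on the stack above) would be:

```
{
   rule_taglib_propertymap
   Memcheck:Addr32
   fun:__wmemcmp_avx2_movbe
   ...
   obj:*/libtag.so.*
}
```

Shorter rules are less likely to break when the library is rebuilt with a different inlining of the intermediate frames.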
<div class="code">
$ cat > valgrind.suppress << EOF
{
rule_ignore_system_libs
Memcheck:Leak
obj:*/lib*/lib*.so.*
}
{
rule_ignore_tablib_invalid_read
Memcheck:Addr32
fun:__wmemcmp_avx2_movbe
fun:_ZNK6TagLib6StringltERKS0_
obj:/usr/lib64/libtag.so.*
}
EOF
$ valgrind <b>--suppressions=valgrind.suppress</b> ./audiotag ../tests/test.mp3
</div>
With suppressions enabled you should be able to concentrate on items related to your project. However, if you need to dig further, there are <a href=https://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver-commandhandling>advanced techniques</a> to help narrow down your memory issues: using <a href=https://developers.redhat.com/articles/2021/11/01/debug-memory-errors-valgrind-and-gdb#scan_for_memory_leaks><code>gdb</code> along with <code>valgrind</code></a> is a powerful combination:<br>
<br>
You will need two terminals: one starting <code>valgrind</code> paused at startup, and <code>gdb</code> in another terminal connecting to the former.
<div class=code>
$ valgrind <b>--vgdb-error=0</b> ./audiotag -d1A -l ../tests/test.mp3
==40385== Memcheck, a memory error detector
==40385== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==40385== Using Valgrind-3.19.0 and LibVEX; rerun with -h for copyright info
==40385== Command: ./audiotag -d1A -l ../tests/test.mp3
==40385==
==40385== (action at startup) vgdb me ...
==40385==
==40385== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==40385== /path/to/gdb ./audiotag
==40385== and then give GDB the following command
==40385== <b>target remote | /usr/libexec/valgrind/../../bin/vgdb --pid=40385</b>
==40385== --pid is optional if only one valgrind process is running
==40385==
</div>
In second terminal:
<div class=code>
$ gdb ./audiotag
..
(gdb) <b>target remote | /usr/libexec/valgrind/../../bin/vgdb --pid=40385</b>
(gdb) monitor help
(gdb) <b>monitor leak_check full reachable any</b>
(gdb) br ...
...
(gdb) mo l
==40385== LEAK SUMMARY:
==40385== definitely lost: 0 (+0) bytes in 0 (+0) blocks
==40385== indirectly lost: 0 (+0) bytes in 0 (+0) blocks
==40385== possibly lost: 0 (+0) bytes in 0 (+0) blocks
==40385== still reachable: 299,633 (+0) bytes in 1,109 (+0) blocks
==40385== suppressed: 0 (+0) bytes in 0 (+0) blocks
==40385== Reachable blocks (those to which a pointer was found) are not shown.
==40385== To see them, add 'reachable any' args to leak_check
==40385==
</div>
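Two further memcheck monitor commands are worth knowing when chasing a specific block (see the memcheck manual for the full list; the address below is the one from the earlier report, purely for illustration):

```
(gdb) monitor block_list 1              # show the allocation stack for loss record 1
(gdb) monitor who_points_at 0xf6ea210   # find pointers into a leaked/suspect block
```

<code>who_points_at</code> is particularly handy for "still reachable" blocks, as it tells you which structure is keeping the allocation alive.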
Once connected, you can continue adding breakpoints and use the <code>monitor</code> commands to evaluate the specific execution paths where the memory errors occur.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-30125616966642671882023-01-29T18:20:00.040+00:002023-03-06T12:11:31.224+00:00Ford Focus Active X 2023 user experienceThe 2022 facelifted Mk4 Focus, <a href=https://uk.motor1.com/news/540529/2022-ford-focus-facelift-revealed/>announced in Oct 2021</a>, replaces the <a href=https://en.wikipedia.org/wiki/Ford_Focus_(fourth_generation)>original Focus Mk4 (2018-2020)</a> - only available in Europe - and is to be <a href=https://uk.motor1.com/news/593846/ford-focus-production-ends-2025/>the last production run of the Focus line</a>, as Ford had previously announced it will be pivoting towards electric vehicles.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKy2xAjovsKUUtD6Vz6Oh-fT9_2MlADg_ImYb4Dvv5IAuZ8WeVV6xf2Q2ftWHx5QMNwd2S5HqcKrZKnD5eTTDgeRfwDGoQhw5OL_1dP6Q-Bs_aLVD7x8xQDb4EGSkwsiESH1BjrvJb6ME6XZ8WYt7vRTQuHps5AUTXmaf8x9TZsDMPzRWgEHs/s1600/ford-focus-2022-ford-focus-active.jpg" width=95%>
<sup><i>Focus Active X 2023 facelift</i></sup><br>
I have previously driven a variety of family-oriented Fords designed between ~1993-2015, including the Mk1 and Mk2 Mondeo/Focus and the Mk1 S-Max; there are a number of changes to adjust to, and this is my experience of driving the facelifted Mk4 Focus Active X 2023.<br>
<br>
<a name='more'></a>
Whilst announced in 2021, delivery of the car to end users was hampered by supply-chain issues resulting from Covid and the Russia-Ukraine conflict; as such, the facelifted version may be referred to as the Focus 2022 or 2023 given the delayed delivery, but it is visibly different from the pre-facelift car given the new grille and the blue badge placement on the bonnet.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8BMRk7i7x4SgT5jvRgagvIJQVuEPKkaUQWEuNqDwt6VHTj1fcNXo18fzJwV-3RyA3gMU-l2njTHuifkNC0Tbd4gZ00ogrHeY_Y5XSDXq2i91Fr0upn9CcHZIPUCf6acJ-RxQxOv5Z5V5PtCvpLbbwZdBgGhm_Qme7HLYXPRTxfIVn4aBYZ_8/s1600/ford-focusmk4-mk4.5.jpg"/>
<sup><i>Focus Mk4 2018 vs Mk4 2022 facelift</i></sup>
<br>
The instrument cluster in front of the steering wheel is digital, but the immediate issue is that the rev counter and speedometer are in the wrong positions, the rev counter being on the right. For cars without a digital cluster, the rev counter is on the left as you would expect. This spec provides tyre pressure sensors, but these read a little off compared to my old foot pump.<br>
<img width=95% src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2AVe_NmxKNjJG9EllPA43bQ0OWso7LksgxFSBbwWg90-jPmnZxD1Rqs8zdU1ZLUf0s8IXf2VkLOOkkt40hLT5EFxBpHh43ATjw1e4b1priuhaQ5eOSj-KPdq3bTPtxuDRQ2u_PrSGT5SYDPlhEQ1bthh_DTyHKSDUyB0LaNUu_TM93HM0oiA/s1600/ford-focus-instrument.jpg"/>
<sup><i>LHD instrument cluster with optional headup display</i></sup>
<br>
There are two key items with respect to parking and stopping: the <i>electric park brake</i> and <i>auto hold</i>. The manual physical handbrake is replaced with a tab in the centre console that you engage by pulling up and release by pressing down - <i>auto hold</i> is enabled with the button on the centre console behind the <i>electric park brake</i>.<br>
<br>
The park brake does what you expect, but <i>auto hold</i> is a feature that engages the brakes automatically once the car comes to a stop, meaning the driver no longer needs to hold the brake pedal - very useful for those who have historically had problems with hill starts. Both the <i>electric park brake</i> and <i>auto hold</i> disengage when you lift the clutch/press the accelerator - this takes a little getting used to at first. When <i>auto hold</i> is engaged, an indicator is shown in the bottom right of the instrument cluster.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_tR7nFbypKEdLNcUjEAHXtJKmw4B5djcYlbLqVsy2fWF-w-wRScbjdiI__QGg2HxQjpKd-RHt25eBhCrcRhNDliAeYhdMeYg5IoimPdo3aLrT0HGaZlsk-3LUAT7LKJNb_yfSLQDlyTQxkRNA7BNlE65Sc4ZNHl5cx52GJbWwnlRIt7Td4gw/s1600/ford-st-interior.jpg"/>
<sup><i>Interior cabin of LHD ST-line, notice the electric brake and auto hold buttons behind the shift lever on console</i></sup>
<br>
The only difference between <i>auto hold</i> and the <i>electric park brake</i> is that the former is engaged automatically by the car itself when enabled, whereas the <i>electric park brake</i> needs to be engaged just before the ignition is switched off. The <i>electric park brake</i> can only be disengaged when the ignition is on. The <i>auto hold</i> enabled/disabled setting is retained the next time the car is started.<br>
<br>
The ride height of the Active X is higher than the standard Focus: the combination of slightly elevated suspension (an increase of 3cm at the front and 3.4cm at the rear) and 18" wheels gives ~10cm ground clearance at the driver side. Whilst this height is not comparable to an SUV or even an MPV like the S-Max, it makes entering and leaving the car easier, especially for older passengers. The rear passenger leg room is just about sufficient but not great.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0B5iQB-EhpfA-He0Xr7ZC2RxeZm_Jk_Caw_FlpaGLJ9o_QRn93yZCLAa0l0rfP2-_AT_QEorTAPca2YqIhD8QvETlOFFSv7mhEGizqyyQpht_TTDTsniP2uXNFAijNg7nAnR4H0_8B8zxFqCCICfBiUFZkhUtm8rZKgU8hKw7SDCVOVofmO8/s1600/ford-focus-side.png"/>
<sup><i>Focus ST-line side view</i></sup>
<br>
<h2>Hardware</h2>
Boot space measures 100cm wide and extends 79cm into the car (with a loading lip depth of ~8cm) - it feels like other cars in this segment (particularly the VW Golf) but fills up quickly with the weekly shop. A mini spare wheel is available under the boot floor, normally obscured by the removable subwoofer - whilst not perfect, at least a wheel is provided; the subwoofer takes ~10cm. The Active X variant's boot-mounted subwoofer raises the boot floor and thus slightly reduces the storage available even compared to the Active (non-X) variant (loading lip depth of ~17cm). The boot has a non-flat loading lip, and with the removable parcel shelf in place there is about 45cm clearance from the boot floor.<br>
<br>
Loading for a vacation would be tough for a four-person family - the boot, with rear seats up, will take two large (~80cm high) suitcases loaded width-ways across without the loading shelf, but after that it's a tough fit, especially compared to, say, an S-Max that provides a 116cm wide x 93cm deep boot. Alternatively, the boot will fit 3x IATA-guidance-sized cabin suitcases (56x45x25) loaded length-ways, packed side by side on their short sides. Furthermore, unlike the Mk3 and earlier, the Mk4 Focus hatchbacks do not have a hinge for the rear seats to be folded forward to allow an extended flat boot when the rear seats are down.<br>
<br>
Internal door locking is a bit odd - there are no internal locking controls on the doors except on the driver's side, which controls the central locking. There are also no visible indicators from the outside as to whether the car is locked, unless you enable auto-folding for the door mirrors. Furthermore, there is NO auto-lock feature (i.e. all doors locking when the car is driving), only auto-unlock for when the car is stopped and the driver's door opened - the <a href=https://www.fordownersclub.com/forums/topic/120014-auto-lock-doors-when-driving/>auto-lock feature is disabled in Europe</a>.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSU6m8Eu2q9w_5DQ6gnJuK5TmigEdVqv5VQcfvOonUhp4A1gDsq3GSL4_VUI4gEn1TDpbHweOseF4COsiOSaslb9hCe2ZQ2p5UhCBrt-N0NK1g2R-YnCS7GogJ4qpF3sUXeOZAcTQDLwuE9-vkIF6wFa6FYC3rXOA3g80P39HZeHIG_hmw-5o/s1600/2021_FORD_FOCUS_ST_lock.jpg"/>
<br>
However, when the driver engages central locking from the door controls it is not possible to open the doors externally, but it is possible to open them from the inside. Traditional physical child locks are available - a turn-key in the rear door frames - and these need central unlocking before the door can be opened from the outside.<br>
<br>
The car has multiple sensors that indicate proximity to obstacles, useful for parking, displayed clearly on the Sync4 screen. There are other features like <i>Lane Assist</i> (enabled on the turn signal stalk) which can give tactile vibration feedback via the steering wheel when it determines the driver is veering.<br>
<br>
Under the bonnet there is no engine cover, but the battery appears to be positioned so that it can be easily swapped out rather than first requiring removal of the air filter.<br>
<br>
Apple CarPlay is available via Bluetooth as well as USB and works as you would expect. When enabled, Google Maps is available, displayed full size on the Sync4 screen - this will be very useful when the Ford navigation subscription expires.<br>
<br>
USB connectors are in two areas: the lower central console (in front of the gear knob) with USB-A and USB-C next to the 12-volt outlet, and two USB-C ports for the rear passengers located on the central column.<br>
<br>
The 13" Sync4 touchscreen (although some early or dealer-only models come with the older/slower/smaller Sync3 screen) is a key feature of the facelifted Focus. It is a <i>connected</i> system that sends status updates to a Ford cloud service for the mobile apps, and also receives over-the-air software updates. The Sync4 system provides maps updated over the air, and new vehicles come with a one-year subscription to live navigation. With the larger Sync4 screen, all the climate control and audio functions (except the volume) are at the bottom of the screen.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBzl2vjGTV4c3cVQER_p_bDBWbXcB7vmlvoFbRnXBhB3nZSJ058g3IJ-wFL0ZynIe3XVyvgvErNB5_-XpbUDtH-q90cDboSff0uAmPCwSAjsaqipOahfOkCvMCMzrHKnkwn6dsiZ3JUdAqu_Hlbk1_M1atj5vZRYeHYFLFsLFk0WD7dT6PO6A/s1600/ford-sync4.jpg"/>
<br>
The only physical buttons under the Sync4 screen are the <i>engine start/stop</i>, the <i>audio on/off and volume knob</i>, and a four-button quick-select panel: <i>drive mode</i> (also selectable directly on the Sync4 screen) with <i>Trail</i> and <i>Slippery</i> modes unique to the Active series, <i>max windscreen heating</i>, <i>engine auto start-stop</i> (enabled by default and activated when the car is stopped and shifted to neutral for more than 3 seconds) and <i>parking assist</i> (from the optional Parking Pack).<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEik6gJ2DQ1r9NO3AXtShUlBD3J2d9BlvJMrlmLq4NlsjmHcr1mWCxEHAOVBrx7ObUut58VaWUc-DTilgz3tcSGvNhsCN6kZVip_mIOB3mz7RRx4fTsdh6cHyvBjucig3iuvHn-AMscv7Vfj24_hcNKwSdN07Tpx1gPZSrC3sLhTSQFY1GcWc0c/s1600/focus-physical-buttons.jpg"/>
<sup><i>LHD physical buttons</i></sup>
<br>
The Sync4 music system plays music from DAB radio, Bluetooth and the USB ports using FAT32-formatted (but not exFAT) USB sticks - the metadata from the music files (I have observed support for MP3 and AAC) is scanned and indexed automatically.<br>
<br>
<h2>Apps</h2>
The car has a modem for its data requirements; this also provides connectivity for smartphone apps that can monitor the vehicle location, lock/unlock the doors and obtain other status such as tyre pressure and fuel level. There are two variants: <i>FordPass</i> and <i>FordPass Pro</i>, with the latter focused on business users who can more easily monitor up to five vehicles, keyed on an email address. The app requires a Ford website account registered against an email address; on the Ford website you can perform the majority of the same functions with some minor exceptions: the website can add payment cards, while the app can activate the vehicle and lock/unlock it.<br>
<br>
When the car is new, its connectivity requires <i>activation</i>: the VIN is entered into the app, which in turn pings the car. If you are the first to <i>activate</i> the car, you will need to enter the vehicle and follow/confirm the on-screen Sync4 notifications to complete the activation - subsequent users can be approved by the existing user, while they see a message in the app informing them of the wait.<br>
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2KCgOTLciRocINFPXGf9DBtpHML3YEYeZpR_sqZhKlzW-LIGJWdXiVZAnB7jWsZ90IbME0ncJpYMPO5VgSXVn7rxozsrXFc9tg74B1pyoeG8mQgt_DcC3-jt2r6-q4OkBUpP6ijmE42kyocyf3w2pd_KQZsIZntOSGxZ7aE5hwFSUvoLuhDI/s1600/ford-pass.jpg"/>
<sup><i>Ford Pass Left | Ford Pass Pro Right</i></sup>
<br>
The list of existing authorised users (by email) can only be seen in the FordPass app, which also allows users to manage subscriptions associated with the car, such as live traffic updates. The lock status of the car is only visible in FordPass Pro, although both apps provide lock/unlock functionality. To remove authorised users there are two options: individual users removing the vehicle from their own app, or mass removal by performing a <i>master reset</i> on the Sync4 screen - the latter will require a user to <i>activate</i> the car again.<br>
<br>
One great feature is that the apps can display the car's last known location on a map - the location is valid as of when the car was last parked/powered.<br>
<br>
<h2>Conclusions</h2>
The drive of the 2023 Focus is nice and shouldn't come as a surprise to other Focus drivers; having driven the original Mk1 and Mk2 models, this last model is equally fun to drive - the steering is very light but responsive. The 3-cylinder 1.0L engine has a slight growl to it, although I would have liked it a little louder, especially on the motorway, so it's easier to judge when to shift up instead of scanning the rev counter. Torque is enough to pull the car up a hill without having to add too much gas, greatly helped by the <i>auto hold</i> feature.<br>
<br>
The boot space of a hatchback comes with its usual constraints; there is an estate version of the Focus variants that offers a flat loading lip, a slightly wider boot (114cm vs 100cm) and another 20cm in depth for those who need the extra space and want to stay in this vehicle class rather than jumping to the SUV/crossover offerings.<br>
<br>
Overall, the last Focus facelift is a handsome car with its more aggressive front, and fun to drive, although its size does come, as it always has done, with practical limitations - but after 25 years on the market this shouldn't be a surprise and certainly does not detract from the car.
<img width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYI2qVlLeI95zr5U9jKcxUwr4qVGr6HGANXxOVSmkuIsldzH5n9PuCh4j3y8-j9puaW3pAXHIiVPQhTvmJEBngSAVATHxNUTfzGjk9cSQNYmcWqyCtWPuaH5LZQE7G5rhRqPvsNIXQZnbw1xpnqbvXYBwwJQKiozj0jWIk0GU8LNzuQCenA2s/s1600/ford-focus-mca-c519-eu-STL_03_C519_Focus_Ext_Rear_3_4_Static-16x9-2160x1215_gt.jpg.renditions.extra-large.jpeg"/>
<sup><i>Focus ST-line rear view</i></sup>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-24205281125077276412023-01-14T13:34:00.002+00:002023-01-16T17:58:14.885+00:00Faking RPM db dependenciesThe Fedora package manager takes care of all your dependencies, but how do we deal with user-compiled binaries and missing dependencies?
<a name='more'></a>
<div class=code>
$ sudo dnf install simplescreenrecorder
...
Error:
Problem: conflicting requests
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libavcodec.so.58()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libavcodec.so.58(LIBAVCODEC_58)(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libavformat.so.58()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libavformat.so.58(LIBAVFORMAT_58)(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libswscale.so.5()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.3-3.fc35.x86_64 requires libswscale.so.5(LIBSWSCALE_5)(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libavcodec.so.58()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libavcodec.so.58(LIBAVCODEC_58)(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libavformat.so.58()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libavformat.so.58(LIBAVFORMAT_58)(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libswscale.so.5()(64bit), but none of the providers can be installed
- package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libswscale.so.5(LIBSWSCALE_5)(64bit), but none of the providers can be installed
- package ffmpeg-libs-4.4-7.fc35.x86_64 is filtered out by exclude filtering
- package ffmpeg-libs-4.4.3-1.fc35.x86_64 is filtered out by exclude filtering
(try to add '--skip-broken' to skip uninstallable packages)
$ dnf reinstall simplescreenrecorder -y --downloadonly --downloaddir=.
</div>
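Before reaching for the workaround below, note that the <code>-P</code> capability strings it needs can be produced mechanically from the dnf error text - a self-contained sketch (the sample input lines are copied from the output above):

```shell
# fabricate a two-line sample of the dnf error text
cat > deps.txt << 'EOF'
 - package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libavcodec.so.58()(64bit), but none of the providers can be installed
 - package simplescreenrecorder-0.4.4-1.fc35.x86_64 requires libswscale.so.5()(64bit), but none of the providers can be installed
EOF
# one -P"capability" flag per unique missing dependency
grep 'requires' deps.txt \
  | sed -e 's/^.*requires \(.*\), but non.*$/-P"\1"/' \
  | sort -u
# -> -P"libavcodec.so.58()(64bit)"
#    -P"libswscale.so.5()(64bit)"
```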
This shows that <code>simplescreenrecorder</code> has unresolved <code>ffmpeg 4.x</code> runtime dependencies - on this machine I have ffmpeg 5 and a self-compiled ffmpeg 4 (to support FDK). So at this point I have two options: install the ffmpeg-libs-4.x package from RPMFusion and then overwrite the provided libs with my self-compiled ones, or fake the dependencies known to the RPM db.<br>
<br>
<div class=code>
# https://github.com/larsks/fakeprovide
$ sudo curl -L https://raw.githubusercontent.com/larsks/fakeprovide/44698c8b398bb5f8071e1dc3f63c3f275861a250/fakeprovide -o /usr/local/bin/fakeprovide && sudo chmod a+x /usr/local/bin/fakeprovide
$ sudo dnf install rpm-build
</div>
With the required utils, we can create a dummy/fake rpm that declares the required dependencies above, then install it.
<div class=code>
$ fakeprovide -a x86_64 \
-P"libavformat.so.58()(64bit)" \
-P"libavformat.so.58(LIBAVFORMAT_58)(64bit)" \
-P"libswscale.so.5()(64bit)" \
-P"libswscale.so.5(LIBSWSCALE_5)(64bit)" \
-P"libavcodec.so.58()(64bit)" \
-P"libavcodec.so.58(LIBAVCODEC_58)(64bit)" \
-v 4.x ffmpeg-libs
# alternatively, take the dependency output of the dnf install above
# cat > /tmp/deps << EOF
...
# to generate the -P.... lines
# $(grep requires /tmp/deps | sed -e 's/^.*requires \(.*\), but non.*$/-P"\1" \\/g')
# examine the generated rpm
$ rpm -qp --provides -i ./fakeprovide-ffmpeg-libs-4.x-1.fc35.x86_64.rpm
fakeprovide-ffmpeg-libs = 4.x-1.fc35
fakeprovide-ffmpeg-libs(x86-64) = 4.x-1.fc35
ffmpeg-libs
libavcodec.so.58()(64bit)
libavcodec.so.58(LIBAVCODEC_58)(64bit)
libavformat.so.58()(64bit)
libavformat.so.58(LIBAVFORMAT_58)(64bit)
libswscale.so.5()(64bit)
libswscale.so.5(LIBSWSCALE_5)(64bit)
Name : fakeprovide-ffmpeg-libs
Version : 4.x
Release : 1.fc35
Architecture: x86_64
Install Date: (not installed)
Group : Fake
Size : 108
License : GPL
Signature : (none)
Source RPM : fakeprovide-ffmpeg-libs-4.x-1.fc35.src.rpm
Build Date : Sat 14 Jan 2023 12:48:48 GMT
Build Host : yoga
Summary : Fake provide for ffmpeg-libs.
Description :
Fake provide for ffmpeg-libs.
$ sudo rpm -i ./fakeprovide-ffmpeg-libs-4.x-1.fc35.x86_64.rpm ./simplescreenrecorder*.rpm
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-1365783714177065532022-10-09T10:00:00.041+01:002022-10-28T20:24:06.054+01:00Herman Miller Aeron: Replacing the Gas Lift <a href=https://whatdoineed2do.blogspot.com/2016/08/herman-miller-aeron-pitfalls-of-trying.html>Having documented other people's difficulties with replacing their Herman Miller Aeron gas lifts</a>, it was finally time to do the same for my own 1999 chair. What are the current challenges and tools available for this activity in 2022?<br>
<br>
Since acquiring my late-90s Aeron, most of the common parts have been replaced (sunken seat pan, plastic clamshell hip bolts, torn seat back, fixed arm rests, wobbly castors, torn lumbar support, seat pan edge foam insert) but, whilst the gas lift had always been sticky and the chair wobbled, its replacement was put off due to the reported difficulties. More recently deferring was no longer an option since the gas lift would no longer operate, being stuck after lowering. But how easy is the job, and what options for parts are available?<br>
<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX3jcIlwBaB21Yv-4G5GfAb1pPG-Z6zLhN1gddGnMsAHewONTGoygjD3tHZg-Tsji9FPeUfZM0aIPdwGJuJ7UfPJII813B2-O2po-EF7kZTe97S4_zZMLvMmfj1BkK2As_1B3X6lKn89gyGQ_iGiJgaA2MnOFsS0zAASTSRQOsaxiW_M51Onc/s1600/_DSC2139.jpg" width="95%">
<br>
<a name='more'></a>
<h3>Gas Lift Basics</h3>
Gas lifts in most chairs are a universal size with the following industry measurements/characteristics:
<ul>
<li>outer (50mm diameter) tube length, V</li>
<li>min and max length of total gas cylinder, L1 and L2</li>
<li>extension length of the telescopic piston (28mm diameter) including taper to tilt mechanism, (L2-L1) Stroke</li>
<li>outer tube tapered section into base, typically 60mm length, X</li>
<li>projection from under base (below X), Y</li>
<li>top button or side cable activation</li>
</ul>
The only significant gas lift specification is the <i>Class</i> of the cylinder, which dictates the weight it can support and the metal wall thickness of the tube and telescopic piston: Class 4 supports heavier loads (commonly quoted as 150kg-250kg) compared to Class 3 (up to 150kg). The key point: given the weight of the Aeron chair itself and the support it needs to provide the user, you should be looking for a Class 4 gas lift - don't repurpose one from an Ikea chair even if it does physically fit.<br>
<br>
<h3>The OEM gas lift spec</h3>
The factory fitted original 2 stage telescopic gas lift I removed from a 1999 mark I Aeron had the following observations:
<ul>
<li>outer tube tapers from 50mm to 47mm for section X, 210mm length V</li>
<li>telescopic piston tapers from 28mm to 26mm, with 28.5mm insert into tilt mechanism</li>
<li>250mm (10") stroke</li>
<li>40mm length projection, Y</li>
<li>top button activation</li>
</ul>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmms1FGG-_5sR1o0lZbXoC2ml060_vwy7CvrCWUjF4ekDsExlb92HMOkSXnCMtgAdf_krsuzEOeUj5k-E3qhWTEzlx6c2M5rl891A40RvIRbSijlbGzEwbAs7IZU0ZxP_nATUsn7ovbRMEcVonunLedsd8jZYns_mpGhb02Pt4C6Luyp6mK0c/s1600/gaslift.jpg" width=95%><br>
It appears that Herman Miller have used a gas lift with industry standard specifications which makes sourcing a replacement much easier.<br>
<br>
The type of gas lift activation is important as the tilt mechanism uses different hardware depending on type and the two types are not compatible: <i>pre 2012</i> top button activated, <i>post 2012</i> side cable activated.
Identification is relatively easy once you pop the top cover of the tilt mechanism and visually inspect. Button activated chairs/cylinders have a butterfly-like activation mechanism with a lever fitted over the activation button that depresses the button when the chair's right side actuator tab is pulled.<br>
<br>
<h3>Verifying the Gas Lift is Defective</h3>
Unless you are absolutely sure that the gas lift is defective, I would recommend that you first verify that you do indeed require a gas lift replacement before attempting it, given the potential cost, trouble and damage you can cause. One common problem with gas lifts is that users report their chair sinks slowly as they sit. This can definitely be a defective gas lift but it can also be an inadvertent gas lift activation on button activated chairs: as you sit, the extra weight subtly presses down on the actuator button and thus the chair sinks.<br>
<br>
Alternatively, a similar and related problem is when the chair neither rises nor falls as the gas lift actuator is pulled.<br>
<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIklCD4XfRNz4ly9quDr9sBGdB3RYonqFxt-OUXS-OW7ozylm3dRpRoYMQRmfOjMBLiGYLUkvn0303F8Rw6QxM375OQD3fQ8KoH5910RA_82c2MbYSM-ErF41NIbssEp7PZr_-3yk5AODbQM5b1bSufWSkRP-sIm_rZyynJBTm9p0rfrLFyi4/s1600/mechanism.jpg" width="95%">
<br>
In both cases you should check the following: the pre-2012 Aeron's actuator button sits under a butterfly mechanism where the base of the butterfly mechanism's lever is a 4mm hex set screw adjustment; adjusting this raises or lowers the lever in its resting position - it should be adjusted so that the lever has enough tension/no bounce when pressed with your finger, which will also leave the actuator pull tab with no slack.<br>
<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_b2B0BaJ0yEdOT0dRaGN-ZEMZE0MMIRBWh9aam_1QYiHZYfz8XyUHeJfePfge-I2jnTNQBH3lYlCqCeYhADobwJ8XnLgDQ-Yhl6ILY9WiG0BZZDd2MghmGQ2qcFDj4Lc6e4WIIicBlHVYfXANk8IRWevhc8QXW0qF18ZJahPnroMjsbGvBXU/s1600/actuation%20lever%20and%20set%20screw.jpg" width=95%><br>
<sup>(c) Herman Miller: green - lever, blue - 4mm set screw adjustment</sup><br>
<br>
If, after this set-screw adjustment has been completed, the gas lift is still not operating correctly then you know it's the gas lift.<br>
<br>
<h3>Replacement Process and Tools</h3>
The internet's recommendations for removal involve one of:
<ul>
<li>a specialised <i>lift off tool</i> that is attached to the telescopic tube, butted up against the tilt mechanism and hammered until separation</li>
<li>exposing the top of the gas lift in the tilt mechanism, with removal of the seat pan and any activation mechanisms, and hammering out the gas lift via a special diameter pipe</li>
<li>a pipe wrench to twist off the cylinder by gripping the telescopic tube</li>
</ul>
the first two of which are mentioned in Herman Miller's service manuals.<br>
<br>
However, in the last 3 years or so a further option has appeared which involves a simple two metal ring removal tool: one ring is tightened and clamped to the 28mm telescopic tube and screws are turned to push against the second metal ring, which is loosely attached around the same telescopic tube and pressed against the tilt mechanism - the extension of the screws gently and slowly pushes the tilt mechanism away from the telescopic tube, finally separating the two.<br>
<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEuhQkrg5kfvY7K5bDoSoNSx7bGkvj0W1-MW8nHATPNMwxO_SthLRL0M6fhtzczF4VYz9qZC7ftCQxwnDT7FIjTuCuGZ4PYGU5v1ubqsw_-6QxlSVwShLr-YxAr94X9iuY7jmzTQ2aNZmtW7s6qSRl15CITIOjSaUqLK6HReo6twyBPifh5EM/s1600/removaltools.jpg">
<sup>(c) Office Oasis</sup><br>
<br>
Using the 2 metal rings is best done with your chair toppled over so that the chair back rests on the ground - this leaves the base up in the air.<br>
<br>
In the UK a kit consisting of the rings and a Class 4 single stage gas lift was available from an American firm as the <i>Office Owl universal gas lift with removal tool</i> for about 30 GBP; the (reusable) metal ring removal tool was being sold elsewhere for 20 GBP. In the US market, there appears to be an <a href=https://theofficeoasis.com/products/office-chair-cylinder-replacement?variant=31153471717430>identical set from <i>Office Oasis</i></a>.
<br>
This removal method also requires that the 28mm telescopic tube is accessible. For me, this was not the case - having previously tried and failed to twist off the gas cylinder using <a href=https://en.wikipedia.org/wiki/Tongue-and-groove_pliers>tongue and groove pliers (aka water pump pliers or grips)</a> (rather than the min 24" recommended <a href=https://en.wikipedia.org/wiki/Pipe_wrench>pipe wrench</a> that I don't own), the chair had sunk further. With the chair positioned base up, I had to use the same tongue and groove pliers to grab hold of the wider/2nd stage telescopic tube and twist and pull to extend the 28mm telescoping tube, which appeared to have been gummed up with 20+ years of dried up lubricant.<br>
<br>
I would definitely recommend separating the tilt mechanism first, since separating the 50mm tube from the base requires brute force hammering of the tube from under the base. When I hammered the tube out from the base there was quite a lot of damage to the gas cylinder - if you separate the base first and are then, for whatever reason, unsuccessful with the tilt separation, you have no working chair; whereas if the tilt separation fails first, you can still use your chair with the same issues you had prior to your removal attempt.<br>
<br>
A <a href=https://www.youtube.com/watch?v=A0QxrH72RxI>great video</a> showing the removal process using a pipe wrench for top and side activated gas lifts is available from this <a href=https://www.crandalloffice.com/>US-based office refurb seller</a>.
<br>
Installation is simple: insert the new gas lift into the base and then the tilt mechanism on top - gently sit in the chair to seat everything. One nice upshot, aside from the working gas lift, was that the new cylinder removed the slight wobble of the chair.<br>
<h3>Conclusions</h3>
<br>
Whilst gas lift removal is still not the easiest Aeron maintenance task, it is now significantly easier with these tool options and breathes new life into your aging Aeron.
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE8OqNfUgXKsrJsWTia__sM46TsbERYFRgiDfoEv0l6AaKjaRsJ-131G7Pr9bO3q7rFbJ7Ki6zTGRRYPiz_GDSFF2GSMnsNTPRftPLrcuGtxk3afQuOlE_pNXfSvmeKWR5vpu--ZKuH3VRdVSSkUaZu4ZHGyPO9owFHKFXNmECmqeqoYM-7nI/s1600/_DSC2133.jpg" width=95%>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-77048575674464931272022-09-10T17:19:00.001+01:002022-09-10T21:12:23.358+01:00Disabling debuginfods and manually loading symbolsFedora 32 introduced <a href=https://fedoraproject.org/wiki/Debuginfod><code>debuginfod</code></a> which is meant to provide dynamic debug symbols to debugging tools. The problem I've found is that using <code>valgrind</code> is horrifically slow as the debug symbols are downloaded and processed.
One way to avoid this is to disable <code>debuginfod</code>: <code>rm /etc/debuginfod/*.urls; echo "set debuginfod enabled off" > /etc/gdbinit.d/debuginfo.gdb</code>. But how do we get our debug symbols for <code>gdb</code>?
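Gathered as commands for convenience - the <code>gdbinit.d</code> path follows Fedora's packaging, and <code>DEBUGINFOD_URLS</code> is the elfutils client environment variable if you prefer a per-shell opt-out:
<div class=code>
# stop debuginfod lookups system-wide
$ sudo rm /etc/debuginfod/*.urls
$ echo "set debuginfod enabled off" | sudo tee /etc/gdbinit.d/debuginfo.gdb
# or per shell, without touching system config
$ unset DEBUGINFOD_URLS
</div>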
<a name='more'></a>
In the following example we're looking at a locally built version of <code>ffmpeg</code> where the installed libraries are stripped.
<div class=code>
$ gdb ffmpeg_g
(gdb) br av_rescale_rnd
(gdb) set args -i /tmp/input.wav -ar 22050 -ac 2 /tmp/output.mp3
(gdb) run
...
Breakpoint 1, 0x00007ffff6059c90 in av_rescale_rnd () from /usr/local/lib64/ffmpeg5/libavutil.so.57
Missing separate debuginfos, use: dnf debuginfo-install alsa-lib-1.2.7.1-1.fc35.x86_64 fdk-aac-2.0.2-2.fc35.x86_64 glibc-2.34-40.fc35.x86_64 lame-libs-3.100-11.fc35.x86_64 xz-libs-5.2.5-9.fc35.x86_64 zlib-1.2.11-31.fc35.x86_64
(gdb) bt
#0 0x00007ffff6059c90 in av_rescale_rnd () from /usr/local/lib64/ffmpeg5/libavutil.so.57
#1 0x00007ffff767dc1a in ?? () from /usr/local/lib64/ffmpeg5/libavformat.so.59
#2 0x00007ffff767f754 in ?? () from /usr/local/lib64/ffmpeg5/libavformat.so.59
#3 0x00007ffff7681426 in avformat_find_stream_info () from /usr/local/lib64/ffmpeg5/libavformat.so.59
#4 0x0000000000415240 in open_input_file (o=o@entry=0x7fffffffd8b0, filename=<optimized out>) at fftools/ffmpeg_opt.c:1286
#5 0x00000000004190d2 in open_files (open_file=0x414620 <open_input_file>, inout=0x431838 "input", l=0x441058) at fftools/ffmpeg_opt.c:3500
#6 ffmpeg_parse_options (argc=argc@entry=8, argv=argv@entry=0x7fffffffde68) at fftools/ffmpeg_opt.c:3540
#7 0x0000000000408717 in main (argc=8, argv=0x7fffffffde68) at fftools/ffmpeg.c:4538
</div>
We can see that there are no symbols. Since we built the libraries ourselves, the debug symbols exist in the non-stripped, pre-install libraries in our build tree and can be loaded into the debugger - either the extracted debug symbols alone or the full non-stripped library. Note that the debug versions are loaded at the specific base address that is already mapped in memory.
<div class=code>
$ objcopy --only-keep-debug libavutil/libavutil.so.57 /tmp/libavutil.so.57.debug
# in gdb
(gdb) info sharedlibrary
From To Syms Read Shared Object Library
0x00007ffff7fc9090 0x00007ffff7fee693 Yes /lib64/ld-linux-x86-64.so.2
0x00007ffff7fb08e0 0x00007ffff7fb73d8 Yes (*) /usr/local/lib64/ffmpeg5/libavdevice.so.59
0x00007ffff7a766d0 0x00007ffff7cdd936 Yes (*) /usr/local/lib64/ffmpeg5/libavfilter.so.8
<b>0x00007ffff7638b70</b> 0x00007ffff77ba9ec Yes (*) /usr/local/lib64/ffmpeg5/libavformat.so.59
0x00007ffff625f6a0 0x00007ffff6b9f618 Yes (*) /usr/local/lib64/ffmpeg5/libavcodec.so.59
0x00007ffff7f8e290 0x00007ffff7fa1acc Yes (*) /usr/local/lib64/ffmpeg5/libswresample.so.4
0x00007ffff7eee2e0 0x00007ffff7f6d94b Yes (*) /usr/local/lib64/ffmpeg5/libswscale.so.6
<b>0x00007ffff603a510</b> 0x00007ffff60c8c8c Yes (*) /usr/local/lib64/ffmpeg5/libavutil.so.57
0x00007ffff7e0a390 0x00007ffff7e7a048 Yes (*) /lib64/libm.so.6
0x00007ffff5e28700 0x00007ffff5f9b1ad Yes (*) /lib64/libc.so.6
0x00007ffff7924ef0 0x00007ffff79bbd8b Yes (*) /lib64/libasound.so.2
0x00007ffff7de65f0 0x00007ffff7df375b Yes (*) /lib64/libz.so.1
0x00007ffff78c89f0 0x00007ffff78e270e Yes (*) /lib64/liblzma.so.5
0x00007ffff74d3270 0x00007ffff75b67de Yes (*) /usr/lib64/fdk-aac/libfdk-aac.so.2
0x00007ffff5d8fbf0 0x00007ffff5dbe35f Yes (*) /lib64/libmp3lame.so.0
(*): Shared library is missing debugging information.
(gdb) add-symbol-file ./libavformat/libavformat.so.59 0x00007ffff7638b70
add symbol table from file "./libavformat/libavformat.so.59" at
.text_addr = 0x7ffff7638b70
(y or n) y
Reading symbols from ./libavformat/libavformat.so.59...
(gdb) add-symbol-file /tmp/libavutil.so.57.debug 0x00007ffff603a510
add symbol table from file "/tmp/libavutil.so.57.debug" at
.text_addr = 0x7ffff603a510
(y or n) y
Reading symbols from /tmp/libavutil.so.57.debug...
(gdb) bt
#0 av_rescale_rnd (a=a@entry=1, b=45158400, c=44100, rnd=rnd@entry=AV_ROUND_DOWN) at libavutil/mathematics.c:65
#1 0x00007ffff767dc1a in compute_pkt_fields (s=s@entry=0x441480, st=st@entry=0x442200, pc=pc@entry=0x0, pkt=pkt@entry=0x441780, next_dts=next_dts@entry=-9223372036854775808,
next_pts=next_pts@entry=-9223372036854775808) at libavformat/demux.c:1006
#2 0x00007ffff767f754 in read_frame_internal (s=s@entry=0x441480, pkt=pkt@entry=0x441780) at libavformat/demux.c:1324
#3 0x00007ffff7681426 in avformat_find_stream_info (ic=0x441480, options=0x442d00) at libavformat/demux.c:2611
#4 0x0000000000415240 in open_input_file (o=o@entry=0x7fffffffd8b0, filename=<optimized out>) at fftools/ffmpeg_opt.c:1286
#5 0x00000000004190d2 in open_files (open_file=0x414620 <open_input_file>, inout=0x431838 "input", l=0x441058) at fftools/ffmpeg_opt.c:3500
#6 ffmpeg_parse_options (argc=argc@entry=8, argv=argv@entry=0x7fffffffde68) at fftools/ffmpeg_opt.c:3540
#7 0x0000000000408717 in main (argc=8, argv=0x7fffffffde68) at fftools/ffmpeg.c:4538
</div>
Once loaded we can see the symbols are available.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-49075501288241667522022-07-05T22:05:00.009+01:002022-07-09T20:28:08.685+01:00A local Openshift 4.x development environment on your laptopHaving access to a dev <code>OpenShift</code> 4.x cluster that you control is invaluable - <a href=https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers>Red Hat now provides this ability through their <i>Code Ready Container</i></a>, also known as <code>crc</code>.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJzpvs2SqqUjuAAugBc2qD9B3yhDXBc3IpvjclSjrE0Y5QmAb62kOQwYv58-u8dUGBQmNyYw7tx2TO8ofBaJyOnZxK0roW2aUflu-GEEXBTX7fbLJkjSGjTYnfI4crJgQK2wZKlOgrRtOys2Y_YfyhhPjthWeBem7Rp8P8NHK2mI2E57ef-sE/s1600/openshift-featured.png" width="95%"/>
<br>
Setting up the rest of the cluster and dev ecosystem is a little complicated at first, so here's a set of notes documenting how it can be done on an 8-core / 16GB Fedora 35 machine.<br>
<a name='more'></a>
<br>
All steps below refer to a machine called <code>devhost</code> which is an alias for the local server. <a href=https://github.com/whatdoineed2do/vanilla-node-rest-api>Code for the REST server and the configuration is available here</a>.<br>
<h2>Initial <code>crc</code> setup</h2>
<div class=code>
$ sudo dnf install -y podman skopeo
# add 'devhost' as an alias for the machine's IP addr on the crc i/f
$ echo "192.168.130.1 devhost" | sudo tee -a /etc/hosts
# set up OpenShift - this will require your 'pull secrets' as it downloads its virtual image to run
# this will result in a ~36Gb VM image under ~/.crc
$ wget https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/2.5.1/crc-linux-amd64.tar.xz
$ xz -d < crc-linux-amd64.tar.xz | tar xf - && sudo mv crc /usr/local/bin
$ echo 'export PATH=${PATH}:~/.crc/bin/oc/' >> ~/.bashrc
$ crc setup
INFO Using bundle path /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Caching crc-admin-helper executable
INFO Using root access: Changing ownership of /home/ray/.crc/bin/crc-admin-helper-linux
INFO Using root access: Setting suid for /home/ray/.crc/bin/crc-admin-helper-linux
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd service
INFO Checking crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle exists
INFO Getting bundle for the CRC executable
INFO Downloading crc_libvirt_4.10.18_amd64.crcbundle
119.76 MiB / 3.13 GiB [------>____________________________________________________________________________________________________________________________________________________________________
INFO Uncompressing /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle
crc.qcow2: 12.45 GiB / 12.45 GiB [---------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%
oc: 117.14 MiB / 117.14 MiB [--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%
Your system is correctly setup for using CRC. Use 'crc start' to start the instance
# if upgrading, remove existing cluster info (incl projects)
$ crc delete
$ <b>crc start</b>
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Loading bundle: crc_libvirt_4.10.18_amd64...
INFO Starting CRC VM for OpenShift 4.10.18...
INFO CRC instance is running with IP 192.168.130.11
INFO CRC VM is running
INFO Check internal and public DNS query...
INFO Check DNS query from host...
INFO Verifying validity of the kubelet certificates...
INFO Starting OpenShift kubelet service
INFO Waiting for kube-apiserver availability... [takes around 2min]
INFO Waiting for user's pull secret part of instance disk...
INFO Starting OpenShift cluster... [waiting for the cluster to stabilize]
INFO Operator openshift-apiserver is not yet available
INFO Operator openshift-apiserver is not yet available
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
INFO Operators are stable (3/3)...
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: ....
Log in as user:
Username: developer
Password: developer
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
# convenience if we ever need to log in to the cluster directly
$ cat >> ~/.ssh/config << EOF
Host crc
Hostname 192.168.130.11
User core
IdentityFile ~/.crc/machines/crc/id_ecdsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
$ chmod 600 ~/.ssh/config
$ ssh core@crc
</div>
<h2><code>Podman</code> as a local <code>registry</code></h2>
Allow <code>podman</code> and its images to be used remotely by <code>crc</code>. This is different to your local docker image cache that is available via your <code>podman images</code> command.<br>
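As an aside, once the <code>podman.socket</code> below is enabled, any podman client can drive the service remotely over that socket - a sketch, where the socket path assumes a rootless user session:
<div class=code>
# talk to the user's podman API socket rather than the local runtime directly
$ podman --remote --url unix:///run/user/$(id -u)/podman/podman.sock images
</div>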
<br>
The first thing you must do is set up your local docker <code>registry</code>: create a directory to house the container data and run the registry image.<br>
<div class=code>
$ systemctl --user enable --now podman.socket
$ firewall-cmd --permanent --add-port=5000/tcp --zone=libvirt
$ firewall-cmd --reload
$ mkdir ~/.config/containers
$ cat > ~/.config/containers/registries.conf << EOF
unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "docker.io", "quay.io", "devhost:5000"]
[[registry]]
prefix = "devhost/foo"
insecure = true
blocked = false
location = "devhost:5000"
short-name-mode="enforcing"
EOF
$ mkdir -p ${HOME}/.local/share/containers/registry
# finally run the registry
# this is running on the 'crc' interface (ie 192.168.130.1) that is created when crc starts;
# using the crc i/f ensures that you can run the cluster and registry on an isolated device with no network
$ <b>podman run --privileged -d --name registry \
-p $(getent hosts devhost | cut -f1 -d\ ):5000:5000 \
-v ${HOME}/.local/share/containers/registry:/var/lib/registry \
-e REGISTRY_STORAGE_DELETE_ENABLED=true \
--rm \
registry:2</b>
Resolved "registry" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/registry:2...
Getting image source signatures
Copying blob e69d20d3dd20 done
Copying blob ea60b727a1ce done
Copying blob c87369050336 done
Copying blob 2408cc74d12b done
Copying blob fc30d7061437 done
Copying config 773dbf02e4 done
Writing manifest to image destination
Storing signatures
c598323f0a44835e7771c79adb0e1280d7b3347bf96318f3b3be7adb9e20f7ee
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c598323f0a44 docker.io/library/registry:2 /etc/docker/regis... 15 seconds ago Up 16 seconds ago 0.0.0.0:5000->5000/tcp registry
</div>
Refs:<br>
https://github.com/containers/podman/blob/main/docs/tutorials/remote_client.md<br>
<br>
The registry is now available but runs in insecure mode, which means we have an additional step to configure <code>crc</code>:
<div class=code>
# https://github.com/code-ready/crc/wiki/Adding-an-insecure-registry
$ oc login -u kubeadmin https://api.crc.testing:6443
$ oc patch --type=merge --patch='{
"spec": {
"registrySources": {
"insecureRegistries": [
"devhost:5000"
]
}
}
}' image.config.openshift.io/cluster
</div>
Trying to set this up with a reverse proxy and a self-generated root CA plus self-signed cert does not work properly, even with the root CA installed on the cluster, so save yourself the headache.<br>
<h3>How to push an image to the local <code>registry</code></h3>
<div class=code>
# this provides a simple nodejs based server, with /api/status endpoint
$ git clone https://github.com/whatdoineed2do/vanilla-node-rest-api
# generate the docker image
$ make package
$ cat > Dockerfile << EOF
FROM node:18-alpine3.15
WORKDIR /app
COPY . .
EXPOSE 8080
CMD node server.js
EOF
$ export UUID=$(uuidgen) && \
podman build --squash -t vnra:${UUID} . && \
podman push --tls-verify=false vnra:${UUID} devhost:5000/foo/vnra:${UUID}
# validate it has been pushed
$ podman search --tls-verify=false devhost:5000/
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
devhost:5000 devhost:5000/foo/vnra 0
$ podman search --tls-verify=false vnra
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
devhost:5000 devhost:5000/foo/vnra 0
$ skopeo inspect --tls-verify=false docker://devhost:5000/foo/vnra
{
"Name": "devhost:5000/foo/vnra",
"Digest": "sha256:e4a7636b834c6287800a3a664ef3f5ce3f06d623437a37b104a81febef69b1e7",
"RepoTags": [
"latest",
"b835dd7",
"ecd9fe5",
"66f793b"
],
"Created": "2022-07-05T15:11:41.878665307Z",
"DockerVersion": "",
"Labels": {
"io.buildah.version": "1.23.1"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:8dfb4e6dc5179a0adf4a069e14d984216740f28b088c26090c8f16b97e44b222",
"sha256:be2771caf87008c0ade639b6debce2ddb8f735e32eeb73d4bc01a6c68c09c933",
"sha256:be4f0bf8cf1b2cab1e1197378bf7756cae87232d43ef1ec0c031e62cb83f6735",
"sha256:89383deba3bc0da6d79f88604e4710a8972c9e682412267fd565630d79e90cd4",
"sha256:0f3180c4d208c7874b0afddd1940fc3f297dd67b90944e40ed630cd5adaa3a4b"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NODE_VERSION=18.4.0",
"YARN_VERSION=1.22.19"
]
}
</div>
<h3>How to delete an image from the local <code>registry</code></h3>
<div class=code>
# registry MUST be run with '-e REGISTRY_STORAGE_DELETE_ENABLED=true'
$ skopeo delete --tls-verify=false \
docker://devhost:5000/foo/bar:0.0.2
# force garbage collection on running registry
$ podman exec registry \
/bin/registry garbage-collect \
--delete-untagged=true
/etc/docker/registry/config.yml
</div>
<h2>Configuring <code>CRC</code> to use our local development docker registry</h2>
Now that we have a local registry, we can see how to use it with <code>crc</code>. It is possible that we <i>could</i> <code>push</code> images directly to the cluster's internal repo but this is not typical of production environments.<br>
<br>
Create a <code>DeploymentConfig</code> that specifies the usual items and a reference to our local <code>registry</code> and apply it to the cluster which will pull the image and spin up the pod:
<div class=code>
$ cat > vnra.yaml << EOF
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: vnra
spec:
ports:
- port: 8080
targetPort: 8080
selector:
deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: vnra
labels:
app: vnra
spec:
replicas: 1
selector:
deploymentconfig: vnra
strategy:
# the Rolling strategy replaces pods gradually; Recreate would scale down prior to scaling up
type: Rolling
template:
metadata:
labels:
deploymentconfig: vnra
app: vnra
spec:
restartPolicy: Always
containers:
- image: devhost:5000/foo/vnra:66f793b
name: main
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
name: http
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/status
port: 8080
scheme: HTTP
periodSeconds: 60
successThreshold: 1
triggers:
- type: ConfigChange
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: vnra
spec:
to:
kind: Service
name: vnra
EOF
$ oc login -u developer -p developer
$ oc apply -f vnra.yaml
## pods should be spinning up
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
vnra vnra-foo.apps-crc.testing vnra <all> None
$ oc get pods
NAME READY STATUS RESTARTS AGE
vnra-8-deploy 0/1 Completed 0 18m
vnra-8-qvzn6 1/1 Running 0 18m
$ oc logs -f $(oc get pods | grep -v deploy | grep vnra | cut -f1 -d\ )
Server 66f793b running on port 8080
Tue Jul 05 2022 21:42:56 GMT+0000 (Coordinated Universal Time): #1 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
Tue Jul 05 2022 21:43:56 GMT+0000 (Coordinated Universal Time): #2 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
Tue Jul 05 2022 21:44:56 GMT+0000 (Coordinated Universal Time): #3 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
...
$ curl vnra-foo.apps-crc.testing/api/status | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 105 0 105 0 0 44080 0 --:--:-- --:--:-- --:--:-- 52500
{
"ip": "10.217.0.188",
"uptime": 1277.743065979,
"timestamp": 1657053916,
"version": "66f793b",
"requests": 23
}
</div>
<h3>Troubleshooting cluster image pull</h3>
If you are struggling with pods not spinning up due to <code>ImagePullBackOff</code>, you can verify that the cluster can communicate with the <code>registry</code> AND that the specified image name is available. A successful manual pull looks like this:
<div class=code>
$ oc login -u developer -p developer
$ oc new-app --image=devhost:5000/foo/vnra:latest --name=manual
--> Found container image 66f793b (1 days old) from devhost:5000 for "devhost:5000/foo/vnra:latest"
* An image stream tag will be created as "manual:latest" that will track this image
--> Creating resources ...
imagestream.image.openshift.io "manual" created
deployment.apps "manual" created
--> Success
Run 'oc status' to view your app.
</div>
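Separately from <code>oc new-app</code>, the registry can be probed directly from the dev host - a quick sketch, assuming <code>skopeo</code> is installed and the local registry is served over plain HTTP:

```shell
# confirm the registry answers and the tag exists
# (--tls-verify=false because this local registry is plain HTTP)
skopeo inspect --tls-verify=false docker://devhost:5000/foo/vnra:latest

# list the tags the registry holds for this repository (Docker Registry v2 API)
curl -s http://devhost:5000/v2/foo/vnra/tags/list
```

If these succeed but pulls from within the cluster still fail, the problem is usually the cluster's insecure-registry configuration rather than the image itself.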
Refs:<br>
https://cloud.redhat.com/blog/deploying-applications-from-images-in-openshift-part-one-web-console<br>
<h2>Helm</h2>
<a href=https://helm.sh/docs/><code>helm</code></a> can be thought of as a package manager for Kubernetes that lets you define and install configurations. It is particularly useful for its templating, where a common set of files can define different environments.<br>
<br>
First we need <a href=https://github.com/helm/helm/releases/><code>helm</code></a> itself and then a <code>helm chart repository</code> - we can use <a href=https://github.com/helm/chartmuseum>ChartMuseum</a> for the latter, which is part of the <code>helm</code> project.
<div class=code>
$ mkdir ~/.local/share/containers/helm
$ chartmuseum --debug --port=8089 \
--storage=local \
--storage-local-rootdir=~/.local/share/containers/helm
# one time setup
$ helm repo add chartmuseum http://devhost:8089
</div>
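Before wiring it into the cluster, it is worth a quick sanity check that ChartMuseum is actually serving an index - assuming the port used above:

```shell
# the repository index is plain YAML served over HTTP
curl -s http://devhost:8089/index.yaml

# refresh helm's local cache of all configured repositories
helm repo update
```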
Once the <code>helm</code> components are available, we need to make <code>crc</code> aware of the repository:
<div class=code>
$ oc login -u kubeadmin https://api.crc.testing:6443
$ cat << EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
name: helm-local-repo
spec:
name: helm-local-repo
connectionConfig:
url: http://devhost:8089/
EOF
$ oc login -u developer https://api.crc.testing:6443
</div>
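We can then confirm the cluster has accepted the repository definition:

```shell
# list the custom chart repositories known to OpenShift
oc get helmchartrepositories
```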
Refs:<br>
https://docs.openshift.com/container-platform/4.6/cli_reference/helm_cli/configuring-custom-helm-chart-repositories.html<br>
<h3>Creating a boilerplate <code>helm</code> chart</h3>
<div class=code>
$ helm create foo
$ tree foo
foo
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
3 directories, 10 files
</div>
We can make our modifications, in particular adding <code>{dev,prod}.yaml</code> files, and test:
<div class=code>
# https://helm.sh/docs/chart_template_guide/debugging/
$ helm lint --debug -f foo/dev.yaml foo
==> Linting foo
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
</div>
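Beyond <code>lint</code>, the fully rendered manifests can be inspected locally with <code>helm template</code>, which does not touch the cluster - for example:

```shell
# render the whole chart with the dev overrides to stdout
helm template -f foo/dev.yaml foo

# or render just one template file
helm template -f foo/dev.yaml foo --show-only templates/deployment.yaml
```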
<h3>Validating and Installing a <code>helm</code> chart manually</h3>
For the first installation following a successful <code>lint</code>, you can fully verify that the parameters and rendering are valid by performing an <code>install --dry-run</code>. Note that this process still communicates with the cluster (<code>helm</code> uses the local api server automatically without the need to specify <code>--kube-apiserver</code>) and will fail if the <code>route</code>, <code>service</code> or <code>deployment</code>/<code>deploymentconfig</code> already exists.
<h4>Preparing</h4>
We require at minimum a <code>Chart.yaml</code>, values and template configuration.
<div class=code>
$ mkdir -p helm/templates
$ cat > helm/Chart.yaml << EOF
apiVersion: v2
name: vnra
description: Vanilla Node REST api service in K8
type: application
version: 0.0.1
appVersion: "79de471"
EOF
$ cat > helm/values.yaml << EOF
replicaCount: 1
image:
repository: devhost:5000/foo
pullPolicy: IfNotPresent
tag: "79de471"
autoscaling:
enabled: false
minReplicas: 1
EOF
$ cat > helm/dev.yaml << EOF
env: dev
replicaCount: 1
autoscaling:
enabled: true
maxReplicas: 2
targetCPUUtilizationPercentage: 80
EOF
$ cat > helm/templates/all.yaml << EOF
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: vnra
spec:
ports:
- port: 8080
targetPort: 8080
selector:
deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: vnra
labels:
app: vnra
env: {{ .Values.env }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
deploymentconfig: vnra
strategy:
# Rolling strategy: new pods are brought up before the old ones are scaled down
type: Rolling
template:
metadata:
labels:
deploymentconfig: vnra
app: vnra
spec:
restartPolicy: Always
containers:
- image: "{{ .Values.image.repository }}/{{ .Chart.Name }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
name: {{ .Chart.Name }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8080
protocol: TCP
name: http
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/status
port: 8080
scheme: HTTP
periodSeconds: 10
successThreshold: 1
triggers:
- type: ConfigChange
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: vnra
spec:
to:
kind: Service
name: vnra
EOF
</div>
Note how this differs from the original <code>deployment config</code>, with the parameterisation available for different environments.
<h4>Installing</h4>
<div class=code>
# login to the cluster and project
$ oc login -u developer
$ oc project foo
# supply override values, name and file location of Chart.yaml - note the use of -f helm/dev.yaml that will overlay the values.yaml that is still being implicitly used
$ helm install --dry-run --debug -f helm/dev.yaml vnra ./helm
NAME: vnra
LAST DEPLOYED: Sat Jul 9 10:58:38 2022
NAMESPACE: foo
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
autoscaling:
enabled: true
maxReplicas: 2
targetCPUUtilizationPercentage: 80
env: dev
replicaCount: 1
COMPUTED VALUES:
autoscaling:
enabled: true
maxReplicas: 2
minReplicas: 1
targetCPUUtilizationPercentage: 80
env: dev
image:
pullPolicy: IfNotPresent
repository: devhost:5000/foo
tag: 79de471
replicaCount: 1
HOOKS:
MANIFEST:
---
# Source: vnra/templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: vnra
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: vnra
minReplicas: 1
maxReplicas: 2
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
---
# Source: vnra/templates/all.yaml
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: vnra
spec:
ports:
- port: 8080
targetPort: 8080
selector:
deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: vnra
labels:
app: vnra
env: dev
spec:
selector:
deploymentconfig: vnra
strategy:
# Rolling strategy: new pods are brought up before the old ones are scaled down
type: Rolling
template:
metadata:
labels:
deploymentconfig: vnra
app: vnra
spec:
restartPolicy: Always
containers:
- image: "devhost:5000/foo/vnra:79de471"
name: vnra
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
name: http
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/status
port: 8080
scheme: HTTP
periodSeconds: 10
successThreshold: 1
triggers:
- type: ConfigChange
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: vnra
spec:
to:
kind: Service
name: vnra
</div>
Once we've verified this all looks good, we can perform the installation to the cluster:
<div class=code>
$ helm install -f helm/dev.yaml vnra ./helm
NAME: vnra
LAST DEPLOYED: Sat Jul 9 11:04:35 2022
NAMESPACE: foo
STATUS: deployed
REVISION: 1
TEST SUITE: None
# verify what's been installed on the cluster via helm
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
vnra foo 1 2022-07-09 11:04:35.789973015 +0100 BST deployed vnra-0.0.1 acba902
# and again confirm what the cluster thinks: note the annotations
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vnra ClusterIP 10.217.5.77 <none> 8080/TCP 15h
$ oc describe svc vnra
Name: vnra
Namespace: foo
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: vnra
meta.helm.sh/release-namespace: foo
Selector: deploymentconfig=vnra
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.217.5.77
IPs: 10.217.5.77
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.217.0.7:8080
Session Affinity: None
Events: <none>
</div>
Once <code>helm</code> has been used to install, you may subsequently <code>upgrade</code> or <code>uninstall</code> the release.
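As a sketch, the subsequent lifecycle operations against the same release name look like:

```shell
# roll out a new chart/app version, reusing the same value overlays
helm upgrade -f helm/dev.yaml vnra ./helm

# review the release history and roll back to a previous revision if needed
helm history vnra
helm rollback vnra 1

# remove everything the release installed
helm uninstall vnra
```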
<h3>Installing via a <code>helm</code> repository</h3>
<div class=code>
# in the directory with your Chart.yaml
$ helm package .
Successfully packaged chart and saved it to: /home/ray/dev/Docker/vanilla-node-rest-api/openshift/helm/vnra-0.0.1.tgz
# publish helm chart
$ curl --data-binary @vnra-0.0.1.tgz http://devhost:8089/api/charts
{"saved": true }
$ helm repo update
$ helm search repo chartmuseum
</div>
<h2>Custom live Linux USB image: Working around locked down ThinkPad</h2>
At work we are getting pushed into a hot-desking setup and each member of staff is being moved onto a thin client ThinkPad. Of course a thin client is nothing more than a customised and stripped down Windows 10 build that connects to the firm's virtual desktop infrastructure via a combination of CiscoConnect and VMWare Horizon client.<br>
<br>
Since we're being forced to carry the ThinkPad to and from the hot-desk office, I'm going to use the ThinkPad for my own dev purposes en route. The ThinkPad's BIOS is not locked down so we can get into the boot menu via <code>F12</code> or the BIOS setup via <code>Enter</code>, but setting up my own dev environment is not straightforward.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIWkg9EEpYqAySO3SyICxxvJGomM_S_JzOeqL2qWDiCEaYDgjdPDbIMGpiBuKJykgdUlfBvdrOlcOOlgtWTYpNGmc3uGTj9mVzTmg2X-N9MuYJFCfme-jsYqWm-LrxAYGhV5fzNLXDj13-Q2Scsr-dPGZQCZZfeU_5fhrvuhvgQKmk0ufrSM0/s1600/thinkpad-usb.png" width=95%>
<a name='more'></a><br>
The ThinkPad is a reasonably spec'd contemporary corporate laptop running a quad core 11th Gen Intel i5-1145G7 with 16GB of RAM and a 512GB SSD running some form of Windows 10. In years gone by, we'd simply boot a live USB image and then carve out a slice on the NTFS disk, but with <a href=https://docs.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-overview>Windows 10 we have BitLocker</a> that requires a key to access the partition - this essentially prevents us from partitioning the SSD.<br>
<br>
This doesn't prevent us from achieving our goal though, and simply means we have to continue to use a <a href=https://docs.fedoraproject.org/en-US/quick-docs/creating-and-using-a-live-installation-image/>live USB image with persistence</a>. Whilst a standard live image is useful for basic tasks, we can customise a base image with specific development packages along with creating an overlay system.<br>
<br>
Creating a <a href=https://fedoraproject.org/wiki/Remix>custom live image can be achieved through a <i>remix</i></a> and using a flattened <code>kickstart</code> file:
<div class=code>
https://github.com/whatdoineed2do/fedora-remix
$ ksflatten --config kickstarts/remix-cinnamon.ks --output fedora-kickstarts.ks
$ livemedia-creator \
--resultdir=results/remix --make-iso --no-virt \
--project=Fedora --releasever=35 --ks=fedora-kickstarts.ks
# and test the output image
$ qemu-kvm -m 2560 -cdrom results/remix/images/boot-efi.iso
</div>
Generating the custom image is a lengthy process, even with a wired Gbit network connection running on a core i7-1165G7 machine with an SSD - the generation of the USB image takes at least 80 minutes: downloading the RPMs, installing, building the squashfs filesystem and then generating the final ISO image. Once built, installing the image to a USB stick is still a little tricky. The stock Fedora 35 <code>livecd-iso-to-disk</code> util generated USB sticks that were NOT bootable, although <code>livemedia-creator</code> on the same ISO was successful but will use the entire storage of the USB device.<br>
<br>
Using the <a href=https://github.com/livecd-tools/livecd-tools><code>livecd-tools</code> from the development repo</a> resolves the boot issue and only uses the space of the USB image, allowing us to create additional partitions in the remaining free USB space.
<div class=code>
$ git clone https://github.com/livecd-tools/livecd-tools
$ ./livecd-tools/tools/livecd-iso-to-disk.sh \
--format 8196,ext4 --efi --reset-mbr \
--home-size-mb 2048 --unencrypted-home \
fedora-live-image.iso /dev/sdx
# creates 3 partitions
# force a re-read of the partition table, then create our data partition of the relevant size and write it out
$ partprobe /dev/sdx
$ fdisk /dev/sdx
...
$ mkfs.ext4 /dev/sdx4
</div>
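Since the stick may enumerate as a different device on each machine, labelling the data partition and mounting by label makes it easier to find - a small sketch, where the <code>data</code> label is my own choice:

```shell
# label the freshly created ext4 data partition
sudo e2label /dev/sdx4 data

# from the booted live session, mount it by label
mkdir -p ~/data
sudo mount /dev/disk/by-label/data ~/data
```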
With this step we have a usable live USB stick, with a persistent home directory, that we can use on the ThinkPad.<br>
<br>
Booting the live image from a 16Gb USB stick, we inspect the system to reveal:
<div class=code>
$ inxi -F
System:
Host: localhost-live Kernel: 5.16.18-200.fc35.x86_64 arch: x86_64 bits: 64
Desktop: Cinnamon v: 5.2.7 Distro: Fedora release 35 (Thirty Five)
Machine:
Type: Convertible System: LENOVO product: 20VLS1JC28
v: ThinkPad L13 Yoga Gen 2 serial: >superuser required<
Mobo: LENOVO model: 20VLS1JC28 v: SDK0J40697 WIN
serial: >superuser required< UEFI: LENOVO v: R1PET19W (1.11 )
date: 12/08/2021
Battery:
ID-1: BAT0 charge: 29.1 Wh (63.1%) condition: 46.1/46.0 Wh (100.2%)
volts: 15.7 min: 15.4
CPU:
Info: quad core model: 11th Gen Intel Core i5-1145G7 bits: 64 type: MT MCP
cache: L2: 5 MiB
Speed (MHz): avg: 688 min/max: 400/4400 cores: 1: 631 2: 569 3: 685
4: 938 5: 908 6: 524 7: 585 8: 664
Graphics:
Device-1: Intel TigerLake-LP GT2 [Iris Xe Graphics] driver: i915 v: kernel
Device-2: Chicony ThinkPad T490 Webcam type: USB driver: uvcvideo
Display: x11 server: X.Org v: 1.20.14 driver: X: loaded: modesetting
unloaded: fbdev,vesa gpu: i915 resolution: 1920x1080~60Hz
OpenGL: renderer: Mesa Intel Xe Graphics (TGL GT2) v: 4.6 Mesa 21.3.8
Audio:
Device-1: Intel Tiger Lake-LP Smart Sound Audio driver: snd_hda_intel
Sound Server-1: ALSA v: k5.16.18-200.fc35.x86_64 running: yes
Sound Server-2: PipeWire v: 0.3.49 running: yes
Network:
Device-1: Intel Wi-Fi 6 AX201 driver: iwlwifi
IF: wlp0s20f3 state: up mac: 84:14:xx:xx:xx:xx
Device-2: Intel Ethernet I219-LM driver: e1000e
IF: enp0s31f6 state: down mac: 48:2a:xx:xx:xx:xx
Bluetooth:
Device-1: Intel AX201 Bluetooth type: USB driver: btusb
Report: bt-adapter ID: hci0 state: up address: 84:14:xx:xx:xx:xx
Drives:
Local Storage: total: 491.5 GiB used: 15.36 GiB (3.1%)
ID-1: /dev/nvme0n1 vendor: Toshiba model: N/A size: 476.94 GiB
ID-2: /dev/sda type: USB vendor: SanDisk model: Cruzer Blade
size: 14.56 GiB
Partition:
ID-1: / size: 7.78 GiB used: 7.04 GiB (90.5%) fs: ext4 dev: /dev/dm-0
Swap:
ID-1: swap-1 type: zram size: 8 GiB used: 0 KiB (0.0%) dev: /dev/zram0
Sensors:
System Temperatures: cpu: 1.0 C mobo: N/A
Fan Speeds (RPM): cpu: 33280 fan-1:
Info:
Processes: 286 Uptime: 14m Memory: 15.33 GiB used: 2.54 GiB (16.6%)
Shell: Bash inxi: 3.3.14
</div>
The system usage above was captured with firefox running, but we can see there are plenty of resources available.
<h3>VMWare Horizon</h3>
It can be useful to run the live Linux USB image even when at the office, such as when there's a need for a lab or isolated environment but still with easy access to the work infrastructure. Obviously, we can do this by running VMWare Horizon off the live image - having the local dependencies for installing VMWare Horizon is the easiest way to achieve this, rather than embedding it into the live image itself, particularly if the remote VMWare Horizon server requirements change.<br>
<br>
On Fedora 35, the latest version of <code>python</code> (version 3.10) is incompatible with the VMWare Horizon installer, but this is an easy fix.
<div class=code>
$ mkdir ~/vmware/
# pull down compat python
$ dnf --downloadonly --downloaddir=~/vmware install python3.9
# <a href=https://customerconnect.vmware.com/en/downloads/info/slug/desktop_end_user_computing/vmware_horizon_clients/horizon_8>Obtain VMware</a> with <a href=https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/2203/horizon-client-linux-installation/GUID-A5A6332F-1DEC-4D77-BD6E-1362596A2E76.html#GUID-A5A6332F-1DEC-4D77-BD6E-1362596A2E76>installation parameters here</a>
</div>
To aid usability, create an install/run script that can also be used by a desktop icon launcher - we add <a href=https://wiki.archlinux.org/title/Desktop_notifications#Bash>desktop notifications</a> via a <code>libnotify</code> util.
<div class=code>
$ cat > ~/vmware/vmware-view.sh << EOF
#!/bin/bash
WHERE=$(dirname $0)
which vmware-view 2>&1 >/dev/null
if [ $? -ne 0 ]; then
notify-send "VMware Horizon" "installing..."
TK_8=${WHERE}/tk-8.6.10-7.fc35.x86_64.rpm
PYTHON_39=${WHERE}/python3.9-3.9.12-1.fc35.x86_64.rpm
VMHORIZON=${WHERE}/VMware-Horizon-Client-2203-8.5.0-19586897.x64.bundle
# sudo dnf -y install python3.9 && cd /usr/bin && rm python3 && ln -s python3.9 python3
echo "installing Python 3.9"
sudo rpm -i ${TK_8} ${PYTHON_39}
[ $? -ne 0 ] && notify-send -u critical "VMware Horizon" "failed to install deps..." && exit -1
(cd /usr/bin && sudo ln -sf python3.9 python3)
echo "installing VMWare Horizon" && \
sudo env TERM=dumb VMWARE_EULAS_AGREED=yes \
${VMHORIZON} \
--console --required \
--set-setting vmware-horizon-integrated-printing vmipEnable no \
--set-setting vmware-horizon-usb usbEnable yes \
--set-setting vmware-horizon-smartcard smartcardEnable no \
--set-setting vmware-horizon-rtav rtavEnable yes \
--set-setting vmware-horizon-tsdr tsdrEnable no \
--set-setting vmware-horizon-scannerclient scannerEnable no \
--set-setting vmware-horizon-serialportclient serialportEnable no \
--set-setting vmware-horizon-mmr mmrEnable yes \
--set-setting vmware-horizon-media-provider mediaproviderEnable no \
--set-setting vmware-horizon-teams-optimization teamsOptimizationEnable yes
(cd /usr/bin && sudo ln -sf python3.10 python3)
which vmware-view 2>&1 >/dev/null
[ $? -ne 0 ] && notify-send -u critical "VMware Horizon" "failed to install..." && exit -1
fi
notify-send "VMware Horizon" "starting..."
exec vmware-view "$@"
EOF
$ chmod a+x ~/vmware/vmware-view.sh
</div>
Once successfully installed, we can further tailor <a href=https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/2203/horizon-client-linux-installation/GUID-D4D962F3-0EE0-4E5C-BC0C-6BE452FF0601.html>how VMWare Horizon runs</a> as we need:
<div class=code>
# Add a desktop icon
$ cat > ~/Desktop/VMWare\ Horizon.desktop << EOF
[Desktop Entry]
Name=VMWare Horizon
#Exec=vmware-view --serverURL==vmdesktop.foo.com --tokenUserName=foobar --userName=foobar --password=letmein123 --domainName="domain.foo.com" --desktopSize=large
Exec=~/vmware/vmware-view.sh --tokenUserName=foobar
Comment=
Terminal=false
Icon=vmware
Type=Application
EOF
# See <a href=https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/2203/horizon-client-linux-installation/GUID-AB6F0B4D-03DD-4E7A-AE16-BAB77CE4D42D.html>configuration reference</a> - tokenUserName is not available as a default
$ cat > ~/.vmware/view-preferences << EOF
view.autoConnectBroker = 'vmdesktop.foo.com'
view.defaultBroker = 'vmdesktop.foo.com'
view.defaultUser = 'foobar'
view.defaultDomain = 'domain.foo.com'
view.defaultPassword = 'letmein123'
view.defaultDesktopSize = '3'
view.deviceID = '55:44:33:22:11:00'
EOF
$ chmod 600 ~/.vmware/view-preferences
</div>
Since this installation is not persisted, it will run in RAM. Similar steps can be applied for Zoom which at this point has no non-standard dependencies.
<h3>Further Customisations</h3>
Upon restart (first time is a little slower) you can set the keyboard layout to non-US format.
<div class=code>
# start firefox (slow first time) and force the browser to use mem for cache and the vaapi h/w accelerated backend for video playback
about:config
-> browser.cache.disk.enable = false
-> media.ffmpeg.vaapi.enabled = true
# fix prompt
$ echo 'export PS1="[\j] \u: \e[92m\]\w\e[0m\] $ "' >> ~/.bashrc
# disable SELinux
$ sudo setenforce 0
</div>
<h3>So, How does it run</h3>
In general it works well. Booting to the (auto login) desktop takes about 65 seconds from the point of power on, hitting F12 and selecting the kernel from <code>grub</code>.<br>
<br>
I use this en-route to/from work on my own non-work coding projects, since I'd be carrying another laptop anyway. Using the live image off a USB stick works well: I simply put a few hair ties together and hook the USB stick to the screen hinge so it's always attached/available. Typically I'm working on C/C++ projects that live on the <code>ext4</code> partition. As a lot of small files are written when compiling etc, you do notice performance is a little slow at times. Browsing using firefox is not bad, but again the small writes aren't great even with the disk cache all in RAM as above.<br>
<br>
Running IntelliJ is very painful and slow, but this is pretty much because of the number of apparent disk reads/writes/indexing it makes to its underlying caches. Compiling java directly is fine - even with the persistent home directory, I've moved the maven/gradle caches onto the <code>ext4</code> partition, although as with the rest of the tooling, it never seems to be a RAM exhaustion problem - the machine's 16GB of RAM seemingly more than enough.<br>
<br>
Of course, we could put this image onto an external SSD but you'd have to be lugging around a SSD and at which point we might as well go all out and install Linux to the external SSD.
<h3>Custom Kickstart file</h3>
Generated based on https://github.com/tierratelematics/fedora-remix
<div class=code>
# Generated by pykickstart v3.34
#version=DEVEL
# X Window System configuration information
xconfig --startxonboot
# Keyboard layouts
keyboard 'gb'
# Root password
rootpw --iscrypted --lock locked
# System language
lang en_GB.UTF-8
# Shutdown after installation
shutdown
# Network information
network --bootproto=dhcp --device=link --activate
# Firewall configuration
firewall --enabled --service=mdns
# Use network installation
url --mirrorlist="https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever&arch=$basearch"
repo --name="fedora" --mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever&arch=$basearch
repo --name="updates" --mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-f$releasever&arch=$basearch
repo --name="fedora-cisco-openh264" --metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-$releasever&arch=$basearch
repo --name="rpmfusion-free" --metalink=https://mirrors.rpmfusion.org/metalink?repo=free-fedora-$releasever&arch=$basearch
repo --name="rpmfusion-free-updates" --metalink=https://mirrors.rpmfusion.org/metalink?repo=free-fedora-updates-released-$releasever&arch=$basearch
repo --name="rpmfusion-nonfree" --metalink=https://mirrors.rpmfusion.org/metalink?repo=nonfree-fedora-$releasever&arch=$basearch
repo --name="rpmfusion-nonfree-updates" --metalink=https://mirrors.rpmfusion.org/metalink?repo=nonfree-fedora-updates-released-$releasever&arch=$basearch
repo --name="rpmfusion-free-tainted" --metalink=https://mirrors.rpmfusion.org/metalink?repo=free-fedora-tainted-$releasever&arch=$basearch
repo --name="rpmfusion-nonfree-tainted" --metalink=https://mirrors.rpmfusion.org/metalink?repo=nonfree-fedora-tainted-$releasever&arch=$basearch
# System timezone
timezone Europe/London
# SELinux configuration
selinux --enforcing
# System services
services --disabled="sshd,NetworkManager-wait-online" --enabled="NetworkManager"
# System bootloader configuration
bootloader --location=none
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all
# Disk partitioning information
part / --fstype="ext4" --size=8500
part / --size=8192
%post
# FIXME: it'd be better to get this installed from a package
cat > /etc/rc.d/init.d/livesys << EOF
#!/bin/bash
#
# live: Init script for live image
#
# chkconfig: 345 00 99
# description: Init script for live image.
### BEGIN INIT INFO
# X-Start-Before: display-manager chronyd
### END INIT INFO
. /etc/init.d/functions
if ! strstr "\`cat /proc/cmdline\`" rd.live.image || [ "\$1" != "start" ]; then
exit 0
fi
if [ -e /.liveimg-configured ] ; then
configdone=1
fi
exists() {
which \$1 >/dev/null 2>&1 || return
\$*
}
livedir="LiveOS"
for arg in \`cat /proc/cmdline\` ; do
if [ "\${arg##rd.live.dir=}" != "\${arg}" ]; then
livedir=\${arg##rd.live.dir=}
continue
fi
if [ "\${arg##live_dir=}" != "\${arg}" ]; then
livedir=\${arg##live_dir=}
fi
done
# enable swaps unless requested otherwise
swaps=\`blkid -t TYPE=swap -o device\`
if ! strstr "\`cat /proc/cmdline\`" noswap && [ -n "\$swaps" ] ; then
for s in \$swaps ; do
action "Enabling swap partition \$s" swapon \$s
done
fi
if ! strstr "\`cat /proc/cmdline\`" noswap && [ -f /run/initramfs/live/\${livedir}/swap.img ] ; then
action "Enabling swap file" swapon /run/initramfs/live/\${livedir}/swap.img
fi
mountPersistentHome() {
# support label/uuid
if [ "\${homedev##LABEL=}" != "\${homedev}" -o "\${homedev##UUID=}" != "\${homedev}" ]; then
homedev=\`/sbin/blkid -o device -t "\$homedev"\`
fi
# if we're given a file rather than a blockdev, loopback it
if [ "\${homedev##mtd}" != "\${homedev}" ]; then
# mtd devs don't have a block device but get magic-mounted with -t jffs2
mountopts="-t jffs2"
elif [ ! -b "\$homedev" ]; then
loopdev=\`losetup -f\`
if [ "\${homedev##/run/initramfs/live}" != "\${homedev}" ]; then
action "Remounting live store r/w" mount -o remount,rw /run/initramfs/live
fi
losetup \$loopdev \$homedev
homedev=\$loopdev
fi
# if it's encrypted, we need to unlock it
if [ "\$(/sbin/blkid -s TYPE -o value \$homedev 2>/dev/null)" = "crypto_LUKS" ]; then
echo
echo "Setting up encrypted /home device"
plymouth ask-for-password --command="cryptsetup luksOpen \$homedev EncHome"
homedev=/dev/mapper/EncHome
fi
# and finally do the mount
mount \$mountopts \$homedev /home
# if we have /home under what's passed for persistent home, then
# we should make that the real /home. useful for mtd device on olpc
if [ -d /home/home ]; then mount --bind /home/home /home ; fi
[ -x /sbin/restorecon ] && /sbin/restorecon /home
if [ -d /home/liveuser ]; then USERADDARGS="-M" ; fi
}
findPersistentHome() {
for arg in \`cat /proc/cmdline\` ; do
if [ "\${arg##persistenthome=}" != "\${arg}" ]; then
homedev=\${arg##persistenthome=}
fi
done
}
if strstr "\`cat /proc/cmdline\`" persistenthome= ; then
findPersistentHome
elif [ -e /run/initramfs/live/\${livedir}/home.img ]; then
homedev=/run/initramfs/live/\${livedir}/home.img
fi
# if we have a persistent /home, then we want to go ahead and mount it
if ! strstr "\`cat /proc/cmdline\`" nopersistenthome && [ -n "\$homedev" ] ; then
action "Mounting persistent /home" mountPersistentHome
fi
if [ -n "\$configdone" ]; then
exit 0
fi
# add liveuser user with no passwd
action "Adding live user" useradd \$USERADDARGS -c "Live System User" liveuser
passwd -d liveuser > /dev/null
usermod -aG wheel liveuser > /dev/null
# Remove root password lock
passwd -d root > /dev/null
# turn off firstboot for livecd boots
systemctl --no-reload disable firstboot-text.service 2> /dev/null || :
systemctl --no-reload disable firstboot-graphical.service 2> /dev/null || :
systemctl stop firstboot-text.service 2> /dev/null || :
systemctl stop firstboot-graphical.service 2> /dev/null || :
# don't use prelink on a running live image
sed -i 's/PRELINKING=yes/PRELINKING=no/' /etc/sysconfig/prelink &>/dev/null || :
# turn off mdmonitor by default
systemctl --no-reload disable mdmonitor.service 2> /dev/null || :
systemctl --no-reload disable mdmonitor-takeover.service 2> /dev/null || :
systemctl stop mdmonitor.service 2> /dev/null || :
systemctl stop mdmonitor-takeover.service 2> /dev/null || :
# don't start cron/at as they tend to spawn things which are
# disk intensive that are painful on a live image
systemctl --no-reload disable crond.service 2> /dev/null || :
systemctl --no-reload disable atd.service 2> /dev/null || :
systemctl stop crond.service 2> /dev/null || :
systemctl stop atd.service 2> /dev/null || :
# turn off abrtd on a live image
systemctl --no-reload disable abrtd.service 2> /dev/null || :
systemctl stop abrtd.service 2> /dev/null || :
# Don't sync the system clock when running live (RHBZ #1018162)
sed -i 's/rtcsync//' /etc/chrony.conf
# Mark things as configured
touch /.liveimg-configured
# add static hostname to work around xauth bug
# https://bugzilla.redhat.com/show_bug.cgi?id=679486
# the hostname must be something else than 'localhost'
# https://bugzilla.redhat.com/show_bug.cgi?id=1370222
hostnamectl set-hostname "localhost-live"
EOF
# bah, hal starts way too late
cat > /etc/rc.d/init.d/livesys-late << EOF
#!/bin/bash
#
# live: Late init script for live image
#
# chkconfig: 345 99 01
# description: Late init script for live image.
. /etc/init.d/functions
if ! strstr "\`cat /proc/cmdline\`" rd.live.image || [ "\$1" != "start" ] || [ -e /.liveimg-late-configured ] ; then
exit 0
fi
exists() {
which \$1 >/dev/null 2>&1 || return
\$*
}
touch /.liveimg-late-configured
# read some variables out of /proc/cmdline
for o in \`cat /proc/cmdline\` ; do
case \$o in
ks=*)
ks="--kickstart=\${o#ks=}"
;;
xdriver=*)
xdriver="\${o#xdriver=}"
;;
esac
done
# if liveinst or textinst is given, start anaconda
if strstr "\`cat /proc/cmdline\`" liveinst ; then
plymouth --quit
/usr/sbin/liveinst \$ks
fi
if strstr "\`cat /proc/cmdline\`" textinst ; then
plymouth --quit
/usr/sbin/liveinst --text \$ks
fi
# configure X, allowing user to override xdriver
if [ -n "\$xdriver" ]; then
cat > /etc/X11/xorg.conf.d/00-xdriver.conf <<FOE
Section "Device"
Identifier "Videocard0"
Driver "\$xdriver"
EndSection
FOE
fi
EOF
chmod 755 /etc/rc.d/init.d/livesys
/sbin/restorecon /etc/rc.d/init.d/livesys
/sbin/chkconfig --add livesys
chmod 755 /etc/rc.d/init.d/livesys-late
/sbin/restorecon /etc/rc.d/init.d/livesys-late
/sbin/chkconfig --add livesys-late
# enable tmpfs for /tmp
systemctl enable tmp.mount
# make it so that we don't do writing to the overlay for things which
# are just tmpdirs/caches
# note https://bugzilla.redhat.com/show_bug.cgi?id=1135475
cat >> /etc/fstab << EOF
vartmp /var/tmp tmpfs defaults 0 0
EOF
# work around for poor key import UI in PackageKit
rm -f /var/lib/rpm/__db*
echo "Packages within this LiveCD"
rpm -qa --qf '%{size}\t%{name}-%{version}-%{release}.%{arch}\n' |sort -rn
# Note that running rpm recreates the rpm db files which aren't needed or wanted
rm -f /var/lib/rpm/__db*
# go ahead and pre-make the man -k cache (#455968)
/usr/bin/mandb
# make sure there aren't core files lying around
rm -f /core*
# remove random seed, the newly installed instance should make it's own
rm -f /var/lib/systemd/random-seed
# convince readahead not to collect
# FIXME: for systemd
echo 'File created by kickstart. See systemd-update-done.service(8).' \
| tee /etc/.updated >/var/.updated
# Drop the rescue kernel and initramfs, we don't need them on the live media itself.
# See bug 1317709
rm -f /boot/*-rescue*
# Disable network service here, as doing it in the services line
# fails due to RHBZ #1369794
/sbin/chkconfig network off
# Remove machine-id on pre generated images
rm -f /etc/machine-id
touch /etc/machine-id
%end
%post --nochroot
# For livecd-creator builds only (lorax/livemedia-creator handles this directly)
if [ -n "$LIVE_ROOT" ]; then
cp "$INSTALL_ROOT"/usr/share/licenses/*-release-common/* "$LIVE_ROOT/"
# only installed on x86, x86_64
if [ -f /usr/bin/livecd-iso-to-disk ]; then
mkdir -p "$LIVE_ROOT/LiveOS"
cp /usr/bin/livecd-iso-to-disk "$LIVE_ROOT/LiveOS"
fi
fi
%end
%post
# cinnamon configuration
# create /etc/sysconfig/desktop (needed for installation)
cat > /etc/sysconfig/desktop <<EOF
PREFERRED=/usr/bin/cinnamon-session
DISPLAYMANAGER=/usr/sbin/lightdm
EOF
cat >> /etc/rc.d/init.d/livesys << EOF
# set up lightdm autologin
sed -i 's/^#autologin-user=.*/autologin-user=liveuser/' /etc/lightdm/lightdm.conf
sed -i 's/^#autologin-user-timeout=.*/autologin-user-timeout=0/' /etc/lightdm/lightdm.conf
#sed -i 's/^#show-language-selector=.*/show-language-selector=true/' /etc/lightdm/lightdm-gtk-greeter.conf
# set Cinnamon as default session, otherwise login will fail
sed -i 's/^#user-session=.*/user-session=cinnamon/' /etc/lightdm/lightdm.conf
# no updater applet in live environment
rm -f /etc/xdg/autostart/org.mageia.dnfdragora-updater.desktop
# this goes at the end after all other changes.
chown -R liveuser:liveuser /home/liveuser
restorecon -R /home/liveuser
EOF
%end
%post
echo ""
echo "POST desktop-base ************************************"
echo ""
# Antialiasing by default.
# Set Noto fonts as preferred family.
cat > /etc/fonts/local.conf << EOF_FONTS
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Settings for better font rendering -->
<match target="font">
<edit mode="assign" name="rgba"><const>rgb</const></edit>
<edit mode="assign" name="hinting"><bool>true</bool></edit>
<edit mode="assign" name="hintstyle"><const>hintfull</const></edit>
<edit mode="assign" name="antialias"><bool>true</bool></edit>
<edit mode="assign" name="lcdfilter"><const>lcddefault</const></edit>
</match>
<!-- Local default fonts -->
<!-- Serif faces -->
<alias>
<family>serif</family>
<prefer>
<family>Noto Serif</family>
<family>DejaVu Serif</family>
<family>Liberation Serif</family>
<family>Times New Roman</family>
<family>Nimbus Roman No9 L</family>
<family>Times</family>
</prefer>
</alias>
<!-- Sans-serif faces -->
<alias>
<family>sans-serif</family>
<prefer>
<family>Noto Sans</family>
<family>DejaVu Sans</family>
<family>Liberation Sans</family>
<family>Arial</family>
<family>Nimbus Sans L</family>
<family>Helvetica</family>
</prefer>
</alias>
<!-- Monospace faces -->
<alias>
<family>monospace</family>
<prefer>
<family>Noto Sans Mono Condensed</family>
<family>DejaVu Sans Mono</family>
<family>Liberation Mono</family>
<family>Courier New</family>
<family>Andale Mono</family>
<family>Nimbus Mono L</family>
</prefer>
</alias>
</fontconfig>
EOF_FONTS
# Set a colored prompt
cat > /etc/profile.d/color-prompt.sh << EOF_PROMPT
## Colored prompt
if [ -n "\$PS1" ]; then
if [[ "\$TERM" == *256color ]]; then
if [ \${UID} -eq 0 ]; then
PS1='\[\e[91m\]\u@\h \[\e[93m\]\W\[\e[0m\]\\$ '
else
PS1='\[\e[92m\]\u@\h \[\e[93m\]\W\[\e[0m\]\\$ '
fi
else
if [ \${UID} -eq 0 ]; then
PS1='\[\e[31m\]\u@\h \[\e[33m\]\W\[\e[0m\]\\$ '
else
PS1='\[\e[32m\]\u@\h \[\e[33m\]\W\[\e[0m\]\\$ '
fi
fi
fi
EOF_PROMPT
cat > /usr/sbin/backup_for_upgrade.sh << 'BACKUPSCRIPT_EOF'
#!/bin/bash
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
USER="$(logname)"
MOUNTPOINT_DEST="/home"
DEST="/home/backup-$USER@$HOSTNAME-$(date '+%Y%m%d_%H%M%S')"
PATHS_TO_BACKUP=(
usr/local
etc
root
)
mkdir -p "$DEST"
cd "$DEST" || exit 1
umask 0066
echo "Saving lists of installed packages"
id > id.txt
dnf list installed > dnf_list_installed.txt
rpm -qa > rpm-qa.txt
flatpak list > flatpak_list.txt
snap list > snap_list.txt
# backup folders
for path in "${PATHS_TO_BACKUP[@]}"
do
echo "Backing up $path"
tar cjpf "backup-$(echo $path | tr / _).tar.bz2" -C / "$path"
done
echo "All done. Files are in $DEST"
BACKUPSCRIPT_EOF
chmod +x /usr/sbin/backup_for_upgrade.sh
semanage fcontext -a -t unconfined_exec_t '/usr/local/sbin/firstboot'
cat > /usr/local/sbin/firstboot << 'FIRSTBOOT_EOF'
#!/bin/bash
extcode=0
shopt -s nullglob
for src in /usr/local/sbin/firstboot_*.sh; do
echo "firstboot: running $src"
"$src"
if [ $? -ne 0 ]; then
mv "$src" "$src.failed"
echo "Script failed! Saved as: $src.failed"
extcode=1
else
echo "Script completed"
rm "$src"
fi
done
if [[ $extcode == 0 ]]; then
# all scripts succeeded: drop the SELinux context rule along with the runner
semanage fcontext -d -t unconfined_exec_t '/usr/local/sbin/firstboot'
rm /usr/local/sbin/firstboot
fi
exit $extcode
FIRSTBOOT_EOF
chmod +x /usr/local/sbin/firstboot
cat > /usr/local/sbin/firstboot_anaconda.sh << 'ANACONDA_EOF'
#!/bin/bash
dnf remove -y anaconda
ANACONDA_EOF
chmod +x /usr/local/sbin/firstboot_anaconda.sh
cat > /usr/local/sbin/firstboot_noatime.sh << 'NOATIME_EOF'
#!/bin/bash
gawk -i inplace '/^[^#]/ {if (($3 == "ext4" || $3 == "btrfs") && !match($4, /noatime/)) { $4=$4",noatime" } } 1' /etc/fstab
NOATIME_EOF
chmod +x /usr/local/sbin/firstboot_noatime.sh
%end
%post
echo ""
echo "POST desktop-cinnamon ************************************"
echo ""
cat > /etc/mpv/mpv.conf << EOF
hwdec=vaapi
EOF
%end
%post
echo ""
echo "POST nonfree **************************************"
echo ""
# Enable Cisco Open H.264 repository
dnf config-manager --set-enabled fedora-cisco-openh264
cat > /usr/local/sbin/firstboot_flathub.sh << 'FLATHUB_EOF'
#!/bin/bash
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
FLATHUB_EOF
chmod +x /usr/local/sbin/firstboot_flathub.sh
%end
%post
echo ""
echo "POST development-base ************************************"
echo ""
cat > /etc/sysctl.d/10-remix-inotify.conf << INOTIFY_EOF
# remix - increase max inotify watches
fs.inotify.max_user_watches=524288
INOTIFY_EOF
mkdir /net
mkdir -p /export/public
chmod 1775 /export/public
cat > /etc/exports << EOF
/export/public *(ro,sync,root_squash)
EOF
cat >> /etc/vimrc << EOF
set ai sw=4
:nnoremap <CR> :nohlsearch<CR>/<BS>
EOF
cat >> /etc/inputrc << EOF
set show-all-if-ambiguous on
set editing-mode vi
EOF
cat >> /etc/profile.d/colorls.sh << EOF
alias ls='ls -F --color=auto' 2>/dev/null
EOF
%end
%packages
@^cinnamon-desktop-environment
@anaconda-tools
@development-tools
@networkmanager-submodules
@x86-baremetal-tools
aajohan-comfortaa-fonts
aircrack-ng
alsa-lib-devel
anaconda
anaconda-install-env-deps
anaconda-live
autoconf
autofs
automake
avahi-devel
bison
chkconfig
dracut-live
exfat-utils
fedora-release-cinnamon
fedora-workstation-repositories
ffmpeg-devel
file-roller-nautilus
flex
fuse-exfat
g++
gawk
gcc
gdb
gettext-devel
git
glibc-all-langpacks
google-noto-sans-fonts
google-noto-sans-mono-fonts
google-noto-serif-fonts
gparted
gperf
gstreamer1-libav
gstreamer1-plugins-bad-free
gstreamer1-plugins-bad-freeworld
gstreamer1-plugins-good
gstreamer1-plugins-ugly
gstreamer1-plugins-ugly-free
gstreamer1-vaapi
initscripts
intel-media-driver
inxi
jq
json-c-devel
kernel
kernel-modules
kernel-modules-extra
libconfuse-devel
libcurl-devel
liberation-s*-fonts
libevent-devel
libgcrypt-devel
libplist-devel
libreoffice-langpack-en
libsodium-devel
libtool
libunistring-devel
libva
libva-utils
libwebsockets-devel
mpv
mxml-devel
nfs-utils
nodejs
npm
ntpsec
pkgconfig
podman
protobuf-c-devel
rhythmbox
rpmfusion-*-appstream-data
rpmfusion-free-release
rpmfusion-free-release-tainted
rpmfusion-nonfree-release
rpmfusion-nonfree-release-tainted
seahorse
seahorse-nautilus
sqlite
sqlite-devel
strace
tcpdump
telnet
unar
unrar
valgrind
vim-default-editor
vim-enhanced
wireshark
zlib-devel
-abrt*
-device-mapper-multipath
-fcoe-utils
-fedora-release-notes
-hexchat
-pidgin
-rsyslog
-sendmail
-thunderbird
-xreader
%end
</div>
Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-27670858500913902572022-02-27T15:30:00.006+00:002022-05-09T13:15:56.564+01:00Ardour DAW on FedoraWhilst <code>audacity</code> is a well-known and simple sound editor it has some limitations when compared to digital audio workstations (DAWs): this space can be filled with <a href=https://ardour.org/>ardour</a> and a number of basic plugins.<br>
<img src="https://blogger.googleusercontent.com/img/a/AVvXsEjbT_45jY2W2xnlSmE_PIRH0V4D6jARxBofACFbuslUF4CxgcT6RAkrE1UAK23RmtipAtXxiWS_0z9GHCdqK8q4kBlVCQjOcwgAoJrRQd3OAq2nMA_9sO9A2T5cw1ZG41MRhzT7fegcB9eGN9kEee70trt32ppJiiXams0GOs_M05JXObuHzgc=s320" width=95%><br>
<a name='more'></a>
To install on Fedora 35, we need the DAW itself, with its feature set expanded by the relevant LV2 plugins that can be used as filters within <code>ardour</code>:
<div class=code>
$ dnf -x lv2-\*devel install ardour6 calf lv2-calf-plugins lv2-\* lsp-plugins-lv2
# a noise reduction filter/plugin
$ dnf install fftw3-devel meson ninja-build
$ git clone https://github.com/lucianodato/noise-repellent && cd noise-repellent && \
  meson build --buildtype release --prefix /usr && \
  ninja -C build install
</div>
Whilst <code>ardour</code> provides basic <a href=http://brunoruviaro.github.io/ardour4-tutorial/>functionality like multiple audio tracks and cross fading/fade in/out</a>, the filters and plugins provide very useful tools like <a href=https://www.youtube.com/watch?v=ikPR1b9pbqQ&list=LL&index=1&t=63s>audio compressors, gates and EQ filters</a> to process audio.<br>
<br>
Fedora has adopted <a href=https://pipewire.org/><code>pipewire</code></a> to simplify audio handling (maintaining a subset of ALSA/pulseaudio functionality) and to remove the need for the user to manually start/stop the <a href=https://jackaudio.org/><code>jack</code></a> daemon. However, you may still find warnings:
<blockquote>
WARNING: Your system has a limit for maximum amount of locked memory. This might cause Ardour to run out of memory before your system runs out of memory.<br>
<br>
You can view the memory limit with 'ulimit -l', and it is normally controlled by /etc/security/limits.conf
</blockquote>
This can be resolved by raising the limits for the <code>audio</code> group and adding yourself to it:
<div class=code>
$ cat > /etc/security/limits.d/audio.conf << EOF
@audio - rtprio 95
@audio - memlock unlimited
EOF
# if the audio group does not already exist: sudo groupadd audio
$ sudo usermod -a -G audio $(id -un)
</div>
Start to <a href=https://manual.ardour.org/toc/>familiarise yourself</a> with <a href=http://brunoruviaro.github.io/ardour4-tutorial/creating-a-track-or-bus/>buses</a>, <a href=http://brunoruviaro.github.io/ardour4-tutorial/understanding-routing/>routing</a> and <a href=http://brunoruviaro.github.io/ardour4-tutorial/recording-audio/>arming tracks</a> etc and you will be good to go.<br>
<h3>Pipewire</h3>
One complication with Fedora 34 onwards is that the backend sound system moved to <a href=https://pipewire.org/>Pipewire</a> and this has shown up a number of issues whilst using Ardour - the biggest I've faced is choppy audio that floods <code>journalctl</code> with out-of-sync errors. A <a href=https://forum.manjaro.org/t/howto-troubleshoot-crackling-in-pipewire/82442>potential workaround</a> on various forums <a href=https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Config-PipeWire#setting-buffer-size-quantum>suggests increasing the <code>min-quantum</code></a> value:
<div class=code>
# verify current load
$ pw-top
# powers of 2
$ pw-metadata -n settings 0 clock.min-quantum 2048
# if works, make system wide
$ sudo cp -r /usr/share/pipewire/ /etc
$ vi /etc/pipewire/pipewire.conf
...
context.properties = {
...
## UPDATE ##
default.clock.allowed-rates = [ 44100, 48000 ]
default.clock.quantum = 2048
default.clock.max-quantum = 8192
}
</div>
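The quantum is a buffer size in samples, so the extra latency it introduces is simply quantum divided by sample rate. A quick check of the figures used above (plain arithmetic, not a PipeWire command):

```shell
# latency (ms) = quantum / sample-rate * 1000, for the values set above
awk 'BEGIN { printf "quantum 2048 @ 48000 Hz = %.1f ms\n", 2048 / 48000 * 1000 }'
awk 'BEGIN { printf "quantum 8192 @ 48000 Hz = %.1f ms\n", 8192 / 48000 * 1000 }'
```

A larger quantum trades latency for resilience against the crackling - around 43 ms is noticeable when monitoring live input, so back the value off again if you are recording through Ardour.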
The <a href=https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Troubleshooting>Pipewire development repo's troubleshooting guide</a> also suggests increasing headroom:
<div class=code>
$ mkdir -p ~/.config/wireplumber/main.lua.d
$ cp /usr/share/wireplumber/main.lua.d/50-alsa-config.lua ~/.config/wireplumber/main.lua.d/
$ vi ~/.config/wireplumber/main.lua.d/50-alsa-config.lua
...
["api.alsa.period-size"] = 2048,
["api.alsa.headroom"] = 8192,
$ systemctl --user restart wireplumber pipewire pipewire-pulse
</div>
<h3>Focusrite Scarlett 2i2 3rd Gen</h3>
Audio files can be captured through various means but you may also need to capture audio yourself, for example via a microphone for voice or instruments - for this you will need a (USB) audio interface. As with many devices used under Linux there is a possibility of missing support, but a large number of Focusrite devices have been supported in the mainline kernel for a couple of years.<br>
<img src="https://blogger.googleusercontent.com/img/a/AVvXsEgfx2iZiYVezewFlwieFuK-Jf55UctysLnf_sizGoXU51Bt_2iWkAeYRcF1Lbg58ZeVGG5gz1UsuwlIOr4rEKbRZ2VKuXGr1IK5yVldwtGayW8cTRuLRQUTo0xLviWn66pyoCAwsE32h0wsu8v9rmeD6qBggZyjiZXBcfjSCS1LIodDJLnt86w" width=95%><br>
In particular, the Focusrite Scarlett 2i2 (providing 2x inputs with phantom power over a USB connection) has been supported since Fedora 32 and its 5.11.x kernels. For the 2i2 we can also add <code>alsamixer</code> support:
<div class=code>
$ cat > /etc/modprobe.d/snd_usb_audio.conf << EOF
options snd_usb_audio vid=0x1235 pid=0x8210 device_setup=1
EOF
</div>
Furthermore:
<blockquote>
In order to force your Scarlett 2i2 out of MSD mode without first registering it, connect it to your host computer and press and hold the 48V button for five seconds. This will ensure that your Scarlett 2i2 has full functionality [allowing sample rates up to 192 kHz rather than being limited to 48 kHz as it arrives out of the box].
</blockquote>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-26311757248516939862021-12-15T18:31:00.009+00:002022-06-25T11:51:43.083+01:00Dell TB16 docking station with Dell XPS/Fedora 35With the Thunderbolt enabled laptop there are now more options available to integrate with your different workspaces; a docking station is one that has been a business staple for a long time but a lot of enterprise solutions are pricey but legacy and decontinued solutions exist, such as the <a href=https://www.dell.com/support/home/en-uk/product-support/product/dell-thunderbolt-dock-tb16/overview>Dell TB16</a> but do they perform?<br>
<img src="https://blogger.googleusercontent.com/img/a/AVvXsEgUXUfw8FB2sGMt3h8y1Ec9Hb_glJbstxAHEtMdVyeOHuOFAgbgcx2N3-v0w6sYLoaMA02VHeJFJgrCkL8M__B2hPSJnGjES9dP2wNQUIRIMGN3jRisdT0PYt1Mo6upmLK0We9eS4XAba-H62z-ys8zoQ33LmZlarwgZ7JpfiwsxyddzE_nP14" width="95%">
<a name='more'></a>
<br>
Using a Dell XPS 9305 we have already seen options for expanding connectivity and connecting to peripherals that we take for granted: wired networks, usb mice and external monitors. We have seen that a <a href=https://whatdoineed2do.blogspot.com/2021/11/fedora-35-on-dell-xps13-9305.html>Dell DA310 and other hdmi-enabled multi mini UGreen</a> are successful partners.<br>
<br>
Whilst these mini adapters and hubs work and are very portable, they are messy. Using a dedicated docking station like the Dell TB16 is still a good option for a stable desk environment where monitors and network ports are consistent.<br>
<br>
The TB16 works fine with Fedora 35 and a 5.15.6 kernel - the network adaptor uses the same chipset (RTL8153) as the DA310, the DisplayPort (full-size and mini) as well as HDMI ports are recognised and can drive 1080p monitors happily, and the USB 3.1 hub works for connecting various devices, including mice, keyboards, webcams and 2.5" HDDs.<br>
<br>
However, it's fair to say that the TB16 isn't a well-loved product, with many <a href=https://www.dell.com/community/Linux-Developer-Systems/TB16-Dock-Linux-Support/td-p/5109123>reported issues</a> covering <a href=https://www.dell.com/support/kbdoc/en-uk/000143789/precision-5520-intermittent-network-hangs-while-running-ubuntu-linux-on-a-tb16-dock>network</a> and USB dropouts and monitor connectivity problems. My device appears to already be on the 1.0.2 firmware, so my experiences are based on this.
<div class=code>
$ fwupdmgr get-devices
...
├─Thunderbolt Cable:
│ Device ID: 2315cbb258f43caf4677117e8dfbb6ce68f60f88
│ <b>Current version: 16.00</b>
│ Vendor: Dell (THUNDERBOLT:0x00D4, TBT:0x00D4)
│ GUIDs: 99102381-23e8-5ff5-9767-8bcda2aaa864 ← THUNDERBOLT\VEN_00D4&DEV_B051&REV_00
│ 6634407c-6706-5fe9-907e-37efcbc8098a ← THUNDERBOLT\VEN_00D4&DEV_B051
│ b4fd3cdf-4e3a-5090-a583-45367cfd6421 ← TBT-00d4b051
│ 8564922d-2c7a-5169-9cff-d3e73f0bd807 ← TBT-00d4b051-controller0-1
│ Device Flags: • Updatable
│ • System requires external power source
│ • Device stages updates
│
├─Thunderbolt Dock:
│ Device ID: c9f174d381c66aab3dea447decfe5df418a2d22f
│ <b>Current version: 16.00</b>
│ Vendor: Dell (THUNDERBOLT:0x00D4, TBT:0x00D4)
│ GUIDs: 8e801c01-c7bf-5de2-85e3-185b2afe3b10 ← THUNDERBOLT\VEN_00D4&DEV_B054&REV_00
│ 1fa96dfa-7407-50b2-87d4-4ae4351c3867 ← THUNDERBOLT\VEN_00D4&DEV_B054
│ 76cc74d4-f062-5b93-a11c-8d2a58a25848 ← TBT-00d4b054
│ f5a71973-58f2-5638-9c9a-d9c7538d6772 ← TBT-00d4b054-controller0-301
│ Device Flags: • Updatable
│ • System requires external power source
│ • Device stages updates
</div>
Checking firmware updates for the TB16 dock with <code>fwupdmgr get-updates c9f174d381c66aab3dea447decfe5df418a2d22f</code> reports there is nothing available from LVFS, although others report version 27 is available.
<h2>Network</h2>
I use the TB16's network port as my primary connection with the XPS 9305 - the interface is bonded with the internal wifi; this does mean that I do not notice network dropouts as much as I could, but I have noticed on some odd occasions that the wired connection will be lost and at some point come back. The desktop switch at the other end shows a solid green (1000M) link at this point.<br>
<br>
Very occasionally the network interface comes up at 100Mbit rather than 1000Mbit; forcing <code>ethtool -s enp87s0u1u2 speed 1000 duplex full</code> fixes this as a one-off but there are other options (see below).
<br>
Previously the 4.x kernel had <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1460789>problems which resulted in packet corruption</a> but <a href=https://www.dell.com/support/kbdoc/en-uk/000132066/tb16-tb18-dock-resolving-a-checksum-failure-with-systems-running-ubuntu>this was resolved</a> and does not appear to be the cause of the very occasional network dropouts. Limited testing shows that the dropouts tend to be clustered together:<div class=code>
Dec 14 09:44:31 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 09:44:31 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 09:44:34 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 09:44:34 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 09:44:35 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 1000 Mbps full duplex
Dec 14 09:44:35 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
Dec 14 09:44:35 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 09:44:35 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 09:44:38 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 09:44:38 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 09:44:39 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 1000 Mbps full duplex
Dec 14 09:44:39 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
Dec 14 09:44:39 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 09:44:39 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 09:44:42 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 09:44:42 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 09:44:42 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 1000 Mbps full duplex
Dec 14 09:44:42 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
...
Dec 14 10:13:05 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 10:13:05 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 10:13:13 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 10:13:13 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 10:13:13 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 1000 Mbps full duplex
Dec 14 10:13:13 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
Dec 14 10:13:13 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 10:13:13 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 10:13:21 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 10:13:21 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 10:13:21 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 1000 Mbps full duplex
Dec 14 10:13:21 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
Dec 14 10:13:22 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave
Dec 14 10:13:22 xps kernel: bond0: (slave wlp164s0): making interface the new active one
Dec 14 10:13:38 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 10:13:38 xps kernel: bond0: (slave enp87s0u1u2): link status up, enabling it in 200 ms
Dec 14 10:13:38 xps kernel: bond0: (slave enp87s0u1u2): invalid new link 3 on slave
Dec 14 10:13:39 xps kernel: bond0: (slave enp87s0u1u2): link status definitely up, 100 Mbps full duplex
Dec 14 10:13:39 xps kernel: bond0: (slave enp87s0u1u2): making interface the new active one
</div>
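To see how tightly the dropouts cluster, the kernel log can be bucketed by minute. A minimal sketch - the inline sample lines stand in for a real <code>journalctl -k</code> feed:

```shell
# count "link status definitely down" events per minute ($3 is the HH:MM:SS field)
printf '%s\n' \
  'Dec 14 09:44:31 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave' \
  'Dec 14 09:44:35 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave' \
  'Dec 14 10:13:05 xps kernel: bond0: (slave enp87s0u1u2): link status definitely down, disabling slave' |
awk '/link status definitely down/ { split($3, t, ":"); n[t[1] ":" t[2]]++ }
END { for (m in n) print m, n[m] }' | sort
```

Against the live system, replace the <code>printf</code> with <code>journalctl -k</code> piped into the same <code>awk</code> - bursts of flaps within a minute followed by long quiet spells match the clustering seen above.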
The network chipset is the same as the DA310's, yet I've not seen the DA310 drop out in the same fashion.
<h3>Bonding</h3>
Using a <code>systemd.link</code> file we can override <code>systemd</code>'s persistent device naming scheme; this is particularly useful given that the same USB ethernet device plugged into a different port will present as a different ethernet device.
Prior to this, we need to ensure that the Dell BIOS is not performing MAC address pass through.
<div class=code>
$ lsusb | grep Ethernet
Bus 006 Device 004: ID 0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter
$ lsusb -v -s 006:004 | grep iMac
iMacAddress 3 <b>8CEC4Bxxxxxx</b>
# setup persistent name for TB16
$ cat > /etc/systemd/network/10-eth-tb16.link << EOF
[Match]
MACAddress=8c:ec:4b:xx:xx:xx
[Link]
Name=ethtb16
# force a full 1Gbit connection as autonegotiation sometimes gets this wrong against a TP-Link gigabit desktop switch
Duplex=full
BitsPerSecond=1000M
EOF
$ dmesg
...
[ 2183.450846] r8152 2-1.1:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[ 2183.469019] r8152 2-1.1:1.0: load rtl8153b-2 v1 10/23/19 successfully
[ 2183.497741] r8152 2-1.1:1.0 eth0: v1.12.11
[ 2183.522662] r8152 2-1.1:1.0 <b>ethtb16</b>: renamed from eth0
</div>
Now we have a consistent name for this device regardless of the port the TB16 is attached to - this one consistent device name can be used for the bond interface instead of adding multiple device names to the bond even though they refer to the same physical device (just plugged into different ports).
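For completeness, the bond itself can be declared alongside the link file using systemd-networkd units. This is a sketch only - the file names and active-backup mode are assumptions, and you should skip it if NetworkManager already manages your bond:

```ini
# /etc/systemd/network/20-bond0.netdev (sketch)
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=active-backup
MIIMonitorSec=100ms

# /etc/systemd/network/30-ethtb16.network - enslave the renamed dock NIC
[Match]
Name=ethtb16

[Network]
Bond=bond0
```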
<h2>Monitors</h2>
The connectors appear to be used in this order of preference, at least on Linux: full-size DisplayPort, mini-DisplayPort and HDMI (I didn't try the VGA connector). Hotplugging can be hit and miss, and what I've noticed on power-up (via the dock's power button) is that the external screens do not receive a signal immediately.<br>
<br>
Cold booting by pressing the dock's power button wakes up the Dell laptop, but the video output is not sent to the external screens, only to the (closed) laptop screen - a pain, as the machine will be sat at the <code>grub</code> menu on its countdown timer. The keyboard plugged into the dock works at this time, so we know the dock is alive. Once the system starts to boot, the connected monitors receive a signal and things are mostly smooth after that point.<br>
<br>
The only way to force the use of the external monitor from a cold boot is to have the dock connected to the laptop with the power supply removed/turned off at the wall - applying power at the wall automatically starts the boot process (no need to press the power button on the dock) and at this point the external monitor displays the POST and grub menus.<br>
<br>
Turning the monitor off and then on again works well: the system sees the display disappear and then reappear, which is useful as sometimes the monitor believes there is no signal from the dock (power-cycling the monitor re-establishes the display).
<h2>USB and Audio</h2>
The 3x USB 3.x ports coupled with the 2x USB 2.x ports are welcome - I typically use them to connect a USB 3.x HDD and have not suffered any issues. The Dell firmware refers to an ASMedia driver but the system reports a <code>0424:5807 Microchip Technology, Inc. (formerly SMSC) Hub</code><br>
<br>
Similarly, audio works fine on the dock for both headphones and line out via the dock's <code>0bda:4014 Realtek audio chip</code> - its noise floor is also pretty low when nothing is playing.
<h3><code>PulseAudio</code>: Combining the dock output</h3>
The dock has audio outputs in the form of a front headphone jack and also a rear line out jack - these will typically be recognised as two separate outputs, but it can be useful to combine the two into one logical output; this lets you turn on the speakers (rear line out) without having to reselect the output device.<br>
<br>
Whilst F35 is shipped with <code>Pipewire</code> it's <a href=https://askubuntu.com/questions/1379376/how-to-achieve-automated-simultaneous-outputs-with-pipewire/1382215#1382215>not clear how to combine logical sinks</a> but with <code>PulseAudio</code> this is relatively simple with default configurations:
<div class=code>
# reinstall pulseaudio
$ dnf swap --allowerasing pipewire-pulseaudio pulseaudio
$ dnf install pulseaudio-utils paprefs
$ dnf remove pipewire pipewire-pulseaudio wireplumber
# find the pulsedevice device names
$ pacmd list-sinks | grep -e 'index:' -e device.string -e 'name:'
index: 0
name: <alsa_output.usb-Plantronics_Plantronics_Blackwire_3210_Series_2FC9575B28134CEA815BE1C29F63D5D0-00.mono-fallback>
device.string = "hw:0"
index: 1
name: <alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock_1__sink>
device.string = "_ucm0003.hw:Dock,1"
index: 2
name: <alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink>
device.string = "_ucm0003.hw:Dock"
...
# can take local user configuration: cp /etc/pulse/default.pa ~/.config/pulse/
# can make systemwide
$ cat > /etc/pulse/default.pa.d/dellwd.pa << EOF
load-module module-combine-sink slaves=alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock_1__sink,alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink sink_name=dellwd15 sink_properties="device.description='Dell WD15 Dock' device.icon_name='audio-card-symbolic'"
set-default-sink dellwd15
EOF
$ systemctl --user restart pulseaudio
$ pacmd list-sinks | grep -e 'index:' -e device.string -e 'name:'
index: 0
name: <alsa_output.usb-Plantronics_Plantronics_Blackwire_3210_Series_2FC9575B28134CEA815BE1C29F63D5D0-00.mono-fallback>
device.string = "hw:0"
index: 1
name: <alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock_1__sink>
device.string = "_ucm0003.hw:Dock,1"
index: 2
name: <alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink>
device.string = "_ucm0003.hw:Dock"
...
* index: 4
name: <dellwd15>
</div>
Whilst it is possible to create a simultaneous output device using <code>paprefs</code>, this does not provide fine-grained control and will put all output sinks into the combined sink, including the laptop speakers!
<h2>Power</h2>
With the TB16 plugged into the mains, there is NO way to power down the dock - it's simply always on! If your laptop is plugged in (even after initiating shutdown) there is constant power from the dock (trickle charging the laptop) - we can also see this with the dock's front LED constantly lit, as well as power going to external devices such as a USB hard drive that will continue to spin after laptop powerdown.<br>
<h2>Fan</h2>
This is probably my biggest annoyance with the dock: it uses active cooling and the fan is on pretty much ALL of the time in my setup with 2x 1080p monitors and an active ethernet port. Upon power-up, the fan spins up and then down, but once the laptop is powered up and the screens are in use (even if it's just one screen) the fan is whirring away. And it's annoying - it measures about 45dB, some 15dB above the ambient noise, although there is also competition from the XPS 9305's own fan, which clocks in at 50dB under moderate cooling.
<h2>Conclusions</h2>
Whilst this dock isn't a flawless experience, most of the issues can be tolerated and fixes applied. Furthermore, given that it was released in 2017 and has since been superseded, with the most recent docks (the Dell WD19TBS for example) costing upwards of 200GBP, the Dell TB16 can be rescued from the second-hand market for as little as 50GBP - probably a fair trade of functionality vs minor pain.
<img src="https://blogger.googleusercontent.com/img/a/AVvXsEj22ZdEErEFirKNyRbbKOzu5T2UGVsiTaM3hBFBndW-HQXd5TtJwI81ePE6ke2rRw4jAJ8FJQdn7hGePNAu2OrCngCCtpBnSe900KuVrIwzf3m3Av_s_Hq3qMJhH6PLOKiQnLYd94tjgMPX7WhZEm5cQO1sD9bpmX-4jppIL6ljwlBiI2hSp6E" width="95%">Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-40665543021607017302021-11-26T17:58:00.057+00:002022-09-04T10:59:32.539+01:00Fedora 35 on a Dell XPS13 9305The Dell XPS 93xx (2020/2021) line with its Intel Iris XE integrated graphics has a native Debian developer edition available directly and its interesting to see how this works.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBXiCzIjH3LJMBdFeIjTLFZBIifovw1WSIlRMjCrnxXLrVJ_JyiZGDjvsfb0EXa9ZWM4mnz5Z3NW6OTaYS-XM2X2m-DOGt4P2ep2_sXzJRpc84-FkYEPtBqa-8ruHdExymGY43XjM/s0/xps.jpg" width=95%>
<a name='more'></a><br>
The Dell XPS 9305 is a 13.3" 16:9 model with fixed RAM (soldered to the board) and limited connectivity: 2x Thunderbolt 4 (doing dual duty as the power input) and 1x USB-C 3.2. It is distinguishable from the 9310, which has even fewer ports and is missing the battery indicator button/LEDs on the left side. With the limited connectivity a hub or dock is almost a necessity, with options such as the Dell DA310 which will provide power to the machine (unlike the DA300) while supplying 1Gbit wired ethernet, 2x USB-A 3.2 and monitor connectivity via DisplayPort, HDMI and VGA, all over a single Thunderbolt port.<br>
<br>
The historic risk with hardware and Linux is that you can never be 100% certain that everything will work. However, I was pleasantly surprised that the majority of the XPS 9305 hardware and the DA310 dock is detected and supported out of the box with Fedora 35 and a 5.14.18/5.15.6 kernel, with details below:<br>
<div class=code>
$ inxi -F
Machine: Type: Laptop System: Dell product: XPS 13 9305 v: N/A serial: <superuser required>
Mobo: Dell model: 0PPYW4 v: A02 serial: <superuser required> <b>UEFI: Dell v: 1.0.9 date: 07/20/2021 </b>
CPU: Info: Quad Core model: 11th Gen Intel Core i7-1165G7 bits: 64 type: MT MCP cache: L2: 12 MiB
Speed: 612 MHz min/max: 400/4700 MHz Core speeds (MHz): 1: 612 2: 582 3: 722 4: 998 5: 546 6: 564 7: 503 8: 691
Graphics: Device-1: Intel TigerLake-LP GT2 [Iris Xe Graphics] driver: <b>i915 v: kernel </b>
Device-2: <b>Microdia Integrated_Webcam_HD</b> type: USB driver: <b>uvcvideo </b>
Display: x11 server: X.Org 1.20.11 driver: loaded: <b>modesetting</b> unloaded: fbdev,vesa resolution: 1920x1080~60Hz
OpenGL: renderer: Mesa Intel Xe Graphics (TGL GT2) v: 4.6 Mesa 21.2.5
Audio: Device-1: Intel Tiger Lake-LP Smart Sound Audio driver: <b>snd_hda_intel </b>
Sound Server-1: ALSA v: k5.14.18-300.fc35.x86_64 running: yes
Sound Server-2: PipeWire v: 0.3.40 running: yes
Network: Device-1: <b>Intel Wi-Fi 6 AX200</b> driver: <b>iwlwifi </b>
IF: wlp164s0 state: up mac: 94:e2:xx:xx:xx:xx
Device-2: <b>Realtek RTL8153 Gigabit Ethernet Adapter</b> type: USB driver: <b>r8152 </b>
IF: enp0s13f0u2u1 state: down mac: a0:29:xx:xx:xx:xx
IF-ID-1: bond0 state: up speed: -1 duplex: unknown mac: 94:e2:xx:xx:xx:xx
IF-ID-2: bonding_masters state: N/A speed: N/A duplex: N/A mac: N/A
$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 007: ID 27c6:5335 Shenzhen Goodix Technology Co.,Ltd. Goodix Fingerprint Device
Bus 003 Device 009: ID 413c:c010 Dell Computer Corp. Dell DA310
Bus 003 Device 013: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse
Bus 003 Device 005: ID 1d5c:5510 Fresco Logic Frescologic USB2.0 HUB
Bus 003 Device 003: ID <b>0c45:6d13 Microdia Integrated_Webcam_HD</b>
Bus 003 Device 002: ID <b>8087:0029 Intel Corp. AX200 Bluetooth</b>
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 005: ID <b>0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter</b>
Bus 002 Device 002: ID 1d5c:5500 Fresco Logic Frescologic USB3.1Gen2 HUB
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lspci
0000:00:00.0 Host bridge: Intel Corporation 11th Gen Core Processor Host Bridge/DRAM Registers (rev 01)
0000:00:02.0 VGA compatible controller: Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
0000:00:04.0 Signal processing controller: Intel Corporation TigerLake-LP Dynamic Tuning Processor Participant (rev 01)
0000:00:06.0 System peripheral: Intel Corporation Device 09ab
0000:00:07.0 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #0 (rev 01)
0000:00:07.1 PCI bridge: Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #1 (rev 01)
0000:00:08.0 System peripheral: Intel Corporation GNA Scoring Accelerator module (rev 01)
0000:00:0a.0 Signal processing controller: Intel Corporation Tigerlake Telemetry Aggregator Driver (rev 01)
0000:00:0d.0 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 USB Controller (rev 01)
0000:00:0d.2 USB controller: Intel Corporation Tiger Lake-LP Thunderbolt 4 NHI #0 (rev 01)
0000:00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller
0000:00:14.0 USB controller: Intel Corporation Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller (rev 20)
0000:00:14.2 RAM memory: Intel Corporation Tiger Lake-LP Shared SRAM (rev 20)
0000:00:15.0 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #0 (rev 20)
0000:00:15.1 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #1 (rev 20)
0000:00:16.0 Communication controller: Intel Corporation Tiger Lake-LP Management Engine Interface (rev 20)
0000:00:19.0 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #4 (rev 20)
0000:00:19.1 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP Serial IO I2C Controller #5 (rev 20)
0000:00:1c.0 PCI bridge: Intel Corporation Device a0b8 (rev 20)
0000:00:1d.0 PCI bridge: Intel Corporation Device a0b3 (rev 20)
0000:00:1e.0 Communication controller: Intel Corporation Tiger Lake-LP Serial IO UART Controller #0 (rev 20)
0000:00:1f.0 ISA bridge: Intel Corporation Tiger Lake-LP LPC Controller (rev 20)
<b>0000:00:1f.3 Audio device: Intel Corporation Tiger Lake-LP Smart Sound Technology Audio Controller (rev 20)</b>
0000:00:1f.4 SMBus: Intel Corporation Tiger Lake-LP SMBus Controller (rev 20)
0000:00:1f.5 Serial bus controller [0c80]: Intel Corporation Tiger Lake-LP SPI Controller (rev 20)
0000:a3:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS525A PCI Express Card Reader (rev 01)
<b>0000:a4:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a)</b>
10000:e0:06.0 PCI bridge: Intel Corporation 11th Gen Core Processor PCIe Controller (rev 01)
10000:e1:00.0 Non-Volatile memory controller: SK hynix Gold P31 SSD
</div>
Some <a href=https://wiki.archlinux.org/title/Dell_XPS_13_(9310)#UEFI>installation references for the 93xx series</a> note that BIOS setting changes to secure boot/SATA mode are required, but this was NOT my experience: a clean F35 installation (booted from a USB stick via an Amazon/UGreen USB-C dongle) worked without such changes.<br>
<br>
Initial partitioning of the hard drive can be done in Windows itself via <code>diskmgmt.msc</code>, <a href=https://answers.microsoft.com/en-us/windows/forum/all/windows-disk-management-unable-to-shrink-c-drive/217c3521-b254-4662-bac9-bc90dc633fab>overcoming gotchas</a> if necessary. I didn't face issues, but perhaps because I did this before making any new installations to the Windows machine, carving out 180GiB / 270GiB in favour of Linux on the 512GiB disk.<br>
<br>
Post installation followed some of the steps from installations to <a href=https://whatdoineed2do.blogspot.com/2016/11/fedora-24-on-asus-x202es200e-ct216.html>Asus x202e laptop</a> and <a href=https://whatdoineed2do.blogspot.com/2016/01/fedora-23-mate-compiz-on-hp-mini-210.html>an old HP netbook</a>.<br>
<div class=code>
$ systemctl enable sshd autofs nfs-server --now && mkdir /net
$ firewall-cmd --add-service={nfs,nfs3,mountd,rpc-bind,samba,samba-client} --permanent
$ cat >> /etc/vimrc << EOF
set ai sw=4
:nnoremap <CR> :nohlsearch <CR>/<BS>
EOF
$ cat > /etc/exports << EOF
/export/public 192.168.0.0/16(rw,sync,no_root_squash)
/export/src 192.168.0.0/16(ro,sync,no_root_squash)
EOF
$ cat >> /etc/inputrc << EOF
set show-all-if-ambiguous on
set editing-mode vi
EOF
$ cat >> /etc/profile.d/colorls.sh << EOF
alias ls='ls -F --color=auto' 2>/dev/null
EOF
$ cat > /etc/rc.local << EOF
#!/bin/bash
/usr/sbin/ntpdate uk.pool.ntp.org 2>/dev/null &
/usr/sbin/logrotate /etc/logrotate.conf
exit 0
EOF
$ chmod a+x /etc/rc.local
# disable core dump, although journal entry will exist via 'coredumpctl list'
$ cat >> /etc/systemd/coredump.conf << EOF
Storage=none
ProcessSizeMax=0
EOF
$ cat >> /etc/systemd/journald.conf << EOF
SystemMaxUse=250M
MaxRetentionSec=3month
MaxFileSec=1month
EOF
$ cat > /etc/systemd/system/rtkit-daemon.service.d/override.conf << EOF
[Service]
LogLevelMax=3
EOF
$ cat > /etc/systemd/system/NetworkManager.service.d/override.conf << EOF
[Service]
LogLevelMax=3
EOF
$ systemctl mask systemd-journald-audit.socket
$ systemctl daemon-reload
$ rm /var/lib/systemd/coredump/*
# disable debuginfod which can be horrifically slow
# https://fedoraproject.org/wiki/Debuginfod
$ rm /etc/debuginfod/*.urls
$ echo "set debuginfod enabled off" > /etc/gdbinit.d/debuginfo.gdb
$ grubby --update-kernel ALL --args selinux=0
</div>
<h2>BIOS</h2>
With Linux/grub installed, the grub menu has an entry that takes you directly to the UEFI BIOS. This machine (~Nov 2021) came with BIOS 1.0.9, but Dell/Windows have enabled automated firmware and driver pushes, which leads to zero-choice/forced BIOS upgrades if you boot into Windows when a new BIOS is available.<br>
<br>
To disable this under 1.1.0, enter the BIOS and set <code>Update,Recovery -> UEFI capsule firmware updates = NO</code>. This will also prevent BIOS updates initiated from <code>fwupdmgr</code> under Linux.
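Even with capsule updates disabled, the BIOS can still be updated on demand from Linux via <code>fwupd</code> when you choose to (after temporarily re-enabling the capsule setting in the BIOS); a minimal sketch:

```shell
# refresh firmware metadata from the LVFS
fwupdmgr refresh --force
# list devices with pending firmware, then apply interactively;
# the BIOS capsule is staged and flashed on the next reboot
fwupdmgr get-updates
fwupdmgr update
```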
<h2>Graphics/Display</h2>
The graphics chip is fine and the display is driven by the generic <code>modesetting</code> kernel driver, with hardware-accelerated video decode (for <code>mpv</code> etc.) supported over VAAPI with the Intel drivers.
<div class=code>
$ dnf install \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
# install non-free iHD driver for VAAPI handling
$ dnf -y install libva libva-utils intel-media-driver
# force mpv to use h/w decode
$ cat > /etc/mpv/mpv.conf << EOF
hwdec=vaapi
EOF
# if this reports i915 then something is wrong
$ vainfo | grep Driver
vainfo: Driver version: <b>Intel iHD</b> driver for Intel(R) Gen Graphics - 21.3.4 ()
</div>
The X11 driver can be left as is (<code>modesetting</code>) rather than using <code>xorg-x11-drv-intel</code>. Some <a href=https://01.org/linuxgraphics/downloads/firmware>minor tuning</a> for the <code>i915</code> kernel module is available to control GPU offloading:
<div class=code>
$ cat > /etc/modprobe.d/i915.conf << EOF
options i915 enable_guc=3
# framebuffer compression, further power reduction
options i915 enable_fbc=1
EOF
# rebuild the initramfs image
$ dracut --force
</div>
Upon next reboot, you should see:
<div class=code>
$ dmesg | grep i915
[ 1.685690] i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/tgl_dmc_ver2_12.bin (v2.12)
[ 2.737213] i915 0000:00:02.0: [drm] GuC firmware i915/tgl_guc_69.0.3.bin version 69.0
[ 2.737216] i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
[ 2.740614] i915 0000:00:02.0: [drm] HuC authenticated
[ 2.741413] i915 0000:00:02.0: [drm] GuC submission enabled
[ 2.741414] i915 0000:00:02.0: [drm] GuC SLPC enabled
[ 2.741783] i915 0000:00:02.0: [drm] GuC RC: enabled
</div>
Video encoding with QuickSync is flawless and can be monitored with <code>intel_gpu_top</code>:
<div class=code>
# see <a href=https://trac.ffmpeg.org/wiki/Hardware/QuickSync>ffmpeg QSV reference here</a>
$ ffmpeg -encoders | egrep "qsv|vaapi"
$ ffmpeg -h filter=vpp_qsv
$ ffmpeg <b>-hwaccel qsv -c:v h264_qsv</b> -i foo-1080.mp4 \
-c:a copy <b>-c:v h264_qsv</b> -b:v 8M -minrate 500k -maxrate 12M \
    -vf "vpp_qsv=transpose=clock,scale_qsv=w=720:h=-1:mode=hq" \
foo-720vbr.mp4
</div>
Whilst <code>firefox</code> claims to have added h/w decode support for Linux, version 94.0 with <code>media.ffmpeg.vaapi.enabled = true</code> causes the browser tab to randomly crash whenever there is media on the page - this happens nearly ALL of the time. Using YouTube and monitoring via <code>intel_gpu_top</code> I was not able to see any offloading either, so at this point I have left the vaapi override at its default (off). Installing something like <code>h264ify</code> to force an mp4 stream causes the YouTube tab to crash 100% of the time.<br>
<br>
UPDATE: Using <code>firefox</code> 97.0 seems to work for h/w video decode (MP4 and VP9) with driver 21.4.3 and the following <code>about:config</code> settings
<ul>
<li>gfx.webrender.all = true</li>
<li>media.ffmpeg.vaapi-drm-display.enabled = true</li>
<li>media.ffmpeg.vaapi.enabled = true</li>
</ul>
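To verify the browser really is offloading, a quick check (assuming the <code>intel-gpu-tools</code> package is installed) is to watch the Video engine row while a clip plays:

```shell
# install the monitor if needed, then watch the "Video" engine:
# a non-zero busy percentage while a clip plays indicates h/w decode
sudo dnf -y install intel-gpu-tools
sudo intel_gpu_top
```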
<br>
<h4>fedora-multimedia / negativo17</h4>
There exists an alternative multimedia repo to <code>rpmfusion</code> that also provides <code>ffmpeg</code>, but this version has compiled-in support for the <code><a href=https://wiki.hydrogenaud.io/index.php?title=Fraunhofer_FDK_AAC>FDK-aac</a></code> library. If you must have <code>FDK-aac</code> support you can use the <code>negativo17</code> repo, BUT ensure that you continue to use the <code>intel-media-driver</code> from <code>rpmfusion</code>, otherwise you'll find VAAPI hardware decoding failing.
<div class=code>
$ dnf config-manager --add-repo=https://negativo17.org/repos/fedora-multimedia.repo
$ dnf config-manager --save --setopt fedora-multimedia.exclude="intel-media-*"
$ for i in rpmfusion-free rpmfusion-free-updates; do \
    dnf config-manager --save --setopt $i.exclude="ffmpeg* mpv*"; \
  done
</div>
<h3>Display</h3>
Using the DA310's DisplayPort, connecting to an external monitor is simple - there's a dedicated F8 binding that initiates mirroring the screen, although this can be changed to an extended desktop. The important note is that the DA310, whilst providing 3x video output connectors supporting up to a 60Hz refresh, ONLY supports one output at a time. Using the F8 key I was not able to directly disable the laptop screen, but I suspect this can be achieved via <code>xrandr</code>.<br>
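A sketch of the <code>xrandr</code> approach - the output name <code>eDP-1</code> for the internal panel is an assumption, so check the query output first:

```shell
# list connected outputs; the internal panel is usually eDP-1
xrandr --query | grep ' connected'
# turn the laptop panel off, leaving the external monitor active
xrandr --output eDP-1 --off
# and re-enable it later at its preferred mode
xrandr --output eDP-1 --auto
```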
<br>
Furthermore, using an inexpensive UGreen USB-C hub with HDMI output (connected to the right-hand-side non-Thunderbolt port), I am able to drive 3 monitors.
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqBnhM6FW6mUs1OpjFq8awEZbpq1YlLdxQiFaxWYaUtKQF4GDKo8LJ0lTFJ6h7Ws_KhbreYbL_DtQfVsZkEHW7purzUYEVHWnUG0-VgftngpExDR5BhWLaWhhyPEEQaVaMVYKcmeA/s0/dsiplay.png" width=95%/>
The UGreen USB hub identifies itself as:
<div class=code>
Bus 003 Device 004: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 002 Device 003: ID 05e3:0749 Genesys Logic, Inc. SD Card Reader and Writer
Bus 002 Device 002: ID 05e3:0626 Genesys Logic, Inc. USB3.1 Hub
</div>
Closing the laptop with the 2x external monitors (via the DA310's DisplayPort and the UGreen hub's HDMI port) attached and configured works as expected - all screens temporarily blank and the system reconfigures itself. Note that the machine has a preference/ordering of ports in terms of video output: Thunderbolt ports (#1 main power, left side top; #2 secondary port, left side bottom) then USB-C 3.1 (right side).
<h2>Networking</h2>
The wifi card has no issues, although some earlier reports/older kernels suggested the Killer AX200 chip was not supported. The DA310 hub's GBit network interface (powered by a Realtek RTL8153) is also flawless - as per all my wired/wifi enabled machines, I create a <a href=https://whatdoineed2do.blogspot.com/2018/09/fedora-bonding-over-ethernet-and-wifi.html>bond interface for consistent IP assignment</a> and this too works as expected, even with the wired connection via the Thunderbolt 4 port.<br>
<br>
Note that Linux will give your network connections <a href=https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/>predictable network names</a>, but by default these include information about the PCI bus where the device is located: this means the same DA310 ethernet jack will have a different name when plugged into the top-left vs bottom-left Thunderbolt port. This needs attention when creating the bonded ethernet devices (you will need to add both names to the bonded interface unless you will always use the same Thunderbolt port for connecting the DA310).<br>
<br>
Following <a href=https://www.freedesktop.org/software/systemd/man/systemd.link.html><code>systemd.link</code></a> we can override <code>systemd</code>'s persistent device naming scheme.
<div class=code>
$ lsusb | grep Ethernet
Bus 002 Device 005: ID 0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter
$ lsusb -v -s 002:005 | grep iMac
iMacAddress 3 0C3796xxxxxx
# setup persistent name
$ cat > /etc/systemd/network/10-eth-da310.link << EOF
[Match]
MACAddress=0c:37:96:xx:xx:xx
[Link]
Name=ethda310
EOF
</div>
However, when we monitor the system logs for the device we see that the rules have not applied:
<div class=code>[17200.952775] r8152 2-1.1:1.0 (unnamed net_device) (uninitialized): <b>Using pass-thru MAC addr</b> xx:xx:xx:xx:xx:xx
[17200.969596] r8152 2-1.1:1.0: load rtl8153b-2 v1 10/23/19 successfully
[17200.999251] r8152 2-1.1:1.0 eth0: v1.12.11
[17201.038368] r8152 2-1.1:1.0 <b>enp0s13f0u1u1</b>: renamed from eth0
</div>
Using the device name from the system logs, we can verify that systemd did in fact try to rename the device, but this failed due to the MAC address not matching:
<div class=code>
$ SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/<b>enp0s13f0u1u1</b>
...
Loaded timestamp for '/etc/systemd/network'.
Parsed configuration file /usr/lib/systemd/network/99-default.link
Parsed configuration file /etc/systemd/network/10-eth-da310.link
Created link configuration context.
ID_NET_DRIVER=r8152
enp0s13f0u1u1: Device has name_assign_type=4
enp0s13f0u1u1: Config file <b>/usr/lib/systemd/network/99-default.link</b> is applied
enp0s13f0u1u1: Failed to get ACTION= property: No such file or directory
enp0s13f0u1u1: Could not apply link configuration, ignoring: No such file or directory
ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
Unload module index
Unloaded link configuration context.
</div>
Notice the highlighted: <i>Using pass-thru MAC addr</i> - this is a <a href=https://www.dell.com/support/kbdoc/en-uk/000143263/what-is-mac-address-pass-through>Dell BIOS feature</a>:
<blockquote><i>[network devices have] their own MAC address built into their chipsets. When dock/adapter is connected to a Dell system that supports MAC Pass-through, and the network driver is loaded on the system, the adapter specific MAC address will be overridden by the system specific MAC address from the BIOS</i></blockquote>
To prevent this behaviour, update the BIOS settings: <code>Pre-boot Behaviour -> MAC Address Pass-Through = Disabled</code> and upon next boot:
<div class=code>
$ dmesg
...
[ 2183.450846] r8152 2-1.1:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[ 2183.469019] r8152 2-1.1:1.0: load rtl8153b-2 v1 10/23/19 successfully
[ 2183.497741] r8152 2-1.1:1.0 eth0: v1.12.11
[ 2183.522662] r8152 2-1.1:1.0 <b>ethda310</b>: renamed from eth0
$ SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/<b>ethda310</b>
...
Loaded timestamp for '/etc/systemd/network'.
Parsed configuration file /usr/lib/systemd/network/99-default.link
Parsed configuration file /etc/systemd/network/10-eth-da310.link
Created link configuration context.
ID_NET_DRIVER=r8152
ethda310: Device has name_assign_type=4
ethda310: Config file <b>/etc/systemd/network/10-eth-da310.link</b> is applied
ethda310: Failed to get ACTION= property: No such file or directory
ethda310: Could not apply link configuration, ignoring: No such file or directory
ID_NET_LINK_FILE=/etc/systemd/network/10-eth-da310.link
Unload module index
Unloaded link configuration context.
</div>
Finally, the ethernet slave of the bond will use the <code>ethda310</code> device, whose name will be consistent no matter which physical port it is attached to.<br>
<div class=code>
$ nmcli con \
add type bond \
con-name bond \
ifname <b>bond0</b> \
mode active-backup \
primary <b>ethda310</b> \
+bond.options "fail_over_mac=active,miimon=100,primary_reselect=always,updelay=200" \
ip4 192.168.0.123/24 \
gw4 192.168.0.1 \
ipv4.dns "8.8.4.4 8.8.8.8" \
ipv4.method manual \
ipv6.method ignore
# create the slaves connections against the phys interfaces
$ nmcli con \
add type wifi \
con-name bond-wlan \
slave-type bond \
master <b>bond0</b> \
ifname wlp164s0 \
ssid <i>your-wifi-ssid</i>
$ nmcli con \
add type ethernet \
con-name bond-eth \
slave-type bond \
master <b>bond0</b> \
ifname ethda310
# update the wifi details separately (not available in the connection setup above)
$ nmcli con modify bond-wlan wifi-sec.key-mgmt wpa-psk
$ nmcli con modify bond-wlan wifi-sec.psk <i>your-wifi-password</i>
# disable the auto created interfaces tied to the physical devices
$ nmcli c down <i>your-wifi-ssid</i> && nmcli c modify <i>your-wifi-ssid</i> autoconnect no
$ nmcli c down ethda310 && nmcli c modify ethda310 autoconnect no
# finally bring up the connection associated with bond0 device
$ nmcli c up bond
</div>
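The bond state and the currently active slave can then be confirmed via the kernel's bonding procfs entry:

```shell
# which slave is carrying traffic right now?
grep -E 'Slave Interface|MII Status|Currently Active Slave' /proc/net/bonding/bond0
# and the connection state/address as NetworkManager sees it
nmcli -g GENERAL.STATE,IP4.ADDRESS con show bond
```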
<h2>Audio</h2>
Audio works and uses the newer <a href=https://fedoramagazine.org/pipewire-the-new-audio-and-video-daemon-in-fedora-linux-34/>Pipewire</a> subsystem instead of raw Pulseaudio. This causes some issues for <code>paprefs</code> (it won't install), which in turn makes it awkward to redirect multiple outputs to a single output sink: for example, using a wired USB headset for zoom/conference calls and duplicating output to both the headset and speakers.<br>
<br>
Using the default sound settings it's a binary option - I've also noticed that on boots subsequent to the USB headset being disconnected, audio applications crash; <code>mpv</code> starts but hangs, YouTube crashes <code>firefox</code> etc, and you must go and manually reset/reconfigure to use the onboard/headphone outputs.<br>
<br>
The <a href=https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Migrate-PulseAudio>pipewire migration guide</a> is not very helpful, although it claims that the pulseaudio <code>module-combine-sink</code> is supported. It is possible to <a href=https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Virtual-devices#create-a-combined-sinksource>manually create the output sink</a> with <code>pactl</code> and <code>pw-link</code>:
<div class=code>
# create the output
$ pactl load-module module-null-sink media.class=Audio/Sink sink_name=<b>virtual-output</b> channel_map=stereo
# find the device names you want to route to virtual output device
$ pw-link -o
Midi-Bridge:Midi Through:(capture_0) Midi Through Port-0
v4l2_input.pci-0000_06_00.0-usb-0_1.7.2_1.0:out_0
v4l2_input.pci-0000_00_14.0-usb-0_3_1.0:out_0
alsa_output.usb-Plantronics_Plantronics_Blackwire_3210_Series_2FC9575B28134CEA815BE1C29F63D5D0-00.mono-fallback:monitor_MONO
alsa_input.usb-Plantronics_Plantronics_Blackwire_3210_Series_2FC9575B28134CEA815BE1C29F63D5D0-00.iec958-stereo:capture_FL
alsa_input.usb-Plantronics_Plantronics_Blackwire_3210_Series_2FC9575B28134CEA815BE1C29F63D5D0-00.iec958-stereo:capture_FR
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock_1__sink:monitor_FL
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock_1__sink:monitor_FR
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink:monitor_FL
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink:monitor_FR
alsa_input.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__source:capture_FL
alsa_input.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__source:capture_FR
alsa_output.pci-0000_00_1f.3.analog-stereo:monitor_FL
alsa_output.pci-0000_00_1f.3.analog-stereo:monitor_FR
alsa_input.pci-0000_00_1f.3.analog-stereo:capture_FL
alsa_input.pci-0000_00_1f.3.analog-stereo:capture_FR
# manually link them together
$ pw-link virtual-output:monitor_FL \
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink:playback_FL
$ pw-link virtual-output:monitor_FR \
alsa_output.usb-Generic_USB_Audio_200901010001-00.HiFi__hw_Dock__sink:playback_FR
...
</div>
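Once linked, the combined device can be confirmed and made the default output (the sink names shown above will differ per machine):

```shell
# the virtual sink should now be listed alongside the h/w sinks
pactl list short sinks
# route all new streams through it
pactl set-default-sink virtual-output
# sanity check: this should now play on every linked output
paplay /usr/share/sounds/alsa/Front_Center.wav
```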
The pipewire docs suggest making this a persistent virtual sink via system or user config:
<div class=code>
$ cat > /etc/pipewire/virtual-sink.conf << EOF
context.objects = [
...
{ factory = adapter
args = {
factory.name = support.null-audio-sink
node.name = "virtual-output"
node.description = "A virtual device combining all physical outputs into one"
media.class = Audio/Sink
object.linger = true
audio.position = [ FL FR ]
}
}
...
]
EOF
$ pipewire -v -c virtual-sink.conf
[W][27767.124162] pw.context | [ context.c: 1331 pw_context_load_spa_handle()] 0x562d7db829e0: no library for support.null-audio-sink: No such file or directory
[E][27767.124304] mod.adapter | [module-adapter.c: 252 create_object()] can't create node: No such file or directory
[E][27767.124330] pw.conf | [ conf.c: 503 create_object()] can't create object from factory adapter: No such file or directory
[E][27767.124664] default | [ pipewire.c: 123 main()] failed to create context: No such file or directory
[D] pw.context [pipewire.c:209 unref_handle()] clear handle 'support.log'
[D] pw.context [pipewire.c:209 unref_handle()] clear handle 'support.cpu'
</div>
But of course it doesn't work.<br>
<br>
How to easily/visually create the links feeding into this device, however... <code>pipewire</code>'s lack of a <code>paprefs</code> equivalent hurts here, although there is a <a href=https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Migrate-PulseAudio#etcpulsedefaultpa><code>context.exec</code></a> block available to execute shell scripts/commands after each launch, so a user could manually link all outputs together - but the user experience is obviously very poor.<br>
<h3>Going back to PulseAudio</h3>
The biggest gripe I have with pipewire is, beyond the items above, that under certain loads the audio becomes choppy and stutters - it's just a terrible user experience. Luckily we can choose to go back to PulseAudio relatively easily:
<div class=code>
$ dnf swap --allowerasing pipewire-pulseaudio pulseaudio
$ dnf install pulseaudio-utils paprefs
$ dnf remove pipewire pipewire-pulseaudio wireplumber
</div>
Following this, reboot and we'll be back to something much more stable.
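With PulseAudio restored, the combined headset+speakers output that motivated all this is back to a one-liner (this is what <code>paprefs</code> toggles under the hood); the combined sink created by the module is named <code>combined</code> by default:

```shell
# duplicate output across all sinks (or restrict with slaves=sink1,sink2)
pactl load-module module-combine-sink
pactl set-default-sink combined
# persist it across restarts
echo "load-module module-combine-sink" | sudo tee -a /etc/pulse/default.pa
```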
<h2>Bluetooth</h2>
There appears to be no problem with bluetooth although there are a number of <a href=https://bugzilla.redhat.com/show_bug.cgi?id=2027071>bug reports indicating that Intel AX200 bluetooth chip</a> and <a href=https://bugzilla.kernel.org/show_bug.cgi?id=215167#c9>Linux 5.15.4 onwards</a> fails to initialise bluetooth devices upon reboots with <code>hci0: command 0xfc05 tx timeout</code> messages. This has been fixed in 5.15.15.<br>
<br>
I have noticed pairing devices with the default installation can be hit and miss; attempting to pair a Bluetooth Apple keyboard (05ac:023a) I had to use <code>bluetoothctl</code>, as <code>bluetooth manager</code> was not able to discover it (yet was able to discover various phones and tablets):
<div class=code>
$ bluetoothctl
Agent registered
[bluetooth]# power on
Changing power on succeeded
[bluetooth]# <b>pairable on</b>
Changing pairable on succeeded
[bluetooth]# <b>scan on</b>
Discovery started
[CHG] Controller 94:E2:3C:xxDELLxx Discovering: yes
[NEW] Device 78:CA:39:xx:xx:xx 78-CA-39-xx-xx-xx
...
[CHG] Device 78:CA:39:xx:xx:xx LegacyPairing: no
[CHG] Device 78:CA:39:xx:xx:xx Name: Apple Wireless Keyboard
[CHG] Device 78:CA:39:xx:xx:xx Alias: Apple Wireless Keyboard
[CHG] Device 78:CA:39:xx:xx:xx LegacyPairing: yes
[bluetooth]# <b>pair 78:CA:39:xx:xx:xx</b>
Attempting to pair with 78:CA:39:xx:xx:xx
[CHG] Device 78:CA:39:xx:xx:xx Connected: yes
[agent] PIN code: xxxxxx
[CHG] Device 78:CA:39:xx:xx:xx Modalias: usb:v05ACp023Ad0050
[CHG] Device 78:CA:39:xx:xx:xx UUIDs: 00001124-0000-1000-8000-00805f9b34fb
[CHG] Device 78:CA:39:xx:xx:xx UUIDs: 00001200-0000-1000-8000-00805f9b34fb
[CHG] Device 78:CA:39:xx:xx:xx ServicesResolved: yes
[CHG] Device 78:CA:39:xx:xx:xx Paired: yes
Pairing successful
[CHG] Device 78:CA:39:xx:xx:xx WakeAllowed: yes
[CHG] Device 78:CA:39:xx:xx:xx ServicesResolved: no
[CHG] Device 78:CA:39:xx:xx:xx Connected: no
[bluetooth]# <b>trust 78:CA:39:xx:xx:xx</b>
[CHG] Device 78:CA:39:xx:xx:xx Trusted: yes
Changing 78:CA:39:xx:xx:xx trust succeeded
[bluetooth]# <b>connect 78:CA:39:xx:xx:xx</b>
Attempting to connect to 78:CA:39:xx:xx:xx
[CHG] Device 78:CA:39:xx:xx:xx Connected: yes
Connection successful
[CHG] Device 78:CA:39:xx:xx:xx ServicesResolved: yes
</div>
Once paired the laptop will remember the device across reboots and we can update minor <a href=https://whatdoineed2do.blogspot.com/2016/11/wired-apple-keyboard-on-pc-with-linux.html><code>udev</code> configuration to ensure proper Apple keyboard mappings</a>:
<div class=code>
$ cat > /etc/udev/rules.d/99-apple-a1314.rules << EOF
# /etc/udev/rules.d/99-apple-a1314.rules
# for a 2009 Apple a1314 bluetooth wireless keyboard
#
# udevadm info -a -n /dev/hidraw1
# udevadm control --reload-rules
#
ACTION=="add", KERNELS=="*:05AC:023A.*", SUBSYSTEMS=="hid", RUN+="/usr/bin/udev-apple-a1314.sh"
EOF
$ cat > /usr/bin/udev-apple-a1314.sh << EOF
#!/bin/bash
if [ ! -d /sys/module/hid_apple/parameters ]; then
echo "HID apple module directory not available, modprobe hid-apple???"
# /sbin/modprobe hid-apple
exit 1
fi
echo 1 > /sys/module/hid_apple/parameters/iso_layout
echo 1 > /sys/module/hid_apple/parameters/swap_opt_cmd
echo 1 > /sys/module/hid_apple/parameters/fnmode
echo 1 > /sys/module/hid_apple/parameters/swap_fn_leftctrl
EOF
$ chmod a+x /usr/bin/udev-apple-a1314.sh
</div>
Interestingly, the <code>udev</code> subsystem does not identify this keyboard via a product or vendor id, as is possible with the similar wired A1242 (05ac:021e) keyboard, for which we can use <code>udevadm test $(udevadm info -q path -n /dev/input/by-id/usb-Apple_Inc._Apple_Keyboard-event-kbd)</code> and subsequently match with <code>SUBSYSTEM=="input", ATTRS{idVendor}=="05ac", ATTRS{idProduct}=="021e"</code><br>
<br>
Pairing the same bluetooth device with a dual boot machine needs a little bit of trickery: in essence the bluetooth device remembers an authorisation token between itself and the paired host, and this token is regenerated upon each pairing - if you pair with Linux and then Windows, the bluetooth device will hold a different token when you go back to Linux. This can be <a href=https://unix.stackexchange.com/questions/255509/bluetooth-pairing-on-dual-boot-of-windows-linux-mint-ubuntu-stop-having-to-p>solved by manually syncing the token between your OSs</a>. Linux keeps its tokens under <code>/var/lib/bluetooth/<i>host bluetooth mac</i>/<i>device bluetooth mac</i>/info</code>.
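A sketch of what the sync involves - the MAC addresses below are placeholders for your adapter and device, and the <code>Key</code> value is the one extracted from the Windows registry:

```shell
# inspect the pairing key Linux currently holds for the device
sudo grep -A1 '\[LinkKey\]' \
    /var/lib/bluetooth/94:E2:3C:XX:XX:XX/78:CA:39:XX:XX:XX/info
# after editing Key= in that file to match Windows, restart the stack
sudo systemctl restart bluetooth
```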
<h2>Webcam</h2>
The webcam (0c45:6d13) works fine but the position is painfully low (given where it sits on the laptop screen bezel). If you wish to disable the webcam, this is possible via the BIOS or via a <code>udev</code> rule.<br>
<div class=code>
$ lsusb -tvv
...
|__ Port 3: Dev 3, If 0, Class=Video, Driver=, 480M
ID 0c45:6d13 Microdia
<b>/sys/bus/usb/devices/3-3</b> /dev/bus/usb/003/003
|__ Port 3: Dev 3, If 1, Class=Video, Driver=, 480M
ID 0c45:6d13 Microdia
/sys/bus/usb/devices/3-3 /dev/bus/usb/003/003
# lets find out udev visible info for rule
$ udevadm info -a -n /sys/bus/usb/devices/3-3
...
looking at device '/devices/pci0000:00/0000:00:14.0/usb3/3-3':
KERNEL=="3-3"
SUBSYSTEM=="usb"
DRIVER=="usb"
ATTR{authorized}=="1"
ATTR{avoid_reset_quirk}=="0"
ATTR{bConfigurationValue}=="1"
ATTR{bDeviceClass}=="ef"
ATTR{bDeviceProtocol}=="01"
ATTR{bDeviceSubClass}=="02"
ATTR{bMaxPacketSize0}=="64"
ATTR{bMaxPower}=="500mA"
ATTR{bNumConfigurations}=="1"
ATTR{bNumInterfaces}==" 2"
ATTR{bcdDevice}=="9219"
ATTR{bmAttributes}=="80"
ATTR{busnum}=="3"
ATTR{configuration}==""
ATTR{devnum}=="3"
ATTR{devpath}=="3"
<b>ATTR{idProduct}=="6d13"
ATTR{idVendor}=="0c45"</b>
ATTR{ltm_capable}=="no"
ATTR{manufacturer}=="Sonix Technology Co., Ltd."
ATTR{maxchild}=="0"
# note that identifiers we use to identify and disable
$ cat > /etc/udev/rules.d/99-webcam.rules << EOF
SUBSYSTEM=="usb", ATTRS{idVendor}=="0c45", ATTRS{idProduct}=="6d13", ATTR{authorized}="0"
EOF
$ udevadm control --reload-rules
</div>
Because this is identified by the <code>usb</code> subsystem and driver the udev rule will only come into effect on boot. If you need to dynamically enable the webcam: <code>echo 1 | sudo tee /sys/bus/usb/devices/3-3/authorized </code><br>
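A small helper script makes the toggle less error-prone - the <code>3-3</code> sysfs path is the one found from <code>lsusb -tvv</code> above and may differ on other machines:

```shell
#!/bin/bash
# toggle the internal webcam on/off via the usb authorized flag
WEBCAM=/sys/bus/usb/devices/3-3/authorized
if [ "$(cat "$WEBCAM")" = "1" ]; then
    echo 0 | sudo tee "$WEBCAM" >/dev/null   # disable
else
    echo 1 | sudo tee "$WEBCAM" >/dev/null   # enable
fi
```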
<br>
Whilst at a desk with an external monitor running, using an external webcam like a Logitech C310 is an option, although it's useful to disable the external webcam's microphone:
<div class=code>
$ cat > /etc/udev/rules.d/70-logitech-c310.rules << EOF
SUBSYSTEM=="usb", DRIVER=="snd-usb-audio", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="081b", ATTR{authorized}="0"
EOF
</div>
The reason to mention this is that whilst most conference software will allow you to select inputs, it's preferable to disable the microphone at the device level if at all possible - but I've not found this option for the built-in mic yet.
<h2>Virtualisation and Containerisation</h2>
There were no issues running <code>QEMU / virt-manager</code> with my Windows 7 image from a different machine - <a href=https://dgpu-docs.intel.com/devices/iris-xe-max-graphics/index.html>Intel documents how to perform graphics passthrough to the VM via <code>vfio-pci</code></a> but this isn't something I've explored for Fedora yet, as the Intel documents refer to a Ubuntu kernel.<br>
<br>
Containerisation (and <a href=https://developers.redhat.com/blog/2020/09/25/rootless-containers-with-podman-the-basics>rootless containers</a>) via <code>podman</code> works with no additional configuration, with Fedora 35 by default shipping and configured with the cgroups v2/unified hierarchy.
<h2>Power management via S0ix states</h2>
The Dell XPS 9305 has <a href=https://01.org/blogs/qwang59/2018/how-achieve-s0ix-states-linux>support for S0ix states (similar to S3/suspend-to-mem), as described by this Intel reference</a>, with <a href=https://01.org/blogs/qwang59/2020/linux-s0ix-troubleshooting>troubleshooting materials here</a>. I am still to investigate the impact of the modern S0ix states (opportunistic and explicit), but it has certainly been <a href=https://www.dell.com/community/XPS/XPS-13-9310-Ubuntu-deep-sleep-missing/td-p/7734008>reported that traditional "deep" sleep/hibernation is not supported</a>, with <code>/sys/power/mem_sleep</code> only supporting <code>s2idle</code>, which we've observed working.
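A quick way to sanity check this - the debugfs path requires the <code>intel_pmc_core</code> driver and may differ across kernels:

```shell
# the bracketed entry is the active suspend mode; expect only [s2idle]
cat /sys/power/mem_sleep
# after a suspend/resume cycle a non-zero value here confirms
# the package actually reached an S0ix state
sudo cat /sys/kernel/debug/pmc_core/slp_s0_residency_usec
```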
<h2>Fingerprint sensor</h2>
The fingerprint sensor does not work without <a href=https://aboutcher.co.uk/2020/10/goodix-fingerprint-reader-on-fedora-linux/>manually overwriting the system fingerprint handling shared libs</a> from <a href=http://dell.archive.canonical.com/updates/pool/public/libf/libfprint-2-tod1-goodix/>Dell's Debian based packages</a>, so I've left this alone. However, leaving it can cause slight problems, particularly when using <code>sudo</code> or other authentication means.<br>
<br>
For example, logins may take a while whilst PAM times out the query to the fingerprint daemon and you'll see <code>journal</code> errors similar to:<blockquote><code>
sudo[7281]: pam_fprintd(sudo:auth): GetDevices failed: Connection timed out</code></blockquote>
For me, it's simply easier just to disable this:
<div class=code>
$ systemctl mask fprintd
$ authselect disable-feature with-fingerprint
</div>
<h2>Conclusions</h2>
Overall, Fedora on this Dell XPS 13 9305 laptop has been an easy experience - long gone are the days, it seems, of fighting installers/BIOS/bootloaders as we had to even 10yrs ago, never mind the days of Slackware in the 1990s. Almost all things work, and work well, including the inbuilt micro SD card reader. There are some minor annoyances switching off the laptop screen when using an external monitor, but given that we have working networking, graphics card h/w decode/encode acceleration and a working docking station, it's not a bad platform to start your complaints.
Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-49071620356860849772021-04-10T21:22:00.006+01:002021-04-29T18:30:21.385+01:00One of these is not like the other: iPod classic upgradeOne of the great things with electronic devices in previous years was the ability to replace/repair items and, importantly as the device ages, to replace the battery. Apple have a certain reputation when it comes to repairability of their portable devices (iPhones, iPods etc) and it's not positive, as they cram more and more into smaller spaces. However there is a set of iPods that can be relatively easily self serviced and revived.<br>
<br>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_7st_d3GoLegkmsI2gWiICEdhoMBbTLTYM1Ss6rOQnMiu3SHhgYgIxeGpR_lngX2mtnvqG3EZZH7JKecjmw8WqqXvNl5PMpgu15sEMRl7GBm3QvVn-O676tnkC15ZXSuOsw9YR90/s0/ipods.jpg"><img alt="iPod 5G" width="95%" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_7st_d3GoLegkmsI2gWiICEdhoMBbTLTYM1Ss6rOQnMiu3SHhgYgIxeGpR_lngX2mtnvqG3EZZH7JKecjmw8WqqXvNl5PMpgu15sEMRl7GBm3QvVn-O676tnkC15ZXSuOsw9YR90/s0/ipods.jpg"/></a>
<a name='more'></a>
The details here are not new and there have been many sites over the years that have described the DIY mods and extensions that you can perform on your iPod. So this is not what this is about.<br>
<br>
My motivation for using, at time of writing, a 15 year old iPod for music might be counterintuitive:
<ul><li>the interface is very basic: you can't add to the current play queue or scrub through music easily</li>
<li>limited audio file support (if we don't go with <a href=https://www.rockbox.org/>Rockbox</a>): mp3, m4a (incl lossless), wav</li>
<li>only available second hand, so quality is hit and miss</li>
</ul>
Even with these concerns, when you consider the non-replaceable battery drain/wear on your phone then it's an easy choice to revisit the full size iPods (original, 2nd to 5th/5.5, 6th/7th gen) where the batteries can be replaced. Furthermore, even in 2021, iPod parts (batteries, headphone/hold jacks, screens, front/rear plates) for the 4th gen onwards are readily available on eBay, AliExpress etc.<br>
<br>
<iframe width="560" height="315" src="https://www.youtube.com/embed/S83ZHf1GAeY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Assuming that the screen and logic board of the iPod are not destroyed, then the battery replacement and storage upgrade are the main DIY themes, but there is, as always, a catch.<br>
<br>
Of the full size iPods, the <a href=https://en.wikipedia.org/wiki/IPod_Classic#5th_generation_(%22iPod_Video%22)>5/5.5G aka Video (a1136)</a> is the easiest model to upgrade and the best value proposition: form factor, functionality and ease of modification - the later iPod Classic (aka 6th/7th generation) is notorious for being difficult to open with its metal faceplate. Additionally the 5/5.5 generation was the last to use the Wolfson DAC which (apparently) has a less clinical sound than that of the Cirrus DAC introduced in the 6th generation iPod.<br>
<br>
<br>
I originally had a 30GB 5.5G iPod and started to upgrade the storage and battery.<br>
<br>
The storage upgrade is motivated by two points:
<ul><li>power hungry hard disk and stability issues - whilst the hard disk only spins up to fill the cache, it still draws more power than solid state storage</li>
<li>limited storage</li>
</ul>
The standard upgrade path here is an <a href=https://www.iflash.xyz/>iFlash</a> board: an SD (Solo, Duo, Quad..) or SSD (mSATA) upgrade. Personally, I don't have the need for anything more than 64GB or 128GB of storage in an iPod. In terms of power consumption, the <a href=https://www.iflash.xyz/runtime-shootout-2016-quad-dual-solo-msata-vs-original-hard-drive/>2016 iFlash runtime comparison</a> shows SD access is the least hungry, with HDD being >2x and mSATA >4.5x more expensive in terms of consumption.<br>
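To put those ratios in perspective, runtime scales roughly inversely with draw - a back-of-envelope sketch (the 30 hour SD baseline is purely illustrative, not a measurement):

```shell
# Illustrative runtime scaling from the iFlash consumption figures:
# HDD draws >2x and mSATA >4.5x the power of SD access, and runtime
# scales roughly inversely with draw (integer arithmetic, so approximate).
sd_hours=30
hdd_hours=$(( sd_hours * 10 / 20 ))    # divide by 2.0 -> 15
msata_hours=$(( sd_hours * 10 / 45 ))  # divide by 4.5 -> 6
echo "SD=${sd_hours}h HDD=${hdd_hours}h mSATA=${msata_hours}h"
```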
<br>
The other point to remember is that the iPod 5G came in 2 configurations: 30GB/60GB (original 5th gen) or 30GB/80GB (2nd iteration, aka 5.5 aka 5 Enhanced). There are two important differences between the 30GB and 60/80GB:
<ul><li>depth of the rear case: thin (30GB) vs thick</li>
<li>amount of onboard RAM</li>
</ul>
The onboard memory affects the iPod's ability to shuffle all tracks or even to start on devices with a LOT (> ~25,000) of tracks. This is likely down to the need to load and parse the iTunesDB database on the iPod. This limitation also puts an implicit limit on the max size of replacement storage that you add to the device.<br>
<br>
The depth of the rear case has an impact on the battery upgrade options. 30GB battery replacement units are typically 450-550mAh and are limited by the battery height, whereas the 60/80GB can hold a thicker and longer battery. The bigger 1800-3000mAh batteries are only compatible with the thicker cases.<br>
<br>
When upgrading a 30GB (which is always factory fitted with a thin back) to an aftermarket thick back, we can transplant the original headphone and hold jack to the thicker case. However, whilst the headphone/hold jack will fit and function, you will notice that the headphone jack is not flush with the case and the hold switch may be a little recessed in the case, making it less easy to use. This is because the thin back is curved and this is reflected in the plastic mouldings for the headphone/hold jacks, whereas the thick versions have flat plastic mouldings, identifiable by the square bump on the underside of the headphone jack.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBSn2Fqg4vWxJZxJBfhtWXTjUP3puDembE12Zke3NEYXd6-5bfkEgyLXPcZlV-ucFBcYn573Rvw2TFlAy-BypWWThHUW6S1bCgdU6t_DK7giI-ao1-K07jjD3KfrKAGYwDaf1cKuE/s0/foo.jpg" width="95%"><br>
<sup>top left: the original thin-back headphone jack in a thick case - notice the curve; top right: thick-back headphone jack (shown bottom, with the square bump) in a thick back</sup>
<br>
The changes to the original 30GB 5.5G were thus: replace the hard disk with an iFlash Solo, and fit a new 2000mAh battery with a thick rear case.
<table>
<tr><th>Model/mods</th><th>Depth</th><th>Weight</th></tr>
<tr><td>iPod 5.5G, 30GB original</td><td>11.5mm</td><td>130g</td></tr>
<tr><td>iPod 5.5G, iFlash Solo + thin 450mAh battery</td><td>11.5mm</td><td>115g</td></tr>
<tr><td>iPod 5.5G, iFlash Solo + 2000mAh battery/thick case</td><td>15.0mm</td><td>125g</td></tr>
<tr><td>iPod Touch 1G, 32GB original</td><td>8.4mm</td><td>105g</td></tr>
</table><br>
The original Apple specs claimed up to 14hrs music playback on its 550mAh battery. The original 15 year old battery barely lasted 2 hours of playback, but the new 450mAh battery would only give around 7 hours of real world playback (music at different bitrates, skipping through songs etc) with the original HD and about 8.5hrs with the iFlash Solo - surprisingly, not a massive increase. With the 2000mAh battery I was able to get about 36hrs of real world playback spread over 4 days.<br>
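Those runtimes are consistent with an average draw in the region of 55mA - a back-of-envelope check (the draw figure is an assumption inferred from the observed runtimes, not a measurement):

```shell
# Back-of-envelope: playback hours ~= capacity (mAh) / average draw (mA).
# 55mA is an assumed average draw consistent with the observed runtimes.
draw_ma=55
for capacity in 450 2000; do
    echo "${capacity}mAh -> ~$(( capacity / draw_ma ))h"
done
```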
<br>
As for the weight, the "full" upgrade is just a little lighter but what's the cost?
<ul><li>iFlash Solo - 28GBP</li>
<li>64GB SD card - 14GBP</li>
<li>2000mAh battery - 13GBP</li>
<li>thick case - 10GBP</li>
<li>thick headphone/hold jack - 8GBP</li>
</ul>
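A quick sanity check that the parts sum as claimed:

```shell
# Total cost of the upgrade parts (GBP)
total=$(( 28 + 14 + 13 + 10 + 8 ))
echo "Total: ${total} GBP"
```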
A total of 73GBP for upgrade parts that should provide a good number of years of service.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-67148693929945562392021-01-12T21:00:00.000+00:002021-01-12T21:00:14.487+00:00Windows drag n drop Batch Image Metadata UpdatesBatch modifying the <code>exif metadata</code> on your images (NEF, DNGs, jpegs ....) in Windows can be a little painful. <code><a href=https://exiftool.org/>Exiftool</a></code> is a great tool that can perform the metadata updates but most would associate this with command line updates.<br>
<br>
With Windows, there exists a neat trick with batch files: you can drag and drop files onto a batch file in Windows Explorer and the batch file will accept them as arguments. For example, to set the lens information on any <code>exiftool</code> supported file, create the following <code>.bat</code> file and then drag and drop your files!<br>
<div class=code>
@echo off
FOR %%i IN (%*) DO exiftool -overwrite_original -Lens="Nikkor 20mm f/3.5 AI" -MaxApertureValue="3.5" -FocalLength="20" %%i
pause
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-49350725474973620842021-01-11T08:32:00.043+00:002022-02-28T19:03:53.138+00:00Overcoming missing DisplayCAL dependencies on recent Linux distros with X11 container<a href=https://displaycal.net/>DisplayCAL</a> has been a great free tool to partner your hardware colour calibrator, like your DataColor Spyder etc. However installing this on any of the newer mid-2020 distributions (like Fedora 32 and above) has become problematic due to the <code>python 2</code> requirements being dropped by a number of distributions.<br>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC0YzK3RcoiPav0_hcNN1aOrXeR7r6jnMvIh047WfnecsN3-3wOIhhw5KxZGr2pZv8iplY4ayOXz-0q5yqTnBwAOj-pE03OqzTen34cNj7Ar9pAy2buwU73FzulGP3GTGOJM9sUBE/s0/displaycal-podman.png" width="95%"><br>
How can we continue to use a DataColor Spyder with DisplayCAL on a recent linux?<br>
<a name='more'></a>
<br>
The last known working version of DisplayCAL on Fedora was for F31 so there are 2 options:<br>
<ul><li>live boot media (USB/DVD) of F31 and manually install dependencies for DisplayCAL</li>
<li>run DisplayCAL from a F31 container on your current workstation</li>
</ul>
I'll only cover the latter option since it's apparently the easier one; also, as the calibration measurement can take around 90-100 minutes, you can run the DisplayCAL container on your monitor whilst doing something useful on your host.<br>
<br>
Using <code>docker</code> or <code>podman</code> we can create an image as normal; the only complications are that we require the <code>X11</code> subsystem and we will rely on some of the <code>systemd</code> like services.
<br>
<h3><code>Dockerfile</code> and build</h3>
The following will give us the build configuration and build us an image tagged as <code>f31-displaycal</code><br>
<div class=code>
$ cat > Dockerfile << EOF
FROM fedora:31
RUN dnf install -y libXScrnSaver libXdmcp https://displaycal.net/download/Fedora_31/x86_64/DisplayCAL.rpm dbus-x11 && dnf clean all
CMD [ "/sbin/init" ]
EOF
$ podman build -t f31-displaycal .
</div>
This will provide <code>ArgyllCMS 1.9.2</code> and <code>DisplayCAL 3.8.9.3</code> in a 525MB docker image. Whilst <code>DisplayCAL</code>, on startup, will advise and try to install newer versions of <code>ArgyllCMS</code>, I've found that this can be hit and miss.<br>
<br>
If you wish to run <a href=https://www.argyllcms.com/Argyll_V2.1.2_linux_x86_64_bin.tgz><code>ArgyllCMS 2.1.2</code></a> (this also requires the <code>libXdmcp</code> package) you can download it manually, make it available on a shared volume mount and point <code>DisplayCAL</code> at the binaries.
<h3>Running the <code>DisplayCAL</code> X11 container</h3>
This is where it's a little tricky, as we need a service enabled before we can start DisplayCAL:<br>
<div class=code>
$ mkdir -p ${HOME}/.local/share/DisplayCAL
# start a container named 'displaycal' for ease of referencing later
# add '-it' to the command below if you want to see what is happening (not a lot)
# on the terminal, otherwise the container will run in the background
# can use 'podman logs -f displaycal' to check the log
$ podman run --name displaycal --rm \
-u 0 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
--device /dev/bus/usb/:/dev/bus/usb --net=host \
-v $HOME/.local/share/DisplayCAL:/root/.local/share/DisplayCAL \
f31-displaycal
# attach to running container to launch the 'displaycal' tool
$ podman exec -t -i displaycal bash
systemctl start dbus
displaycal
</div>
The <code>--device</code> ensures that the container can see the USB colour calibration device and the volume mount on <code>/tmp/.X11-unix</code> will allow the container to reach the localhost's X display. Finally, the volume mount of <code>.local/share/DisplayCAL</code> will provide an easy way for you to retrieve your generated <code>.icc</code> profile as this is the location used within the container and mapped to your localhost.<br>
<br>
Whilst <code>DisplayCAL</code> is running, remember to disable screensavers and monitor power saving to ensure the calibration tool can continue to measure correctly.<br>
<br>
Remember, once the container is stopped the data generated will be lost so you must have a way to get the generated <code>.icc</code> out of the container -- of course we can also install <code>scp</code> but this is not necessary as we have shared the local <code>$HOME/.local/share/DisplayCAL</code> with container where the container will have written the profiling information. Once <code>DisplayCAL</code> is complete, quit out of that and then stop the main running container: <code>podman stop displaycal</code>.<br>
<br>
Now with this, we can perform the calibration and pick up the generated <code>.icc</code> and import it into your desktop's colour management (<code>colord</code> etc).
<br>
<h3>Profiling and <code>.icc</code></h3>
From the initial screen ensure the <code>Mode</code> is set correctly for your monitor type (backlight WLED, CCFL etc) which can be <a href=https://www.displayspecifications.com>verified here</a>. The default calibration settings (under <i>Profiling</i> on the main display page) aim to produce a high quality profile but at the expense of time. This can be affected by:
<ul>
<li>profile quality - high vs medium</li>
<li>test chart - auto optimised/adjustable number of patches vs specific test charts/fixed number of patches</li>
</ul>
The difference between high (~95mins) vs medium (~55mins) with auto optimised is noticeable.<br>
<br>
<img src="https://blogger.googleusercontent.com/img/a/AVvXsEjxNmuDXdmDxp8pW5ZEIUJdiP0DSsvm7FaG5ClezMjJ6JqPECe1W22xEnXNh7WAzYhDsGHsGPBp0B_deN1NK3m0wF17E6muuHJzkf5luU9fuJrbvZwvN1Hb5pgZnzPeXSyhhiEyffi4fXExMZspvgzoTbvvDIrk1_eMwmCeIi2vkpSB5bD6xic" width=95%/><br>
Once the profile is generated, it will have an embedded description which typically is the monitor name, date and the test coverage. I have seen that sometimes on multi monitor setups the monitor name is not always correct - this results in an <code>.icc</code> that declares itself as something different which can be very confusing. If you need to modify the generated <code>.icc</code>, you can use <a href=http://sourceforge.net/projects/iccxml>IccXML</a> to convert to XML, update the ASCII description and re-encode back to an <code>.icc</code>.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com1tag:blogger.com,1999:blog-7800204991823004827.post-79182484925168791522021-01-04T17:58:00.011+00:002023-09-25T14:03:32.483+01:00Finally migrating from VMplayer to KVMI have been running VMware, and in particular VMplayer 12.5.x, on my aging Sandybridge (circa 2010) i7-870 desktop for many years and over many iterations of Fedora upgrades there have been fights to get it to continue to work.
<img src=https://live.staticflickr.com/65535/50800647181_2b537a3aa2_h.jpg width=95%>
<a name='more'></a>
I've been aware of the <a href=https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine>linux KVM</a> for a while but I've never got around to migrating my Windows 7 VM across. Whilst F32 and VMplayer 12.5.x work (with the continued thanks of the out of kernel host modules), it was time to finally give this another go.<br>
<br>
Originally the Windows 7 VM was cloned from the original Windows 7 installation that was on the aging i7 desktop <a href=https://whatdoineed2do.blogspot.com/2010/12/virtual-nerding.html#virtualisation>to run with VMplayer</a>. So I have a working sparse 31GB <code>.vmdk</code> disk image that represents the original 80GB disk.<br>
<br>
To migrate over to <code>KVM</code> I first have to get the disk image into a suitable format:<br>
<div class=code>
# convert the image
$ qemu-img convert -f vmdk -O qcow2 Win7x64.vmdk Win7x64.qcow2
# ensure the kernel daemon and API running
$ systemctl start libvirtd
# start the user mgnt tool to create the VM
$ virt-manager &
</div>
After the new disk image was created, <code>virt-manager</code> was used to create a 4 core and 6GB VM for Windows 7 - and to my surprise the start up of the Windows 7 VM was painless! A couple of small admin items remained: removing the <code>vmware tools</code> from the running Win 7 guest OS and installing the <code><a href=https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers>virtio drivers (0.1.173-4)</a></code>:
<ul>
<li>upgrade network driver</li>
<li>upgrade graphics driver</li>
</ul>
by mounting the <code>virtio-win</code> ISO and updating the drivers from Windows' <i>device manager</i>. <a href=https://askubuntu.com/questions/1146441/how-to-properly-configure-virt-manager-qemu-kvm-with-windows-guest>Further information</a>.<br>
<br>
To enable auto display resolution scaling:
<ul>
<li>from the VM window: <code>View -> Scale Display</code> and select: <code>Auto resize VM with window</code></li>
<li>from the <code>virtual machine manager</code>: <code>Edit -> Preferences -> Console</code> and <code>Graphical console scale = Always</code> and <code>Resize guest with window = On</code></li>
</ul>
From within Windows, set the desired Display resolution (to 1920x1080 for example) and the VM window will automatically scale accordingly.<br>
<br>
It's important to note that Windows 7 (and previously WinXP pro) <b>only supports two physical CPUs but unlimited cores</b>. Therefore it's important to set the CPU topology correctly, limiting the number of <i>sockets</i> (physical CPUs seen) to two and increasing the number of <i>cores</i> as appropriate.<br>
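For example, a sketch of the relevant fragment of libvirt's domain XML (editable via <code>virsh edit</code>), presenting four vCPUs as two dual-core sockets:

```xml
<!-- 4 vCPUs presented as 2 sockets x 2 cores, keeping Windows 7
     within its two-physical-CPU limit -->
<vcpu>4</vcpu>
<cpu>
  <topology sockets='2' cores='2' threads='1'/>
</cpu>
```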
<br>
Now that we have the VM running we need to share data between the linux host and the Win7 guest. Whilst there are some discussions of spice drivers etc, there are not a lot of success stories, so for now I went with the tried and tested (although painful) <code>samba</code> server on the linux host.<br>
<br>
Ensure that samba is installed and we update the firewall (SELinux is not running, to remove a bunch of other pain for a single/home box):
<div class=code>
$ dnf install samba
# verify the zones
$ firewall-cmd --get-default-zone
public
$ firewall-cmd --get-active-zone
libvirt
interfaces: virbr0
public
interfaces: wlp0s20f3 bond0
# notice that 'libvirt' is zone we care about
$ firewall-cmd --add-service={samba,samba-client} --permanent --zone=libvirt
$ firewall-cmd --reload
</div>
<br>
Craft the <code>/etc/samba/smb.conf</code><br>
<div class=code>
[global]
#log level = 10
workgroup = WORKGROUP
# for winxp
server min protocol = NT1
lanman auth = yes
ntlm auth = yes
# allow execution of binaries off the server
acl allow execute always = true
security = user
; interfaces = 127.0.0.0/8 192.0.0.0/24 virbrg eth0
; bind interfaces only = yes
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
passdb backend = tdbsam
usershare allow guests = yes
create mask = 0644
directory mask = 0750
[public]
comment = Public
path = /export/public
browseable = yes
read only = no
guest ok = yes
</div>
and the final <code>samba</code> admin steps:
<div class=code>
$ testparm
$ systemctl enable --now samba nmb
# add your user to authenticate and ensure permissions to update/write files to host
$ smbpasswd -a ray
...
</div>
<br>
On the Windows guest side, ensure it can see traffic in its own firewall:<br>
<div class=code>
Control Panel
Windows Firewall
Advanced Settings
Inbound Rules
Locate "File and Printer Sharing (SMB-In)"
right-click and "Enable Rule"
</div>
<br>
And this will be it.<br>
<br>
The Windows 7 VM is a safer bet to move than my older WinXP VM primarily because of activation issues - on Windows 7 I've found that past moves of the VM have complained <i>This copy of Windows is not genuine</i> but it <b>NEVER</b> shuts you down. I wasted an old eBay Windows XP pro license creating a Windows XP VM only to find that moving it between linux boxes was enough to trigger re-activation, which would kick you out. Windows 7 on the other hand is still usable, just with the error message in the corner.<br>
<br>
<h3>Shrinking the qcow2 image</h3>
It is possible to reduce the new disk image further although this may be more useful once you have used the VM for a while and the sparse filesystem is no longer sparse: files deleted will not cause the disk image to shrink. To shrink the image within the Windows VM, <a href=https://docs.microsoft.com/en-gb/sysinternals/downloads/sdelete>download <code>sdelete</code></a> and run as admin on a command prompt. This will zero out the free space to allow for compression:
<div class=code>
c:\sdelete.exe -z c:
</div>
Once complete, shut down the Windows guest and on the linux host create a new size-optimised disk image:
<div class=code>
$ mv Win7x64.qcow2 Win7x64.qcow2_z
$ qemu-img convert -f qcow2 -O qcow2 Win7x64.qcow2_z Win7x64.qcow2
</div>
For my 80GB virtualised disk, of which 19GB is used, the image was sitting at 31GB. Following the zeroing of free space the image was about 86GB (as expected, since we're no longer sparse once everything on the filesystem has been written to) and the converted image was about 25GB.
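The sparse behaviour itself is easy to demonstrate with plain coreutils (nothing qemu specific; a throwaway sketch assuming GNU <code>stat</code> on Linux):

```shell
# A sparse file has a large apparent size but few allocated blocks;
# it is writing data (or zeroing, as sdelete does) that makes it grow on disk.
tmp=$(mktemp -d)
truncate -s 100M "$tmp/sparse.img"           # 100MiB apparent, no data written
apparent=$(stat -c %s "$tmp/sparse.img")     # bytes, as a guest would see it
allocated=$(( $(stat -c %b "$tmp/sparse.img") * 512 ))  # bytes actually on disk
echo "apparent=${apparent} allocated=${allocated}"
rm -rf "$tmp"
```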
<br>
<br>
Finally, what's the point? For me, <code>KVM</code> has been part of the linux kernel since 2.6.20 / circa 2007 so it's fairly mature and works, whereas VMware is a commercial outfit that has already invalidated the software (12.5.x, because of old h/w). Technically though, we're left with these reasons for choosing <code>KVM</code>:<br>
Pros:
<ul>
<li><code>KVM</code> part of mainline kernel</li>
<li>No need for tracking the <a href=https://whatdoineed2do.blogspot.com/2018/10/vmware-player-12x-with-418x-kernels.html>required kernel modules</a> (via <code><a href=https://github.com/mkubecek/vmware-host-modules>vmware-host-modules</a></code>) and rebuilding/install on every new kernel upgrade</li>
</ul>
Cons:
<ul>
<li>No easy dedicated folder sharing between Linux host and Windows guest - need to use samba and take its performance hit, although there is <a href=https://virtio-fs.gitlab.io/howto-windows.html>some work</a> with <a href=https://libvirt.org/kbase/virtiofs.html><code>virtio-fs</code></a> that should offer much better IO performance, but I've struggled to get this to work</li>
</ul>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-10198300411766021362020-12-31T17:39:00.005+00:002021-01-05T12:36:04.730+00:00Fixing ffmpeg and failing hardware NVenc/dec following software upgrade<img src=https://upload.wikimedia.org/wikipedia/commons/thumb/5/5f/FFmpeg_Logo_new.svg/800px-FFmpeg_Logo_new.svg.png width=95%>
If you find that <code>ffmpeg</code> is no longer able to use your NVidia card for hardware decode/encode after an OS or software upgrade, ensure your running X11/NV driver and the CUDA libraries are compatible, otherwise you will get cryptic error messages:
<div class=code>
$ ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda \
-c:v h264_cuvid -i 201231-163836.mkv \
-c:a copy \
-vf scale_npp=-1:720 \
-rc vbr_hq -c:v h264_nvenc \
-b:v 3M -minrate 500k -maxrate 12M \
output.mp4
...
failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
Unable to create device
No device available for decoder: device type cuda needed for codec h264
</div>
<a name='more'></a>
Check that your NV graphics driver and CUDA library versions are matching or compatible. I've not had great success with rpmfusion's CUDA pkg which is in sync with their NVidia driver, so this may apply to manual installs only.<br>
<br>
Verify what CUDA drivers are available at <a href=https://developer.nvidia.com/cuda-toolkit-archive>CUDA toolkit archive</a> and download and install the most recent but compatible toolkit for your graphics driver.<br>
<div class=code>$ wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run</div>
In this example I'm using a 455.80.01 driver already. Whilst this CUDA library states that it works with any driver after 455.32.00, the package also comes with its own driver.<br>
<br>
Once the matching CUDA drivers are installed, your installed <code>ffmpeg</code> will work again with hw accelerated encode and decode.<br>
<br>
If you want to rebuild <code>ffmpeg</code> alongside the system installed ffmpeg, <a href=https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/>detailed instructions are available</a>:<br>
<div class=code>
# get the nv headers
$ git clone https://github.com/FFmpeg/nv-codec-headers && cd nv-codec-headers && sudo make install
# get ffmpeg source
$ git clone https://git.ffmpeg.org/ffmpeg.git && cd ffmpeg && \
PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig/:/usr/local/lib/pkgconfig/ ./configure --prefix=/usr/local --sysconfdir=/etc \
--disable-debug \
--disable-shared --enable-static \
--enable-nonfree --enable-cuda --enable-cuvid --enable-nvenc --enable-libnpp \
--enable-libpulse \
--extra-cflags=-I/usr/local/cuda/include \
--extra-ldflags="-L/usr/local/cuda/lib64 -Wl,-rpath=/usr/local/cuda/lib64" && \
make -j install
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-81116152727515568042020-12-31T11:00:00.010+00:002021-11-26T20:54:36.475+00:00Game Emulation Options on LinuxEmulation for old games, particularly on Linux, can be a bit of a difficult area to navigate. Whilst there are nice standalone setups like <a href=https://retropie.org.uk>retropie</a>, it does need a Raspberry Pi and a display. Those looking to run emulation on their normal OS have the option of programs like <a href=https://www.retroarch.com>Retroarch</a>, <a href=https://www.mamedev.org>mame</a> and <a href=https://neo-source.com>FB Neo</a>. But where to start?<br>
<br>
<img src=https://live.staticflickr.com/65535/50802558143_60c58e3cd2_b.jpg width=95%>
<a name='more'></a>
<h2>Retropie</h2>
This is an SD image that can be loaded for a RPi - it is built on top of RPi's raspbian and uses RetroArch for emulation and EmulationStation to manage the UI. This setup is not easily runnable outside of the RPi, but it works well there.<br>
<h2>RetroArch</h2>
This is a nicely polished interface that uses <i>cores</i> to provide the supported emulators. It provides an ever growing set of preconfigured controllers that can be automatically downloaded, although if your controller is not available, you can set it up locally. The cores are <i>libretro</i> maintained forks, periodically synced with various upstream projects.<br>
<br>
Compiling this can be a little messy as it's modular and split across a number of source repos. The main repos:<br>
<ul><li><a href=https://github.com/libretro/retroarch>RetroArch</a> - the main interface</li>
<li><a href=https://github.com/libretro/libretro-super>libretro-super</a> - a builder for all other cores </li>
</ul>
<div class=code>
$ git clone https://github.com/libretro/retroarch && cd retroarch && git checkout v1.9.0
$ ./configure --enable-libusb --enable-udev && make install
</div>
Once installed, you may want to download <i>assets</i> and change the UI to the <i>rgui</i> interface under <i>Settings</i> since a number of tutorials will show this interface. Also ensure that the joystick input driver is set to <code>udev</code> so pluggable devices are made available to <code>RetroArch</code><br>
<div class=code>
$ git clone https://github.com/libretro/libretro-super && cd libretro-super
$ for i in fbneo snes9x2010 dosbox vice-x64; do
./libretro-fetch.sh $i &&
./libretro-build.sh $i
done
./libretro-install.sh
</div>
Install the <code>foo_libretro.so</code> and corresponding <code>.info</code> to <code>~/.config/retroarch/cores/</code>. For some emulators I've seen problems: the <code>DOSbox</code> and <code>vice</code> emulators seem to fail playing some games even though they work as standalone emulators on the same linux host.<br>
<h3>Recording walkthroughs</h3>
On my machine, I have a NVidia card that is supported by <code>ffmpeg</code> for hardware video encode. This is useful as <code>RetroArch</code> can record video of the games being played via <code>ffmpeg</code>. By default <code>RetroArch</code> uses <code>libx264</code> to encode the video in software. By adding <code>~/.config/retroarch/records_config/nv.cfg</code><br>
<div class=code>
vcodec = h264_nvenc
pix_fmt = yuv420p
#video_<i>ffmpeg option</i> = ...
#video_preset = llhq
#video_profile = main
#video_rc = vbr
</div>
To verify what additional options are available: <code>ffmpeg -h encoder=h264_nvenc</code><br>
<br>
On the <i>Settings -> Record</i> ensure that <i>Use GPU</i> is set to false - with this enabled it captures the post processed frame but this can slow down the actual game, even though the recorded output plays <i>normally</i><br>
<h3>Controllers</h3>
<code>RetroArch</code> has a notion of a virtual controller - this is what the internal keys are mapped to but real controllers need their inputs mapped to the virtual controller. The controller inputs are community contributed and can be automatically downloaded but new items awaiting PR in its <a href=https://github.com/libretro/../retroarch-joypad-autoconfig>repo</a> will not be available.<br>
<br>
<code>RetroArch</code> will decide which controllers map to player 1 and 2 etc and depends on plug-in order and also USB name if both are inserted on startup - since you can not map the <code>/dev/input/js*</code> device nodes via <code>udev</code> for persistent names, you have to use the <i>Settings -> Input -> Port 1 Controls -> Device Index</i> to toggle to the joystick required for player 1 each time the process starts. Also note that it appears that even with multiple joysticks connected <a href=https://github.com/libretro/RetroArch/issues/3337>only Player 1 can control the menu via the controller</a> although keyboard and mouse inputs can also control menu.<br>
<h3>Adding ROMs</h3>
Once you add your ROMs, say into <code>/export/roms/fba</code>, they need to be discovered by <code>RetroArch</code> - the most repeatable way is documented <a href=https://neo-source.com/index.php?topic=3725.0>here</a> and reproduced:
<div class=code>
1. Open RetroArch
2. Update Databases in "Main Menu > Online Updater" (not 100% sure this one is required)
3. Go to "Import Content > Manual Scan"
4. Fill it :
- Content Directory = /export/rom
- System Name = choose "FBNeo - Arcade Games"
- Default Core = choose "Arcade (FinalBurn Neo)"
- Arcade DAT File = <a href=https://github.com/libretro/FBNeo/blob/master/dats/FinalBurn%20Neo%20(ClrMame%20Pro%20XML,%20Arcade%20only).dat?raw=true>download and use this file</a>
5. The remaining can be left at default, but consider turning on "Overwrite Existing Playlist" if you are updating an existing list
6. Select "Start Scan"
7. Go back, you should have a FB icon with your new playlist inside
</div>
<h2>FB Neo</h2>
This is the upstream for the FBNeo RetroArch core and is available as <a href=https://github.com/finalburnneo/FBNeo>FBNeo</a>. Whilst it's an option to use this standalone, I've used the <code>RetroArch</code> version for its ease of use. The Linux <code>SDL2</code> build isn't intuitive, so I quickly lost interest given the easier <code>RetroArch</code> alternative.<br>
<br>
<a href=https://github.com/finalburnneo/FBNeo/releases/tag/v1.0.0.0>Pre-built binaries</a> are available - for Windows it's a zip that can be run standalone, but I wasn't able to persist configurations.<br>
<h2><code>mame</code></h2>
This is one of the best old-school emulators, notable for its component-level accuracy - its interface is minimal and functional, although as of version 0.177 it became a little more picky about ROMs, with many needing to be redumped, which may have some implications for your old game cartridges.<br>
<h3>Compiling subset of arcade support</h3>
<code>mame</code> is a monolithic binary with compiled-in support for different arcade boards and games. Compiling this can take some time and the result can be relatively large when you don't need all the functionality, so we can build a subset:<br>
<div class=code>
$ git clone https://github.com/mamedev/mame && cd mame && \
make REGENIE=1 SOURCES=src/mame/drivers/cps2.cpp,src/mame/drivers/cps3.cpp,src/mame/drivers/cps1.cpp,src/mame/drivers/capcom.cpp,src/mame/drivers/snes.cpp && \
make install
</div>
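For longer driver lists, the comma-separated <code>SOURCES</code> argument can be generated from a plain list of driver names - a small sketch assuming the <code>src/mame/drivers/*.cpp</code> layout used above:

```shell
# build the SOURCES argument for a subset mame build from a list of driver names
drivers="cps1 cps2 cps3 snes"
SOURCES=$(for d in $drivers; do printf 'src/mame/drivers/%s.cpp,' "$d"; done)
SOURCES=${SOURCES%,}   # strip the trailing comma
echo "$SOURCES"
```

This expands to <code>src/mame/drivers/cps1.cpp,src/mame/drivers/cps2.cpp,...</code>, ready to pass to <code>make REGENIE=1 SOURCES=$SOURCES</code>.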
<h3>Setting up multiple controllers</h3>
<code>mame</code> can handle multiple configured controllers that may not always be plugged in, via its <code>controller</code> configuration. Firstly, you will need to identify each controller's <i>GUID</i> or <i>device id</i>:<br>
<div class=code>
$ mame -v
...
Joystick: Start initialization
Input: Adding joystick #0: MY-POWERCO.,LTD.USBJoystick (device id: 030000008f0e00000300000010010000)
Joystick: MY-POWER CO.,LTD. USB Joystick [GUID 030000008f0e00000300000010010000]
Joystick: ... 5 axes, 12 buttons 1 hats 0 balls
Joystick: ... Physical id 0 mapped to logical id 1
Joystick: ... Has haptic capability
Joystick: End initialization
..
</div>
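Rather than eyeballing the verbose log, the GUIDs can be pulled out with <code>sed</code> - a sketch against the log format shown above:

```shell
# extract the controller GUID from a 'mame -v' log line (format as shown above)
line='Joystick: MY-POWER CO.,LTD. USB Joystick [GUID 030000008f0e00000300000010010000]'
guid=$(printf '%s\n' "$line" | sed -n 's/.*\[GUID \([0-9a-f]*\)\].*/\1/p')
echo "$guid"
# in practice, feed the real log: mame -v 2>&1 | sed -n 's/.*\[GUID \([0-9a-f]*\)\].*/\1/p'
```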
Use the normal means to set the controls for each controller, then create a consolidated <code>controller</code> file that is referenced by these <code>~/.mame/mame.ini</code> settings:
<div class=code>
ctrlrpath /usr/share/mame/ctrlr
ctrlr default
</div>
and finally the <code>/usr/share/mame/ctrlr/default.cfg</code> itself. The following will always map certain controllers to certain ports - i.e. the Hori stick is always mapped to player 1.
<div class=code>
<mameconfig version="10">
<system name="default">
<input>
<!-- PS2 type generic controller -->
<mapdevice device="030000008f0e00000300000010010000" controller="JOYCODE_2">
</mapdevice>
<!-- HORICO.,LTD.RealArcadeProS -->
<mapdevice device="030000000d0f0000aa00000011010000" controller="JOYCODE_1">
</mapdevice>
<!-- Hori NSwitch fighting stick mini 2 aka GenericX-Boxpad -->
<mapdevice device="030000000d0f00003701000013010000" controller="JOYCODE_3">
</mapdevice>
<port type="P1_JOYSTICK_UP">
<newseq type="standard">
JOYCODE_1_HAT1UP
</newseq>
</port>
<port type="P1_JOYSTICK_DOWN">
<newseq type="standard">
JOYCODE_1_HAT1DOWN
</newseq>
</port>
<port type="P1_JOYSTICK_LEFT">
<newseq type="standard">
JOYCODE_1_HAT1LEFT
</newseq>
</port>
<port type="P1_JOYSTICK_RIGHT">
<newseq type="standard">
JOYCODE_1_HAT1RIGHT
</newseq>
</port>
<port type="P1_BUTTON1">
<newseq type="standard">
JOYCODE_1_BUTTON1
</newseq>
</port>
<port type="P1_BUTTON2">
<newseq type="standard">
JOYCODE_1_BUTTON4
</newseq>
</port>
<port type="P1_BUTTON3">
<newseq type="standard">
JOYCODE_1_BUTTON6
</newseq>
</port>
<port type="P1_BUTTON4">
<newseq type="standard">
JOYCODE_1_BUTTON2
</newseq>
</port>
<port type="P1_BUTTON5">
<newseq type="standard">
JOYCODE_1_BUTTON3
</newseq>
</port>
<port type="P1_BUTTON6">
<newseq type="standard">
JOYCODE_1_BUTTON8
</newseq>
</port>
<port type="P1_START">
<newseq type="standard">
JOYCODE_1_BUTTON10
</newseq>
</port>
<port type="P2_JOYSTICK_UP">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_JOYSTICK_DOWN">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_JOYSTICK_LEFT">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_JOYSTICK_RIGHT">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON1">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON2">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON3">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON4">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON5">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="P2_BUTTON6">
<newseq type="standard">
NONE
</newseq>
</port>
<port type="START1">
<newseq type="standard">
JOYCODE_1_BUTTON10
</newseq>
</port>
</input>
</system>
</mameconfig>
</div>
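A quick way to review which GUID is pinned to which player is to pull the <code>mapdevice</code> entries back out of the ctrlr file - a sketch whose <code>sed</code> pattern assumes the attribute layout above (sample entries inlined here for illustration):

```shell
# summarise GUID -> player mappings from a mame ctrlr cfg
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<mapdevice device="030000008f0e00000300000010010000" controller="JOYCODE_2">
<mapdevice device="030000000d0f0000aa00000011010000" controller="JOYCODE_1">
EOF
sed -n 's/.*device="\([0-9a-f]*\)" controller="JOYCODE_\([0-9]*\)".*/player \2: \1/p' "$cfg"
# against the real file: run the same sed over /usr/share/mame/ctrlr/default.cfg
```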
To use the controller config, run <code>mame -ctrlr default</code>, which will search for the <i>default</i> configuration in the specified <i>controller</i> directories.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-63961599993659504102020-12-18T12:03:00.004+00:002021-01-05T12:38:29.562+00:00Quick Nintendo Switch joystick comparison: Hori RAP and Fighting Stick Mini<img src=https://live.staticflickr.com/65535/50731965128_c9389a9fa2_h.jpg width=95%>
Whilst revisiting Street Fighter on PC and Nintendo Switch I wanted to get a joystick. Hori make 2 joysticks, the Real Arcade Pro V (aka RAP V) and the Fighting Stick Mini, that meet this need - but which is more suitable?<br>
<a name='more'></a>
First for the headline - this is what the two Hori sticks look like, and the size difference is very noticeable. This is not intended to be an in-depth review since both these sticks have been available for some time and there are better resources for that.<br>
<img src=https://live.staticflickr.com/65535/50732793827_9ba1e53f11_h.jpg width=95%>
<h3>Fighting Stick Mini</h3>
For an adult, this is small. It's still usable, but I feel only for short periods, as you will find your hands squeezed quite close together in a rather unnatural position. The button spacing is obviously more compact but not overly annoying. Resting this on your lap is troublesome as it only has rubber feet in the corners of the underside, which means you have to press your legs together and fight the underside sliding around your lap.<br>
<img src=https://live.staticflickr.com/65535/50731999858_7dce211f21_h.jpg width=95%>
The USB cable is fixed and not detachable.<br>
<br>
The hardware is reportedly non-branded but the feel of the stick and the buttons isn't too bad.<br>
<h3>RAP</h3>
Not having owned other, more expensive arcade sticks before, this thing feels like a monster - but in a good way.<br>
<br>
The stick and button layout is much more comfortable and the spacing of the buttons is much more natural. The ball top is a little larger but not noticeably taller than the Fighting Stick Mini's. Using this on your lap is fine since there are rubber mats that will sit on most people's legs.<br>
<br>
<img src=https://live.staticflickr.com/65535/50731965303_6e9b1c661b_h.jpg width=30%><img src=https://live.staticflickr.com/65535/50732697131_0bbdc190d4_h.jpg width=30%><img src=https://live.staticflickr.com/65535/50732697146_ba197bea9b_h.jpg width=30%><br>
The USB cable is fixed and not detachable although it can be hidden away in a compartment when the stick is not in use.<br>
<br>
The hardware is Hayabusa branded (Hori's own brand) and can be replaced with Sanwa parts, like their JLF stick. The buttons are "soft touch" and require little force to activate, which seems useful for fighting games.<br>
<br>
In the end, I much prefer the RAP for its size and layout, but then again it is 2.5-3x the price of the Fighting Stick Mini.
<img src=https://live.staticflickr.com/65535/50732697101_690ea4ab91_h.jpg width=95%>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-58771020974249361932020-07-13T11:51:00.000+01:002020-07-16T09:40:03.817+01:00Topping E30: another DAC for linux/RPiThe Topping brand has put out various well-received budget <i>Chi-fi</i> units over the last few years - of interest to me are its various USB-enabled DACs such as the D90, D50s and so forth. The most recent addition as of Q2 2020, the Topping E30, is a hi-res DAC with an all-metal case, requiring 5v ~1A via a 2.1mm barrel jack. The E30 has <i>traditional</i> inputs and outputs - toslink/coax and USB digital input, RCA out - and crucially for me, without the bloat of a headphone amp or bluetooth receiver: a pure DAC.<br />
<br />
<a href=https://live.staticflickr.com/65535/50116647151_80170eafc4_k.jpg><img src=https://live.staticflickr.com/65535/50116647151_1a4e801cb0_b.jpg width=95%></a><br />
<br />
How does it work with Linux and an RPi music server running <code>forked-daapd</code>?<br />
<a name='more'></a><br />
A few more words on the <a href=http://www.tpdz.net/productinfo/434825.html>specification of the E30</a>:<br />
<ul><li>USB - supports up to 32bit/44.1-768Khz PCM, <a href=https://en.wikipedia.org/wiki/Direct_Stream_Digital>DSD</a> (native/DoP) 64-512 / 64-256</li>
<li>optical/coaxial - supports 16-24bit/44.1-192Khz PCM</li>
</ul>via an <a href=https://www.xmos.com/download/XU208-256-TQ64-Datasheet(1.16).pdf>XU208 USB receiver</a> and an <a href=https://www.akm.com/kr/ko/products/audio/audio-dac/ak4493eq/>AK4493 DAC chip</a>.<br />
<br />
Connectivity to my Linux boxes will be via USB, and it's important to note the supported USB details: <i>up to 32bit</i>.<br />
<br />
<h2>On Linux</h2>On attaching to my Linux hosts, 4.9.x and 5.1.x kernels, the E30 is recognised.<br />
<div class="code">$ dmesg<br />
...<br />
[188876.320561] usb 1-2: new high-speed USB device number 46 using ehci-pci<br />
[188876.336658] usb 1-2: New USB device found, idVendor=152a, idProduct=8750, bcdDevice= 1.08<br />
[188876.336673] usb 1-2: New USB device strings: Mfr=1, Product=3, SerialNumber=0<br />
[188876.336682] usb 1-2: Product: E30<br />
[188876.336691] usb 1-2: Manufacturer: Topping<br />
[188877.507922] usb 1-2: 1:3 : unsupported format bits 0x100000000<br />
[188877.527899] usbcore: registered new interface driver snd-usb-audio<br />
<br />
$ lsusb<br />
...<br />
Bus 001 Device 048: ID 152a:8750 Thesycon Systemsoftware & Consulting GmbH E30<br />
</div>The serial number is not encoded, but the box notes that this is a 2006xxx unit which fixes, both in hardware and firmware, a <a href=https://www.audiosciencereview.com/forum/index.php?threads/topping-e30-polarity-discussion.13789/>reverse polarity issue</a> that was in the early batches (serial# below 2004xxxx).<br />
<br />
For basic ALSA validation, we see that <code>aplay -l</code> and <code>aplay -L</code> give us the card number, in this case card#1, and the device alias <code>hw:CARD=E30</code>. We do notice that the <code>mixer name</code> is a little annoying, with a trailing whitespace.<br />
<br />
<div class="code"># device shows as second alsa soundcard, 'hw:1' or 'hw:CARD=E30'<br />
$ aplay -l<br />
**** List of PLAYBACK Hardware Devices ****<br />
...<br />
card 1: E30 [E30], device 0: USB Audio [USB Audio]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
<br />
$ aplay -L<br />
...<br />
default:CARD=E30<br />
E30, USB Audio<br />
Default Audio Device<br />
...<br />
hw:CARD=E30,DEV=0<br />
E30, USB Audio<br />
Direct hardware device without any conversions<br />
plughw:CARD=E30,DEV=0<br />
E30, USB Audio<br />
Hardware device with all software conversions<br />
<br />
# notice the mixer name and trailing whitespace<br />
$ amixer -c 1<br />
Simple mixer control 'E30 ',0<br />
Capabilities: pvolume pvolume-joined pswitch pswitch-joined<br />
Playback channels: Mono<br />
Limits: Playback 0 - 127<br />
Mono: Playback 127 [100%] [0.00dB] [on]<br />
</div>Whilst this is the only additional soundcard added to this Linux box, it's nevertheless safer to use the ALSA device alias when referring to the card, to avoid differing <code>hw:???</code> assignments.<br />
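If a script does need the numeric card index (e.g. for <code>/proc/asound</code> paths), it can be derived from <code>aplay -l</code> output rather than hard-coded - a sketch against the sample line above:

```shell
# derive the E30's card number from an 'aplay -l' line (format as shown above)
aplay_line='card 1: E30 [E30], device 0: USB Audio [USB Audio]'
card=$(printf '%s\n' "$aplay_line" | sed -n 's/^card \([0-9]*\): E30 .*/\1/p')
echo "hw:${card}"
# in practice: card=$(aplay -l | sed -n 's/^card \([0-9]*\): E30 .*/\1/p')
```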
<br />
When looking at the supported bit rates and formats, we notice that the device <b>only</b> supports 32bit, 44.1-768Khz - note the unit spec above: <i>up to 32bit</i>. Now it's not clear whether this is a Linux driver issue, whether this is simply how the device works, or whether it makes any difference. Certainly my AlloBoss DAC natively supports S16, S24 and S32.<br />
<div class="code">$ cat /proc/asound/E30/stream0<br />
Topping E30 at usb-0000:01:00.0-1.4, high speed : USB Audio<br />
<br />
Playback:<br />
Status: Stop<br />
Interface 1<br />
Altset 1<br />
Format: S32_LE<br />
Channels: 2<br />
Endpoint: 1 OUT (ASYNC)<br />
Rates: 44100, 48000, 88200, 96000, 176400, 192000, 352800, 384000, 705600, 768000<br />
Data packet interval: 125 us<br />
Interface 1<br />
Altset 2<br />
Format: S32_LE<br />
Channels: 2<br />
Endpoint: 1 OUT (ASYNC)<br />
Rates: 44100, 48000, 88200, 96000, 176400, 192000, 352800, 384000, 705600, 768000<br />
Data packet interval: 125 us<br />
Interface 1<br />
Altset 3<br />
Format: SPECIAL DSD_U32_BE<br />
Channels: 2<br />
Endpoint: 1 OUT (ASYNC)<br />
Rates: 44100, 48000, 88200, 96000, 176400, 192000, 352800, 384000, 705600, 768000<br />
Data packet interval: 125 us<br />
</div><br />
For comparison, this is a cheap USB DAC on another RPi that also natively supports 16 and 24bit.<br />
<div class="code">C-Media Electronics Inc. USB Advanced Audio Device at usb-20980000.usb-1.3, ful : USB Audio<br />
<br />
Playback:<br />
Status: Stop<br />
Interface 1<br />
Altset 1<br />
Format: S16_LE<br />
Channels: 2<br />
Endpoint: 1 OUT (SYNC)<br />
Rates: 44100, 48000<br />
Interface 1<br />
Altset 2<br />
Format: S24_3LE<br />
Channels: 2<br />
Endpoint: 1 OUT (SYNC)<br />
Rates: 44100, 48000<br />
Interface 1<br />
Altset 3<br />
Format: S16_LE<br />
Channels: 2<br />
Endpoint: 1 OUT (SYNC)<br />
Rates: 88200, 96000<br />
Interface 1<br />
Altset 4<br />
Format: S24_3LE<br />
Channels: 2<br />
Endpoint: 1 OUT (SYNC)<br />
Rates: 88200, 96000<br />
...<br />
</div>What this means in practice is that we cannot use the ALSA <code>hw:E30</code> (direct h/w access) but rather <code>plughw:E30</code>, which <a href=https://alsa-project.org/wiki/DeviceNames#The_.22plug:.22_prefix>implements sample format and potential sample rate conversion</a>.<br />
<br />
Furthermore, the E30 does not support concurrent playback, which is potentially an issue for <a href=https://github.com/ejurgensen/forked-daapd/issues/742#issuecomment-504364075><code>forked-daapd</code></a>.<br />
<br />
<br />
<div class="code"># only support S32_LE with direct h/w access<br />
$ sox -n -c 2 -r 44100 -b 32 -C 128 /tmp/sine441.wav synth 10 sin 500-100 fade h 1 0 1<br />
<br />
$ aplay -v -Dhw:CARD=E30 /tmp/sine441.wav &<br />
Playing WAVE '/tmp/sine441.wav' : Signed 32 bit Little Endian, Rate 44100 Hz, Stereo<br />
Hardware PCM card 1 'E30' device 0 subdevice 0<br />
Its setup is:<br />
stream : PLAYBACK<br />
access : RW_INTERLEAVED<br />
format : S32_LE<br />
subformat : STD<br />
channels : 2<br />
rate : 44100<br />
exact rate : 44100 (44100/1)<br />
msbits : 32<br />
buffer_size : 22050<br />
period_size : 5513<br />
period_time : 125011<br />
tstamp_mode : NONE<br />
tstamp_type : MONOTONIC<br />
period_step : 1<br />
avail_min : 5513<br />
period_event : 0<br />
start_threshold : 22050<br />
stop_threshold : 22050<br />
silence_threshold: 0<br />
silence_size : 0<br />
boundary : 6206523236469964800<br />
appl_ptr : 0<br />
hw_ptr : 0<br />
<br />
# try concurrently<br />
$ aplay -v -Dhw:CARD=E30 /tmp/sine441.wav<br />
aplay: main:828: audio open error: Device or resource busy<br />
</div><br />
<h2>Power Saving</h2>When optical or coaxial is plugged into the E30, the unit can auto-wakeup and shut down once no signal is detected, showing <code>Err</code> for a short while before going into power save. A minor tweak to the <code>udev</code> rules is required to achieve this with USB connected to Linux.<br />
<div class="code">$ sudo vi /etc/udev/rules.d/topping-e30.rules<br />
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="152a", ATTR{idProduct}=="8750", TEST=="power/control", ATTR{power/control}="auto"<br />
<br />
$ sudo udevadm control --reload-rules<br />
</div><br />
<h2>forked-daapd</h2>Making the E30 available to <code>forked-daapd</code> is relatively simple, adding the following to the config file and restarting:<br />
<div class="code">alsa "plughw:E30" {<br />
nickname = "Topping E30"<br />
mixer = "E30 "<br />
mixer_device = "hw:E30"<br />
}<br />
</div>Once <code>forked-daapd</code> is restarted we can select the E30 as an additional output source.<br />
<h3>Concurrent Playback</h3>The E30 does not support concurrent playback - with <code>forked-daapd</code> this currently gives us a small problem in that audio at the end of a track can be truncated/dropped. To resolve this, we can create an ALSA software mixing (<code>dmix</code>) device for the server, although this does have the limitation of a fixed sample rate. However, we can add this as an additional output to allow us to dynamically choose between the E30 outputs.<br />
<div class="code"># /etc/asound.conf<br />
pcm.E30 {<br />
type plug<br />
slave.pcm "E30dmix"<br />
hint.description "Topping E30 DAC s/w dmix enabled device"<br />
}<br />
<br />
pcm.E30dmix {<br />
type dmix<br />
ipc_key 1025<br />
ipc_key_add_uid false<br />
ipc_perm 0666<br />
slave {<br />
pcm "hw:E30"<br />
period_time 0<br />
period_size 4096<br />
buffer_size 22052<br />
rate 44100<br />
}<br />
hint.description "Topping E30 DAC s/w dmix device"<br />
}<br />
<br />
ctl.E30mix {<br />
type hw<br />
card "E30"<br />
}<br />
<br />
# /etc/forked-daapd.conf<br />
alsa "E30" {<br />
nickname = "Topping E30 concurrent"<br />
mixer = "E30 "<br />
mixer_device = "hw:E30"<br />
}<br />
</div><!--
Samsa
https://www.audiosciencereview.com/forum/index.php?members/samsa.14481/#about
https://www.audiosciencereview.com/forum/index.php?threads/topping-e30-dac-review.12119/page-59#post-406812
https://www.audiosciencereview.com/forum/index.php?threads/topping-e30-dac-review.12119/post-407102
https://github.com/numbqq/USB-Audio-2.0-Software-v6.1/tree/master/sc_usb_audio/module_dfu/host/xmos_dfu_linux
VID = 0x152a, PID = 0x8750, BCDDevice: 0x106
diff --git a/sc_usb_audio/module_dfu/host/xmos_dfu_linux/xmosdfu.cpp b/sc_usb_audio/module_dfu/host/xmos_dfu_linux/xmosdfu.cpp
index f417185..e4d294f 100644
--- a/sc_usb_audio/module_dfu/host/xmos_dfu_linux/xmosdfu.cpp
+++ b/sc_usb_audio/module_dfu/host/xmos_dfu_linux/xmosdfu.cpp
@@ -4,9 +4,9 @@
#include <libusb-1.0/libusb.h><br />
<br />
/* the device's vendor and product id */<br />
-#define XMOS_VID 0x20b1<br />
+#define XMOS_VID 0x152a<br />
<br />
-#define XMOS_XCORE_AUDIO_AUDIO2_PID 0x3066<br />
+#define XMOS_XCORE_AUDIO_AUDIO2_PID 0x8750<br />
#define XMOS_L1_AUDIO2_PID 0x0002<br />
#define XMOS_L1_AUDIO1_PID 0x0003<br />
#define XMOS_L2_AUDIO2_PID 0x0004<br />
@@ -112,19 +112,19 @@ static int find_xmos_device(unsigned int id, unsigned int list)<br />
}<br />
<br />
int xmos_dfu_resetdevice(void) {<br />
- libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETDEVICE, 0, 0, NULL, 0, 0);<br />
+ return libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETDEVICE, 0, 0, NULL, 0, 0);<br />
}<br />
<br />
int xmos_dfu_revertfactory(void) {<br />
- libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_REVERTFACTORY, 0, 0, NULL, 0, 0);<br />
+ return libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_REVERTFACTORY, 0, 0, NULL, 0, 0);<br />
}<br />
<br />
int xmos_dfu_resetintodfu(unsigned int interface) {<br />
- libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETINTODFU, 0, interface, NULL, 0, 0);<br />
+ return libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETINTODFU, 0, interface, NULL, 0, 0);<br />
}<br />
<br />
int xmos_dfu_resetfromdfu(unsigned int interface) {<br />
- libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETFROMDFU, 0, interface, NULL, 0, 0);<br />
+ return libusb_control_transfer(devh, DFU_REQUEST_TO_DEV, XMOS_DFU_RESETFROMDFU, 0, interface, NULL, 0, 0);<br />
}<br />
<br />
int dfu_detach(unsigned int interface, unsigned int timeout) {<br />
@@ -273,6 +273,7 @@ int read_dfu_image(char *file) {<br />
}<br />
<br />
fclose(outFile);<br />
+ return 0;<br />
}<br />
<br />
int main(int argc, char **argv) {<br />
<br />
<br />
http://www.tpdz.net/newsinfo/391293.html<br />
<br />
$ sudo toneboard_dfu_tool --upload current-e30.fw<br />
$ sudo toneboard_dfu_tool --downaload new-e30.fw<br />
--><br />
Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-86117737996830914452020-04-26T13:02:00.019+01:002021-01-06T14:05:33.859+00:00Poor sound with Dell onboard Conexant CX20641 / HDA intel soundcardSome Dell machines have a soundcard that is essentially wired up differently to how some software expects. This can be evident when using the line-out of the machine to go into speakers and finding that there is very poor bass output but can manifest itself in other soundcard jacks not behaving as expected. This is observed during enforced homeschooling on an old Dell Optiplex 390 running Fedora 32.<br>
<br>
Verifying the soundcard is using a similar troublesome chipset:<br>
<div class="code">
$ dmesg | grep snd_hda_codec_conexant
[ 19.026492] snd_hda_codec_conexant hdaudioC0D2: CX20641: BIOS auto-probing.
$ lspci -v -d 8086:1c20
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
DeviceName: Onboard Audio
Subsystem: Dell Device 04f5
Flags: bus master, fast devsel, latency 0, IRQ 36
Memory at e4c30000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
</div>
<a name='more'></a>
The solution is detailed in a <a href=https://forums.gentoo.org/viewtopic-p-7475926.html#7475926>gentoo forum post</a> but it's certainly not obvious - it involves using a tool, <code>hda-jack-retask</code>, to generate a runtime patch that reassigns pins.<br>
<blockquote>
https://forums.gentoo.org/viewtopic-p-7475926.html#7475926
using <code>hdajackretask</code> the following re-assignments give proper 2 channel audio to the rear line out:
<pre>
Green Headphone, Front side Headphone
Pink Mic, Rear side Line In
Green Line Out, Rear side Internal Speaker
Blue Line In, Rear side Internal Mic
</pre>
</blockquote>
The following commands solved it for this Optiplex:<br>
<div class="code">
$ echo "options snd-hda-intel patch=hda-jack-retask.fw,hda-jack-retask.fw,hda-jack-retask.fw,hda-jack-retask.fw" > /etc/modprobe.d/hda-jack-retask.conf
$ cat > /lib/firmware/hda-jack-retask.fw << EOF
[codec]
0x14f150a1 0x102804f5 2
[pincfg]
0x18 0x40f001f0
0x19 0x0321403f
0x1a 0x02a19020
0x1b 0x0181304f
0x1c 0x90170150
0x1d 0x90a60160
0x1e 0x40f001f0
0x1f 0x40f001f0
0x20 0x40f001f0
0x21 0x40f001f0
0x26 0x40f001ff
EOF
</div>
On the next boot, looking at the <code>snd_hda</code> driver messages, we can see the following and line out now works as expected.
<div class=code>
$ dmesg | grep snd_hda
[ 16.599294] snd_hda_intel 0000:00:1b.0: Applying patch firmware 'hda-jack-retask.fw'
[ 16.610893] snd_hda_intel 0000:01:00.1: Disabling MSI
[ 16.621360] snd_hda_intel 0000:01:00.1: Handle vga_switcheroo audio client
[ 16.631347] snd_hda_intel 0000:01:00.1: Applying patch firmware 'hda-jack-retask.fw'
[ 17.654795] snd_hda_codec_conexant hdaudioC1D2: CX20641: BIOS auto-probing.
[ 17.661994] snd_hda_codec_conexant hdaudioC1D2: autoconfig for CX20641: line_outs=1 (0x1c/0x0/0x0/0x0/0x0) type:speaker
[ 17.668660] snd_hda_codec_conexant hdaudioC1D2: speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[ 17.675134] snd_hda_codec_conexant hdaudioC1D2: hp_outs=1 (0x19/0x0/0x0/0x0/0x0)
[ 17.681382] snd_hda_codec_conexant hdaudioC1D2: mono: mono_out=0x0
[ 17.687498] snd_hda_codec_conexant hdaudioC1D2: inputs:
[ 17.693397] snd_hda_codec_conexant hdaudioC1D2: Front Mic=0x1a
[ 17.699094] snd_hda_codec_conexant hdaudioC1D2: Internal Mic=0x1d
[ 17.704658] snd_hda_codec_conexant hdaudioC1D2: Line=0x1b
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com1tag:blogger.com,1999:blog-7800204991823004827.post-11295020151904126742019-07-27T20:00:00.001+01:002019-07-27T20:00:30.108+01:00New Fedora, new VMplayer startup problemsWith older machines we run the risk of unsupported software hitting compatibility problems with each OS upgrade. A recent example of this is running <code>vmplayer 12.5.9</code> on Fedora 30, which <a href=https://whatdoineed2do.blogspot.com/2018/10/vmware-player-12x-with-418x-kernels.html>ran fine on Fedora 26 .. 28 albeit needing a couple of tweaks</a><br />
<br />
But what's the new problems on Fedora 30 and what are the workarounds?<br />
<br />
<a name='more'></a><br />
As noted, sometimes we simply can't upgrade legacy software because of dropped hardware support - in this case VMware stopped supporting older SandyBridge Intel CPUs after <code>VMplayer 12.5.9</code>, and together with a forced Fedora upgrade (harddisk head crash) this means we are now forced to use <code>VMplayer</code> on the latest Fedora release.<br />
<br />
After grabbing the 12.5.9 bundle from VMware (see link above) we follow the workaround instructions for getting the <code>VMplayer</code> kernel modules to compile and starting the relevant service:<br />
<div class="code">$ git clone https://github.com/mkubecek/vmware-host-modules<br />
$ cd vmware-host-modules<br />
$ git checkout workstation-12.5.9<br />
$ sudo make install<br />
$ sudo systemctl restart vmware<br />
$ sudo systemctl status vmware<br />
● vmware.service - SYSV: This service starts and stops VMware services<br />
Loaded: loaded (/etc/rc.d/init.d/vmware; generated)<br />
Active: active (running) since Sat 2019-07-27 18:26:43 BST; 5min ago<br />
Docs: man:systemd-sysv-generator(8)<br />
Process: 22490 ExecStart=/etc/rc.d/init.d/vmware start (code=exited, status=0/SUCCESS)<br />
Tasks: 10 (limit: 4915)<br />
Memory: 22.9M<br />
CGroup: /system.slice/vmware.service<br />
├─22581 /usr/lib/vmware/bin/vmware-vmblock-fuse -o subtype=vmware-vmblock,default_permissions,allow_other /var/run/vmblock-fuse<br />
├─22611 /usr/bin/vmnet-bridge -s 6 -d /var/run/vmnet-bridge-0.pid -n 0<br />
├─22619 /usr/bin/vmnet-netifup -s 6 -d /var/run/vmnet-netifup-vmnet1.pid /dev/vmnet1 vmnet1<br />
├─22625 /usr/bin/vmnet-dhcpd -s 6 -cf /etc/vmware/vmnet1/dhcpd/dhcpd.conf -lf /etc/vmware/vmnet1/dhcpd/dhcpd.leases -pf /var/run/vmnet-dhc><br />
├─22628 /usr/bin/vmnet-natd -s 6 -m /etc/vmware/vmnet8/nat.mac -c /etc/vmware/vmnet8/nat/nat.conf<br />
├─22630 /usr/bin/vmnet-netifup -s 6 -d /var/run/vmnet-netifup-vmnet8.pid /dev/vmnet8 vmnet8<br />
├─22636 /usr/bin/vmnet-dhcpd -s 6 -cf /etc/vmware/vmnet8/dhcpd/dhcpd.conf -lf /etc/vmware/vmnet8/dhcpd/dhcpd.leases -pf /var/run/vmnet-dhc><br />
└─22664 /usr/sbin/vmware-authdlauncher<br />
$ sudo systemctl enable vmware<br />
</div><br />
right now, this looks fine so to start the player:<br />
<div class="code">$ vmplayer<br />
/usr/lib/vmware/bin/vmware-modconfig: Relink `/lib64/libssh.so.4' with `/lib64/librt.so.1' for IFUNC symbol `clock_gettime'<br />
</div><br />
This is less good.<br />
<br />
However, this error is actually misleading as the underlying problem has NOTHING to do with <code>libssh</code> - some bug reports/references online refer to incorrect installation of the kernel modules, but that isn't the culprit either.<br />
<br />
The solution for me was actually another previously documented 12.4 (version ??) fix related to the shared libraries shipped with <code>vmplayer</code>. We can verify that the VMware utils are failing to find the correct libraries (and then core dumping with <code>Bad RIP code</code>, as shown in the system logs, which is equally unhelpful) by running <code>LD_DEBUG=libs vmplayer</code> and observing which libraries the linker fails to resolve or find.<br />
<br />
The real fix is to replace the following <code>vmplayer</code> libraries with the system equivalents:<br />
<div class=code>$ cd /usr/lib/vmware/lib<br />
$ for i in \<br />
libz.so.1 libexpat.so.1 libfontconfig.so.1 libfreetype.so.6<br />
do<br />
( cd $i && mv $i $i.orig && ln -s /lib64/$i )<br />
done<br />
</div><br />
Following this we should be able to start <code>vmplayer</code> as before.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-37939946628735230442019-07-17T11:19:00.002+01:002019-07-18T11:13:53.528+01:00VMWare Horizon >4.8 on linux fails to start: undefined symbolsVMWare Horizon 4.8 works well with Fedora but when your service forces you to move to a later version you will find Horizon client 4.9 and above to fail to start. What to do?<br />
<a name='more'></a><br />
This is frustrating as clicking on the desktop icon does nothing - when you figure out that the underlying script is <code>/usr/bin/vmware-view</code> and run it directly, you will find that it complains about missing symbols:<br />
<div class=code>/usr/lib/vmware/view/bin/vmware-view-crtbora: symbol lookup error: /usr/lib64/libpangomm-1.4.so.1: undefined symbol: _ZN4Glib6ObjectC2EOS0_<br />
</div><br />
Initially most people will try to figure out why there are suddenly missing symbols after upgrading from VMware Horizon 4.8, and end up down a rat hole. The fix is remarkably simple (it worked for 5.1): modify the <code>/usr/bin/vmware-view</code> file as below:<br />
<div class=code>binFile=<br />
if [[ $ROLLBACK_VMWAREVIEW = "" ]] && [ $cpu -eq 0 ]; then<br />
binFile="vmware-view-crtbora"<br />
else<br />
binFile="vmware-view"<br />
fi<br />
# hack!! add this!!!<br />
binFile="vmware-view"<br />
</div><br />
The reason seems to be that the Horizon clients > 4.8 want to start with a seamless desktop (and the <code>vmware-view-crtbora</code> binary), which causes problems - forcing the script to use the (legacy support) <code>vmware-view</code> binary will resolve your >4.8 client startup problems and you can get back to your work.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-51519936044052377592019-03-10T16:02:00.018+00:002023-11-13T13:35:18.735+00:00An easy way to remove duplicate filesAfter years of backing up files from various hard drives and folders onto new hard drives and yet more folders, we inevitably end up with duplicate music, video and photo files - here are 2 tools that can help clean up: <a href=https://github.com/adrianlopezroche/fdupes><code>fdupes</code></a> and <a href=https://exiftool.org/><code>exiftool</code></a><br />
<a name='more'></a><br />
<h3>Installing tools</h3>
<div class=code>
# Fedora
$ dnf -y install fdupes perl-Image-ExifTool
# debian
$ apt install libimage-exiftool-perl fdupes
</div>
<h3>Renaming based on metadata</h3>
The magic here comes from using <code>exiftool</code> to give us some legibility of the files (rename them!) based on their metadata, and then using <code>fdupes</code> to remove the duplicates based on file size and then file hashes. Assuming we have moved all the files into a directory called <code>backup</code>:<br />
<div class=code>
$ exiftool \
-d '%Y-%m-%d_%H%M%S' \
'-filename<${filemodifydate;$_=undef if $self->GetValue("DateTimeOriginal")}-%f.%le' \
-r \
-ext MOV -ext mov \
-ext MP4 -ext mp4 \
-ext JPG -ext jpg \
-ext JPEG -ext jpeg \
-ext HEIC -ext heic \
-ext PNG -ext png \
-ext NEF -ext nef \
-ext DNG -ext dng \
./backup
# use following to create 'year' directory automatically:
# -d '%Y/%Y-%m-%d_%H%M%S'
</div>
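As an aside, the mtime-based naming can be approximated for a single file with plain coreutils - a minimal sketch using a hypothetical <code>IMG_0001.jpg</code> (GNU <code>date -r</code> prints a file's modification time):

```shell
# demo: rename a file to <mtime>-<original name>, mirroring the exiftool pattern
mkdir -p backup
touch -t 201901021314.05 backup/IMG_0001.jpg   # stand-in photo with a known mtime

f=backup/IMG_0001.jpg
ts=$(date -r "$f" +%Y-%m-%d_%H%M%S)            # GNU date: the file's modification time
mv -n "$f" "backup/${ts}-$(basename "$f")"     # -n: never clobber an existing file
ls backup                                      # → 2019-01-02_131405-IMG_0001.jpg
```

Of course <code>exiftool</code> remains the right tool for bulk work since it prefers the real capture date (<code>DateTimeOriginal</code>) over the easily-disturbed filesystem mtime.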
<h4>Additional Control for renaming</h4>
<div class=code>
$ exiftool \
'-filename<CreateDate' \
-d %Y-%m-%d-%H%M%%-c.%%le \
-r -ext MOV -ext mov -ext MP4 -ext mp4 -ext JPG -ext jpg ./backup
# update images to incl date and model
$ exiftool \
-d "%Y-%m-%d %H%M%S" \
'-filename<${datetimeoriginal}-${model}-$(unknown)' \
*.NEF *.DNG
<br />
<br />
# and for the ones with no meta
$ exiftool \
'-filename<filemodifydate' \
-d %Y-%m-%d-%H%M%%-c.%%le \
-r \
-ext MOV -ext mov \
-ext MP4 -ext mp4 \
-ext JPG -ext jpg \
./backup
</div>
Similarly for audio files:
<div class=code>
$ exiftool \
'-Directory<${Artist}/${Album}' \
-r \
-ext MP3 -ext mp3 \
-ext FLAC -ext flac \
-ext M4A -ext m4a \
./backup/
$ exiftool \
'-filename<$Track - $Title.%le' \
-r \
-ext MP3 -ext mp3 \
-ext FLAC -ext flac \
-ext M4A -ext m4a \
./backup/
</div>
<h3>Removing based on checksums</h3>
And finally remove duplicates based on file checksums:<br />
<div class=code>
$ fdupes -rdNsI ./backup/
$ find ./backup -type d -empty -delete
</div>
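For the curious, what <code>fdupes</code> automates can be sketched with coreutils alone - hash every file, then list all but the first per identical checksum. This is only a rough equivalent; <code>fdupes</code> also short-circuits on file size and byte-compares candidates:

```shell
# list duplicates: hash every file, sort so identical hashes are adjacent,
# then print each path whose hash has already been seen once
find ./backup -type f -exec sha256sum {} + |
  sort |
  awk 'seen[$1]++ { print substr($0, index($0, "  ") + 2) }'
```

Piping that list to <code>xargs -d '\n' rm --</code> would mimic <code>fdupes -dN</code>, but letting <code>fdupes</code> do the deletion is safer since it verifies matches byte-by-byte rather than trusting the hash alone.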
Processing audio files can be further enhanced by checksumming only the audio stream - so identical audio with different metadata still matches - using a <a href=https://github.com/whatdoineed2do/fdupes>patched <code>fdupes</code></a>:
<div class=code>
# audio-only checksum: identical for the same audio even when metadata differs
$ ffmpeg -hide_banner -i foo.m4a -c:a copy -bsf:a null -f hash -
# using patched fdupes and '-a' flag
$ fdupes -rdNsIa ./backup
</div>Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-84491667052175175742019-02-06T07:05:00.001+00:002021-01-05T12:07:29.428+00:00Migrating an old disk to larger diskThe need for more storage is never ending, but the process of migrating old disks to new ones has never been easier with the collective experience gathered over the years.<br />
<br />
But there can still be gotchas if we've been repeating our old-to-new migration process for many, many years - one of these is when you are moving a disk whose first partition starts at sector 63 instead of, say, 2048. What do we do then?<br />
<img src=https://live.staticflickr.com/7351/9591620593_50091be07c_b.jpg width=95%>
<a name='more'></a><br />
Having cloned my old disk, containing a single <code>ext4</code> partition, using <a href=https://clonezilla.org/>Clonezilla</a>'s disk-to-disk imaging, we are ready to extend the partition to use the rest of the new space. The process for this is very simple:<br />
<ul><li>note the partition start boundary via <code>fdisk</code></li><li>delete the partition</li><li>recreate the partition using the same start sector, extended to the end of the disk</li><li>run <code>resize2fs</code></li></ul>So let's see what happens.
<div class=code># fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): <b>p</b>
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x3ef45f39
Device Boot Start End Sectors Size Id Type
/dev/sdb1 63 471859263 471859201 225G 83 Linux
Partition 1 does not start on physical sector boundary.
Command (m for help): <b>d</b>
Selected partition 1
Partition 1 has been deleted.
Command (m for help): <b>n</b>
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1953525167, default 2048): 63
Value out of range.
First sector (2048-1953525167, default 2048): ^C
Command (m for help): ^C
Do you really want to quit? <b>y</b>
</div>
Hmm, so this isn't very good. We MUST recreate the partition at the previous starting boundary, but <code>fdisk</code> complains. To understand this we need to remember that in the past (probably up to the early 2000s) partitions would start at LBA address 63 with 512-byte sectors. However, modern hard disks almost always come with 4k physical sectors and 512-byte logical sectors (see the output from above), and it is more efficient for data reads/writes to be aligned with the physical sectors.
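The misalignment is easy to verify with a little shell arithmetic: a partition is 4k-aligned only when its start sector multiplied by the 512-byte logical sector size is a multiple of 4096:

```shell
# byte offset of start sector 63 modulo the 4 KiB physical sector size
echo $((63 * 512 % 4096))     # → 3584, i.e. NOT aligned
# the modern default start sector, by contrast:
echo $((2048 * 512 % 4096))   # → 0, aligned
```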
Nevertheless, we can force <code>fdisk</code> to continue by using its DOS-compatibility mode via the <code>-c=dos</code> flag.
<div class=code># <b>fdisk -c=dos /dev/sdb</b>
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
DOS-compatible mode is deprecated.
The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted.
Command (m for help): p
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Geometry: 225 heads, 37 sectors/track, 121601 cylinders
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x3ef45f39
Device Boot Start End Sectors Size Id Type
/dev/sdb1 63 471859263 471859201 225G 83 Linux
Partition 1 does not start on physical sector boundary.
Command (m for help): <b>d</b>
Selected partition 1
Partition 1 has been deleted.
Command (m for help): <b>n</b>
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (37-1953525167, default 38): <b>63</b>
Last sector, +sectors or +size{K,M,G,T,P} (63-1953525167, default 1953525167):
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Partition #1 contains a ext4 signature.
Do you want to remove the signature? [Y]es/[N]o: <b>n</b>
Command (m for help): <b>p</b>
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Geometry: 225 heads, 37 sectors/track, 121601 cylinders
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x3ef45f39
Device Boot Start End Sectors Size Id Type
/dev/sdb1 63 1953525167 1953525105 931.5G 83 Linux
Partition 1 does not start on physical sector boundary.
Command (m for help): <b>w</b>
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
</div>At this point we have recreated a partition to fill up all usable space - time for a filesystem check before the resize operation.
<div class=code># <b>partprobe /dev/sdb1</b>
# <b>e2fsck -f /dev/sdb1</b>
e2fsck 1.44.2 (14-May-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb1: 39667/14745600 files (1.2% non-contiguous), 39865795/58982400 blocks
# resize2fs /dev/sdb1
resize2fs 1.44.2 (14-May-2018)
Resizing the filesystem on /dev/sdb1 to 244190638 (4k) blocks.
The filesystem on /dev/sdb1 is now 244190638 (4k) blocks long.
</div>Finally the resize is complete and we can check and mount the filesystem to show the new size.
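As a quick sanity check, the block count reported by <code>resize2fs</code> agrees with the disk size - 244190638 blocks of 4096 bytes is just under 931.5 GiB (integer division truncates the .5):

```shell
# filesystem size implied by the resize2fs block count, in whole GiB
echo $((244190638 * 4096 / 1024 / 1024 / 1024))   # → 931
```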
<div class=code># <b>e2fsck -f /dev/sdb1</b>
e2fsck 1.44.2 (14-May-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb1: 39667/61054976 files (1.2% non-contiguous), 42775537/244190638 blocks
# <b>mount /dev/sdb1 1</b>
# <b>df -h 1</b>
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 917G 149G 722G 18% /tmp/b/1
</div>Perfect. This shows that we've completed the migration from our 225GB to 1TB disk.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0tag:blogger.com,1999:blog-7800204991823004827.post-89755685560885424542019-01-26T13:58:00.000+00:002019-01-27T11:48:24.003+00:00The Remastered Aeron - What was once Old is New againThe original <a href=https://www.fastcompany.com/1671789/the-untold-history-of-how-the-aeron-chair-came-to-be>Herman Miller Aeron has been in the wild since 1994</a> and has been widely successful in both corporate and domestic office settings. In 2017, after nearly 23 years, Herman Miller released a revamped update, calling it the Remastered Aeron and ended production of the original Aeron later the same year.<br />
<br />
<img src=https://c1.staticflickr.com/5/4893/46832682612_b2dd50d316_h.jpg width=95%><br />
<i><sup>(c) Herman Miller</sup></i><br />
<br />
But aside from the <a href=http://toolsandtoys.net/reviews/a-review-of-the-remastered-herman-miller-aeron-office-chair/>revamped looks, weight and features</a> how does the Remastered Aeron sit and feel in the real world?<br />
<a name='more'></a><br />
Firstly lets get past the differences between old and new:<br />
<ul><li>lighter frame/base and streamlined tilt mechanism</li>
<li>forward/rear tilt limiter tabs replaced by a Mira style twist knob</li>
<li>revised arm rests offering forward/backward adjustment as well as more pivot locking positions</li>
<li>new PostureFit SL lumbar support option</li>
<li>update to Graphite base colour option being more grey-on-black mesh and frame</li>
</ul>There have been many reviews online since the Remastered Aeron's release, but they tend to talk about the new chair's form and, unfortunately, not its function.<br />
<br />
For over a decade, I've sat in <a href=https://whatdoineed2do.blogspot.com/2016/08/herman-miller-aeron-pitfalls-of-trying.html>Aerons of various descriptions, ages and specifications</a> as my main work chair clocking up my weekly 9-5s - in the last couple of months I've migrated over to the new Remastered Aeron.<br />
<br />
It sits much as you'd expect, perhaps a little more taut in some areas of the seat pan when compared to a late (~2016) classic; the Herman Miller Aeron marketing literature does mention:<blockquote><i> Aeron's 8Z Pellicle elastomeric suspension seat and backrest, eight latitudinal zones of varying tension envelop you as you sit</i></blockquote>The tautness across the seat does not affect comfort, although the mesh has been revised - the weave is closer together and appears more likely to trap dust, even if HM claims it will trap less body heat.<br />
<br />
The new PostureFit SL lumbar support is an iteration on the original PostureFit, but I'm a fan of neither, much preferring the original blocky adjustable lumbar support since it allowed much more controlled placement. Even with the new SL fully engaged I don't really feel supported there whilst pushing myself into the base of the seat pan - perhaps not a problem for some, as I often see office co-workers slumped into their chairs at their desks.<br />
<img src=https://c2.staticflickr.com/8/7910/46832586252_09a9d64bde_b.jpg width=95%><br />
<i><sup>(c) Herman Miller</sup></i><br />
<br />
For the seat positional adjustments, the newer chair has simplified controls although I'm sure it'll take a while for muscle memory to update when reaching for the tilt adjustments which I think are easier/more convenient on the original. However, the tilt adjustment shouldn't be a big problem as most people will set/forget.<br />
<img src=https://c1.staticflickr.com/5/4884/45969855685_16d7611188_h.jpg width=95%><br />
<i><sup>(c) Herman Miller</sup></i><br />
<br />
The extra pivot points on the arm rests are a welcome adjustment - the original pivot positions were out, straight and in, whereas the new model has intermediate positions too. The depth-adjustable sliding arm pads are a bit hit and miss as they can easily be knocked further back than you intend, but again this can be welcome for taller/shorter users.<br />
<br />
<br />
Overall the new chair feels and functions as you'd expect with no controversies if you're already an Aeron user.<br />
<img src=https://c2.staticflickr.com/8/7876/45970020085_ce6a614f59_k.jpg width=95%><br />
<i><sup>(c) Herman Miller</sup></i><br />
<br />
The question will be whether anyone would <i>need</i> to upgrade their existing Aeron. Personally, I would simply answer "no".<br />
<br />
The question is really for people buying their first Aeron or for those whose chair is beyond repair, whether second hand or new. If funding is not a consideration, get the Remastered Aeron for the non-transferable (up to 12 years) warranty, although some Herman Miller dealers may still carry the original Aeron as ex-display stock, which is still a good option (less the 12-year warranty) as these tend to be fully spec'd (front/rear tilt, adjustable pivot arms and PostureFit). Going into the second hand market is more <a href=https://whatdoineed2do.blogspot.com/2016/08/herman-miller-aeron-pitfalls-of-trying.html>hit and miss given the wide range and conditions</a> of chairs that could be available, but still an option as long as you're aware of the potential pitfalls.Rayhttp://www.blogger.com/profile/02383886833424112903noreply@blogger.com0