dipalo.com
blosxom with a touch of python
https://www.dipalo.com/blog/index.atomJames Gemmellhttps://www.dipalo.com/blog/index.atomuser [hyphen] 483006 [at] dipalo [dot] comCopyright (c) 2023, James Gemmell
Pyblosxom https://pyblosxom.github.com/ 1.5.3
2023-05-06T10:47:00ZUsing Docker with RTL-SDR for ADS-Bhttps://www.dipalo.com/blog/2023/05/06/using_docker_with_rtl-sdr_for_ads-b2023-05-06T10:47:00Z2023-05-06T10:47:00ZJames Gemmell<p>I bought my first RTL-SDR receiver a little over a year ago and was
soon up and running with <a href="https://github.com/flightaware/dump1090">FlightAware's dump1090</a>. Shortly after that I
signed up with <a href="https://www.flightradar24.com/share-your-data">Flightradar24</a>, downloaded and installed the x86_64
package and started contributing. <a href="https://adsbexchange.com/how-to-feed/">ADS-B Exchange</a> has a script to
bootstrap Linux installation and that too made it easy to get up and
running. It was only when I created a second ADS-B feeder site at a
new location that I appreciated how onerous maintaining all the
installs and configuration could become.</p>
<p>Somewhat ironically, I could not contribute to FlightAware itself since
<a href="https://flightaware.com/adsb/piaware/build">PiAware</a>, as the name implies, only supports the Raspberry Pi and not
x86_64 Linux. I was similarly disappointed to find out that <a href="https://www.radarbox.com/raspberry-pi/guide">RadarBox</a>
only has Pi and Windows feeders.</p>
<p>While investigating <a href="http://blog.erben.sk/2020/08/14/how-to-run-radarbox24-feeder-on-x86/">various solutions</a> I stumbled across the work of
<a href="https://github.com/mikenye">Mike</a> and his fellow <a href="https://github.com/sdr-enthusiasts">SDR Enthusiasts</a>. They have done the hard yards and
bundled the RadarBox rbfeeder and other clients into docker images
supporting x86_64, arm32 and arm64 architectures - either by compiling
native binaries or through <a href="https://www.qemu.org/">qemu</a> emulation. The docker-compose examples
made it all too easy to replace my manual installs with
<a href="https://github.com/sdr-enthusiasts/docker-flightradar24">docker-flightradar24</a> and <a href="https://github.com/sdr-enthusiasts/docker-adsbexchange">docker-adsbexchange</a> then ditch dump1090 for
<a href="https://github.com/sdr-enthusiasts/docker-readsb-protobuf">docker-readsb</a>.</p>
<p>This was swiftly followed by the addition of <a href="https://github.com/sdr-enthusiasts/docker-radarbox">docker-radarbox</a>,
<a href="https://github.com/sdr-enthusiasts/docker-piaware">docker-piaware</a> and <a href="https://github.com/sdr-enthusiasts/docker-planefinder">docker-planefinder</a> feeders. Maintaining just a
single <a href="https://docs.docker.com/compose/compose-file/02-model/">docker-compose.yaml</a> in each location couldn't be simpler.</p>
<p>Deployments have been quite robust with very little intervention
required. The PiAware client occasionally loses connectivity to
FlightAware and requires a restart. The connection failure is picked
up by the <a href="https://github.com/sdr-enthusiasts/docker-piaware/blob/main/rootfs/scripts/healthcheck.sh">healthcheck.sh</a> script so I've added
<a href="https://github.com/willfarrell/docker-autoheal">Will Farrell's docker-autoheal</a> into the mix to automate the restarts.</p>
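<p>As a sketch of how the pieces fit together (image tags and service names
here are illustrative, not my exact configuration), the autoheal pattern in
docker-compose terms looks something like this:</p>

```yaml
# Illustrative fragment only; image tags and options will differ per site.
services:
  piaware:
    image: ghcr.io/sdr-enthusiasts/docker-piaware:latest
    restart: always
    labels:
      - "autoheal=true"          # opt this container in to autoheal restarts

  autoheal:
    image: willfarrell/autoheal:latest
    restart: always
    environment:
      - AUTOHEAL_CONTAINER_LABEL=autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets autoheal restart peers
```

<p>When piaware's healthcheck.sh reports unhealthy, autoheal spots the state
change via the Docker socket and restarts the container.</p>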
Loggers are not constantshttps://www.dipalo.com/blog/2015/11/24/loggers_are_not_constants2015-11-24T11:37:00Z2015-11-24T11:37:00ZJames Gemmell<p>Loggers are not constants, they are mutable objects.</p>
<p>I am currently working on a legacy Java/Spring codebase littered with
calls to LOG and LOGGER, proudly declared as static final. There
appear to be sound justifications for adopting this convention
(besides avoiding <a href="https://github.com/checkstyle/checkstyle">Checkstyle</a> complaints) but sometimes it is useful to
reflect on how we ended up here.</p>
<p>Loggers have traditionally been declared static as their construction
was considered expensive and therefore justified a singleton
status. Times have changed along with coding conventions.</p>
<p>Most of the applications in this particular codebase are of the
vanilla Spring application variety, namely singleton services
processing value objects on multiple threads.</p>
<p>Loggers used by service singletons are obviously singletons themselves
and clearly don't warrant this premature optimization. Loggers in
value objects shouldn't be there.</p>
<p>A grey area exists for short lived domain objects, say some business
logic encapsulated in a function that would log input arguments and
the result. The solution is to have the object factory inject the
logger at object construction time. Dependency injection ensures that
logging is just another service, singleton or not.</p>
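<p>In Python terms (a hedged sketch of my own; the class and method names are
invented, not from any codebase mentioned here), dependency-injected logging
looks like this:</p>

```python
import logging


class RateService:
    """The logger arrives through the constructor like any other collaborator."""

    def __init__(self, logger=None):
        # Fall back to the module logger, but let callers - and tests -
        # substitute their own.
        self.log = logger or logging.getLogger(__name__)

    def convert(self, amount, rate):
        self.log.info("converting %s at rate %s", amount, rate)
        return amount * rate
```

<p>A unit test can now hand in a stub logger and assert on the calls made,
with no static-mocking framework required.</p>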
<p>A further argument against using static final loggers is the increased
use of semantic logging, which you will want to unit test. If the
object under test requires the static mocking capabilities of
<a href="https://github.com/jayway/powermock">PowerMock</a> before its logging can be tested, then you're doing
dependency injection incorrectly or not at all.</p>
<p>In another example, this practice was followed to the absurd extreme
with SimpleDateFormat. Not only is this not a constant but the methods
are not thread-safe. Declaring it as such merely increases the
likelihood of multiple threads executing the unsafe methods
concurrently, even more so when running in a multi-application
container.</p>
<p>The <a href="https://google.github.io/styleguide/javaguide.html">Google Java Style</a> guide suggests some reasonable conventions for
<a href="https://google.github.io/styleguide/javaguide.html#s5.2.4-constant-names">constant names</a>.</p>
<p>Forgetting why we started doing it in the first place and blindly
following the static final Logger convention is, at best, premature
optimization and, at worst, a shining example of donning the
<a href="http://mikehadlow.blogspot.com.au/2014/03/coconut-headphones-why-agile-has-failed.html">coconut headphones</a>.</p>
A review of Packt Publishing's Spring Datahttps://www.dipalo.com/blog/2013/01/19/spring_data_review2013-01-19T02:46:00Z2013-01-19T02:46:00ZJames Gemmell<p>Packt Publishing's <a href="http://www.packtpub.com/spring-data/book">Spring Data</a> is their latest offering in the now
familiar cookbook format covering the Spring Data support for JPA and
Redis. The Kindle version was well laid out, which can be difficult
to achieve for technical texts.</p>
<p>The cookbook uses a simple domain model for contact data to
demonstrate the creation of a CRUD database and then steps the reader
through creating operations using Native SQL, Java Persistence Query
Language (JPQL) and the Criteria API. What I found most useful was
being able to compare and contrast the pros and cons of each approach.</p>
<p>This section may be less suited to the novice as it assumes a working
knowledge of JPA. The cookbook could not be treated as a JPA tutorial
or reference as other texts have greater coverage using more complex
data models.</p>
<p>The chapters on Redis start with step-by-step instructions on
configuring the connection to a Redis service using the Jedis, JRedis,
RJC and SRP connectors. It then proceeds to implementing a
CRUD application for the contact data and adding publish/subscribe
notifications for updates.</p>
<p>By far the most beneficial thing I found in these chapters was how to
use Spring Data Redis to add transparent caching support to a JPA
repository.</p>
<p>Spring Data can be bought from <a href="http://www.packtpub.com/spring-data/book">Packt Publishing</a> and <a href="http://www.amazon.com/gp/product/B00A232HBK/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&tag=dipalo-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=B00A232HBK">Amazon.com</a>.</p>
A review of the Spring Web Services 2 Cookbookhttps://www.dipalo.com/blog/2012/05/15/a_review_of_spring_web_services_2_cookbook2012-05-15T11:00:00Z2012-05-15T11:00:00ZJames Gemmell<p><a href="http://www.packtpub.com/spring-web-services-cookbook/book">The Cookbook</a> fills a long-standing vacancy on the Spring bookshelf and
plays a useful complementary role to the online
<a href="http://static.springsource.org/spring-ws/site/reference/html/index.html">Spring Web Services reference documentation</a>. Considering the volume
of publications, Spring WS is by far the poorer cousin of the Spring
core, persistence, MVC, batch and integration lineup.</p>
<p>Over the last two years I've used Spring Web Services extensively in
the implementation of a multi-operation web service facade to a
foreign exchange dealing system in a large Australian bank.</p>
<p>I wish I'd had this book when I started the project. The WS examples
from SpringSource are great, but they don't come close to
demonstrating the richness of the platform's capabilities. The
Cookbook's recipes and the accompanying sample projects cover just
about every combination of transport and object-XML mapping that
you're likely to use (SOAP over HTTP and JMS with JAXB2 and XMLBeans)
and some of the more esoteric (SOAP over e-mail and XMPP with JiBX and
MooseXML.)</p>
<p>Both XWSS and WSS4J for security are dealt with comprehensively. I was
pleasantly surprised to find a chapter dealing with JAX-WS and Apache
CXF as well as creating RESTful services, using Spring MVC. Testing
using Spring's MockWebService as well as TCPMon and soapUI also get
their own much needed chapter.</p>
<p>As you would expect of any cookbook, this one doesn't read easily
cover to cover, and the recipes can appear repetitive at
times. In-depth coverage has been sacrificed in favour of
breadth. However, in my opinion, this is the Cookbook's strongest
selling point. The wide range of subject matter allows the reader to
easily explore the featured technologies and make educated evaluations
and comparisons.</p>
<p>I find the Spring Web Services 2 Cookbook a worthwhile addition to my
bookshelf. The book can be bought from <a href="http://www.packtpub.com/spring-web-services-cookbook/book">Packt Publishing</a> and <a href="http://www.amazon.com/gp/product/B007BN37I6/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&tag=dipalo-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=B007BN37I6">Amazon.com</a>.</p>
Using What instead of Why to report errorshttps://www.dipalo.com/blog/2011/06/01/using_what_instead_of_why_to_report_errors2011-06-01T11:14:00Z2011-06-01T11:14:00ZJames Gemmell<p>During peer code reviews I have sometimes observed that there is a
preference for programmers to interpret errors or exceptions as part
of the error handling process. Instead of reporting <strong><em>what</em></strong> caused the
error, an interpretation is applied and <strong><em>why</em></strong> the error occurred is
reported instead.</p>
<p>As an example, an exception such as "<em>connection failed</em>" is reported
as "<em>the server is down</em>". This is, quite clearly, a naive
interpretation. There may be many other reasons as to why the
"<em>connection failed</em>"; the connection credentials may be incorrect,
there may be a network fault, the service application may not be
running or a solar flare may be affecting your Wifi. Under these
circumstances, all you can safely assume about the situation is that,
well, the "<em>connection failed</em>".</p>
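<p>A minimal sketch of the distinction (the function and exception names are
mine, invented for illustration): report what failed verbatim and preserve
the original cause, rather than substituting a guessed interpretation.</p>

```python
class QuoteServiceError(Exception):
    """Raised with a description of what failed, not a guess at why."""


def fetch_quote(connect):
    try:
        return connect()
    except ConnectionError as exc:
        # "connection failed" is a fact; "the server is down" is only one
        # of many possible interpretations, so don't report it.
        raise QuoteServiceError(f"connection failed: {exc}") from exc
```

<p>Chaining with <code>from exc</code> keeps the original evidence attached for whoever
eventually has to work out the why.</p>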
<p>The example above seems obvious but it becomes even easier to make the
mistake when reporting business exceptions from a web service. When
required to handle a particular error, the programmer often has to
rummage through a toolbox of available business exception codes and
apply the one that fits best. More often than not it doesn't.</p>
<p>The importance of getting this right may only become apparent after
that prolonged phone call with the irate user who insists that your
"<em>server is down</em>" when you know perfectly well that it isn't.</p>
Moving a MythTV Master backendhttps://www.dipalo.com/blog/2010/08/29/moving_a_mythtv_master_backend2010-08-29T02:13:00Z2010-08-29T02:13:00ZJames Gemmell<p>This proved to be a whole lot easier than I thought. It required
backing up the <code>mythconverg</code> MySQL database on the old system and
restoring it on the new one.</p>
<pre class="example">
~ /usr/share/mythtv/mythconverg_backup.pl
~ /usr/share/mythtv/mythconverg_restore.pl --filename mythconverg-VERSION-TIMESTAMP.sql.gz
</pre>
<p>A further step was needed to update the hostname of existing
recordings to the new host.</p>
<pre class="example">
mysql> update recorded set hostname='peeves' where hostname<>'peeves';
</pre>
<p>This machine became the frontend at the same time. The i3 GPU support
was included in <a href="http://intellinuxgraphics.org">the xf86-video-intel driver</a> from version 2.10. I ended
up using 2.11 which had just become available and added the following
entry to <em>/etc/portage/package.keywords</em>.</p>
<pre class="example">
=x11-drivers/xf86-video-intel-2.11.0 ~x86
</pre>
Faulty SOAPFaults and Java5https://www.dipalo.com/blog/2010/05/17/faulty_soapfaults_and_java52010-05-17T05:33:00Z2010-05-17T05:33:00ZJames Gemmell<p>There may not be many good reasons for wanting to perform XML schema
validation on a SOAP Fault. I had cause to do so as part of a unit test for
a piece of fault generation code. I used SAAJ to create the fault and
Spring's XMLValidator to validate.</p>
<p>The unit test passed on JDK 1.6 but failed with the exceptions below
when run under JDK 1.5.</p>
<pre class="example">
org.xml.sax.SAXParseException: UndeclaredPrefix: Cannot resolve 'SOAP-ENV:Server' as a QName: the prefix 'SOAP-ENV' is not declared.
org.xml.sax.SAXParseException: cvc-type.3.1.3: The value 'SOAP-ENV:Server' of element 'faultcode' is not valid.
</pre>
<p>The XML in question was the qualified name in the faultcode that had
been created by default.</p>
<pre class="example">
<faultcode>SOAP-ENV:Server</faultcode>
</pre>
<p>The fix was to remove the <code>SOAP-ENV</code> namespace prefix by calling
<code>setFaultCode("Server")</code> on the <code>SOAPFault</code>. The test then passed on both
JDKs.</p>
<p>Here's the reason. Under the hood, XMLValidator uses the Xerces JAXP
validator bundled in the JRE's <code>rt.jar</code>. From 1.5 to 1.6 the validator
implementation was changed from using a SAX parser to DOM. It appears
that the former is unable to resolve the prefix correctly when it
features in the text content.</p>
Building a new Gentoo MythTV backendhttps://www.dipalo.com/blog/2010/04/20/building_a_new_gentoo_mythtv_backend2010-04-19T23:17:00Z2010-04-19T23:17:00ZJames Gemmell<p>After a few years of fairly intensive use I am migrating a MythTV
backend from a rather creaky and increasingly unstable Pentium 4 to a
shiny new Core i3 530 based box. I was quite impressed with
<a href="http://www.phoronix.com/scan.php?page=article&item=intel_corei3_530&num=1">Phoronix's Linux benchmarks of the CPU</a>. <a href="http://www.phoronix.com/scan.php?page=article&item=intel_clarkdale_gpu&num=1">The performance of the integrated GPU</a>
will help too since this box is destined to run an HD frontend at some
point.</p>
<p>It's been a while since I last set up a Gentoo box from scratch and
thought I'd give the <a href="http://www.gentoo.org/doc/en/gentoo-x86-quickinstall.xml">Gentoo Quick Install</a> a go rather than the
LiveCD. The i3's <a href="http://en.wikipedia.org/wiki/Hyper-Threading">Hyper-Threading</a> support meant that the boot was
graced with a 4 penguin salute and I was pleasantly surprised by the
performance.</p>
<p>When partitioning the 1TB drive I settled on the following layout,
setting aside <code>/dev/sda5</code> as a future <code>amd64</code> root partition.</p>
<pre class="example">
/dev/sda1 /boot 256MB ext2
/dev/sda2 swap 2GB swap
/dev/sda3 / 100GB ext3
/dev/sda5 [amd64] 100GB ext3
/dev/sda6 /mnt/mythtv 729GB xfs
</pre>
<p>I diverged from the install guide in a few places. When the
gentoo-sources kernel download threatened to take more than a couple
of hours I performed the mirror-select step early and portage pulled
it from a local mirror. I prefer using <code>genkernel</code> and, setting
<code>MAKEOPTS="-j5"</code>, this and the <code>emerge world</code> steps took next
to no time.</p>
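<p>For reference, the relevant Portage setting (the file was <em>/etc/make.conf</em>
at the time, now <em>/etc/portage/make.conf</em>; -j5 suits the i3's four hardware
threads plus one):</p>

```text
# /etc/make.conf
MAKEOPTS="-j5"
```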
<p>The backend is now up and recording and the next step is to
<a href="blog/moving_a_mythtv_master_backend.html">promote it to master backend status</a> and get the frontend working.</p>
Pretty printing with Emacshttps://www.dipalo.com/blog/2010/03/31/pretty_printing_with_emacs2010-03-30T22:26:00Z2010-03-30T22:26:00ZJames Gemmell<p>I recently completed a proof of technology using <a href="http://static.springsource.org/spring-ws/sites/1.5/">Spring Web Services</a>
to host a SOAP over JMS service. While writing it up for distribution
on a so-called Sharepoint "wiki" I needed to include the
<em>applicationContext.xml</em>. Pretty printing the XML with syntax
highlighting seemed like a good idea.</p>
<p>libxml2's <code>xmllint --format --htmlout</code> was my first attempt but this does a
pretty poor job in that all it does is wrap the formatted output in an
HTML header and footer. I found nothing for Eclipse other than the
<a href="http://www.java2html.de/eclipse.html">Java2Html</a> plugin.</p>
<p>A somewhat foggy recollection of doing a similar thing in the past led
me to reacquaint myself with Emacs <a href="http://www.emacswiki.org/emacs/Htmlize">Htmlize</a> which I hadn't updated
since 1999. Needless to say quite a few new features have been added
in the interim and using an <code>htmlize-output-type</code> of <code>"inline-css"</code>
generated exactly what was needed to paste into the "wiki".</p>
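<p>The setting in question is a one-liner in the init file (the variable name
is from the Htmlize documentation; with <code>inline-css</code> every styled span
carries its own style attribute, so the markup survives a paste):</p>

```text
;; ~/.emacs - emit styling as inline CSS rather than a separate stylesheet
(setq htmlize-output-type 'inline-css)
```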
NTFS recovery with SpinRitehttps://www.dipalo.com/blog/2010/02/07/ntfs_recovery_with_spinrite2010-02-07T06:21:00Z2010-02-07T06:21:00ZJames Gemmell<p>Some more corrupted sectors appeared on the same disk mentioned in
this <a href="blog/recovery_from_windows_xp_chkdsk_failure.html">earlier post</a>. Once again, the Windows recovery console was no
help as it would just hang but I could access the disk after booting
Linux and found the corrupted sectors using <code>badblocks</code>. I pulled the
sectors off, as I'd done before, using the <code>ddrescue</code> utility. Since the
last failure was less than 6 months ago and there may be more failures
to come I decided that paying $89 for <a href="http://www.grc.com">GRC's SpinRite</a> was more than
justified.</p>
<p>What a fabulous utility! It was pretty easy to get up and running
after downloading and burning a boot disk. I initially had a problem
with the laptop CPU overheating and shutting down. This happened when
SpinRite was trying to recover some lost sectors and vexing the
CPU. Moving the laptop to a cooler location on top of a fridge did
the trick.</p>
<p>It took a few hours to run through the disk and recover the sectors,
where possible. I then rebooted in the Windows recovery console and
ran <code>chkdsk</code>. Job done!</p>
<p>At the same time I ran SpinRite over my aging Dell C400's 80GB
drive. This laptop is now used as a Linux MythTV frontend and has been
making a few noises sounding a little like seek errors in the making.
The hard disk was running a little hot during the scan which resulted
in some temperature warnings from SpinRite. Moving the laptop <em>into</em> the
fridge solved the cooling problem and the scan continued
uninterrupted. SpinRite found and recovered a few bad sectors on the
<code>ext3</code> partition and I have yet to hear the noise again.</p>
Intel XVideo problems following Mythbuntu 9.10 upgradehttps://www.dipalo.com/blog/2010/01/02/intel_xvideo_problems_following_mythbuntu_9.10_upgrade2010-01-02T02:54:00Z2010-01-02T02:54:00ZJames Gemmell<p>Following an upgrade to Mythbuntu 9.10, one of my MythTV frontends
failed to play back video smoothly and without stuttering. The frontend
is a rather old Pentium IIIM/i830M based Asus laptop but it played SD
resolution video quite acceptably before the upgrade.</p>
<p>The <code>mythfrontend.log</code> revealed that the problem was that the driver no
longer possessed the XVideo extension capability.</p>
<pre class="example">
VideoOutputXv Error: Could not find suitable XVideo surface.
VideoOutputXv: Falling back to X11 video output over a network socket.
*** May be very slow ***
</pre>
<p>No kidding. After much Googling of the Ubuntu forums I found a link to
the solution in the <a href="http://www.ubuntu.com/getubuntu/releasenotes/910#No%20Xv%20support%20for%20Intel%2082852/855GM%20video%20chips%20with%20KMS">Ubuntu 9.10 release notes</a>. The trick is to disable
kernel-mode-setting (KMS) using the nomodeset kernel boot option.</p>
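<p>On a GRUB-legacy install of that vintage this means appending the option to
the kernel line (the kernel version and root device below are illustrative;
your menu.lst will differ):</p>

```text
# /boot/grub/menu.lst - append nomodeset to the active kernel line
kernel /boot/vmlinuz-2.6.31-14-generic root=/dev/sda1 ro quiet splash nomodeset
```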
Upgrading Gentoo MythTV to 0.22https://www.dipalo.com/blog/2009/12/28/upgrading_gentoo_mythtv_to_0.222009-12-27T23:36:00Z2009-12-27T23:36:00ZJames Gemmell<p>I recently <a href="blog/intel_xvideo_problems_following_mythbuntu_9.10_upgrade.html">upgraded a frontend to Mythbuntu 9.10</a> and got MythTV 0.22
as part of the deal. Rather than leap through the fiery hoops
required to revert it back to 0.21 I decided to take the plunge and
upgrade my other Gentoo-based MythTV backend and frontends to 0.22.</p>
<p>This was trouble free as upgrades go but I did encounter the
<a href="http://wiki.mythtv.org/wiki/Fixing_Corrupt_Database_Encoding">UTF8/latin1 database encoding problem</a> which requires a backup and
restore of the MythTV database after changing the default encoding to
<em>latin1</em>. <a href="http://wiki.mythtv.org/wiki/Fixing_Corrupt_Database_Encoding#Changing_the_MySQL_server_configuration">Changing the MySQL server configuration</a> is easy to do on
Gentoo as all that is required is to rebuild MySQL with the <em>latin1</em> USE
flag.</p>
<p>My <em>package.keywords</em> now looks as follows;</p>
<pre class="example">
>=media-tv/mythtv-0.22 ~x86
>=media-plugins/mythcontrols-0.22 ~x86
>=media-plugins/mythgallery-0.22 ~x86
>=media-plugins/mythmusic-0.22 ~x86
>=media-plugins/mythvideo-0.22 ~x86
>=www-apps/mythweb-0.22 ~x86
>=dev-python/imdbpy-3.8 ~x86
>=x11-themes/mythtv-themes-0.22 ~x86
>=x11-themes/mythtv-themes-extra-0.22 ~x86
</pre>
<p>and my <em>package.use</em> has;</p>
<pre class="example">
dev-db/mysql latin1
</pre>
DTV1000S Linux driver now workinghttps://www.dipalo.com/blog/2009/11/29/dtv1000s_linux_driver_now_working2009-11-28T23:17:00Z2009-11-28T23:17:00ZJames Gemmell<p>Top of my todo list for some time now has been to get my
<a href="http://linuxtv.org/wiki/index.php/Leadtek_WinFast_DTV1000_S">Leadtek WinFast DTV1000S</a>
DVB-T capture card to pay its way on a Gentoo MythTV backend instead
of gathering dust on the shelf.</p>
<p>Video4Linux (V4L) drivers exist for the individual DTV1000S components
listed below as they have also been used in other DVB cards.</p>
<ul>
<li>TDA18271 - terrestrial / cable silicon tuner</li>
<li><a href="http://linuxtv.org/wiki/index.php/NXP_TDA1004x#TDA10048">TDA10048</a> - channel decoder/demodulator</li>
<li><a href="http://en.gentoo-wiki.com/wiki/Saa7134">SAA7130</a> - PCI video broadcast decoder</li>
</ul>
<p>What was lacking was support for the card itself. I made an unsuccessful
attempt at putting it together at the beginning of the year. Now
Michael Krufky has done all the heavy lifting and
<a href="http://kernellabs.com/hg/~mkrufky/dtv1000s">committed his changes</a>.</p>
<p>The easiest way to incorporate these into the 2.6.30-r8 kernel was to
follow the <a href="http://linuxtv.org/repo/#mercurial">V4L build instructions</a>. Revision 13263 has all the
necessary changes.</p>
<pre class="example">
hg clone http://linuxtv.org/hg/v4l-dvb
cd v4l-dvb
make
sudo make install
</pre>
<p>I've not tested the IR capabilities of the DTV1000S as I'm using an
<a href="http://linuxtv.org/wiki/index.php/AVerMedia_AVerTV_DVB-T_777_%28A16AR%29">AverTV DVB-T 777</a> for that purpose.</p>
<p>I used <a href="http://lists-archives.org/video4linux/22347-hvr1200-hvr1700-tda10048-support.html">Steven Toth's instructions</a> to get the TDA10048 firmware drivers
from <a href="http://steventoth.net/linux/hvr1700/">http://steventoth.net/linux/hvr1700/</a> and followed the
readme.txt. Hat tip to <a href="http://tw1965.myweb.hinet.net/">Terry</a> for his Leadtek product page.</p>
Purging MySQL binary logshttps://www.dipalo.com/blog/2009/11/04/purging_mysql_binary_logs2009-11-04T10:33:00Z2009-11-04T10:33:00ZJames Gemmell<p>The 60GB root partition of my MythTV server has been gradually filling
up and finally reached capacity over the weekend. The culprit turned
out to be the MySQL binary logs which have never been purged since I
set the server up in early 2007. Weighing in at 26GB it was time for them
to go.</p>
<p>All that was needed was a <a href="http://legroom.net/2008/06/29/flush-and-reset-mysql-binary-logs">flush and reset</a> to purge the logs.</p>
<pre class="example">
mysql> flush logs;
mysql> reset master;
</pre>
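<p>To stop the logs piling up for another couple of years, MySQL of that era
can also expire them automatically (my.cnf fragment; the 10-day window is an
arbitrary choice of mine):</p>

```text
# /etc/mysql/my.cnf
[mysqld]
expire_logs_days = 10
```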
Recovery from Windows XP chkdsk failurehttps://www.dipalo.com/blog/2009/08/30/recovery_from_windows_xp_chkdsk_failure2009-08-29T22:19:00Z2009-08-29T22:19:00ZJames Gemmell<p>My laptop started hanging at the Windows logo stage and successive
reboots wouldn't fix the problem. Booting into the Windows recovery
console and running chkdsk manually on the C: drive would hang at the
22% mark. Next I booted <a href="http://trinityhome.org/Home/index.php?wpid=1&front_id=12">Trinity Rescue Kit</a> 3.3. This Linux-based
<a href="http://en.wikipedia.org/wiki/Live_cd">Live CD</a> is essential to any system recovery arsenal. It can be booted
from a USB stick and even a PXE network boot, if necessary. Bruce
Allen's <a href="http://smartmontools.sourceforge.net/badblockhowto.html">Bad block HOWTO for smartmontools</a> is a great source of help.
Its focus is Linux file system recovery but the techniques are readily
adaptable to NTFS.</p>
<p>No obvious indications of errors were turned up using <code>smartctl</code>. The
next thing I did was use <code>badblocks</code> to run a low level read-only test;</p>
<pre class="example">
# badblocks -v -o bad.txt /dev/sda1
</pre>
<p>This came up with a list of 113 corrupted blocks starting at 19519872,
which is roughly 22% of the way into the NTFS partition. There was a
single block at this location and the remaining 112 in a contiguous
segment starting at 19519892. I used <code>dd</code> and <a href="http://www.gnu.org/software/ddrescue/ddrescue.html">ddrescue</a> to make a copy
of the blocks and <code>hexdump</code> to have a look at the contents. <code>ddrescue</code> is
one of the <a href="http://www.gnu.org">GNU</a> utilities I've fortunately never had to use before. It
is suited to recovering entire disk images and has some smarts built in
to recover problem areas.</p>
<pre class="example">
# ddrescue -b1024 -i19519872b -o0 -s1b -t /dev/sda1 /tmp/bad2.img
# ddrescue -b1024 -i19519892b -o0 -s112b -t /dev/sda1 bad3.img
# hexdump -C bad3.img | less
</pre>
<p class="readmore"><a href="https://www.dipalo.com/blog/recovery_from_windows_xp_chkdsk_failure.html">Read more...</a></p>Corrupted USB flash key recoveryhttps://www.dipalo.com/blog/2007/03/15/corrupted_usb_flash_key_recovery2007-03-14T23:31:00Z2007-03-14T23:31:00ZJames Gemmell<blockquote>
<p class="quoted">There are only two kinds of people in the world, those who have
lost data and those who are about to. — Anon</p>
</blockquote>
<p>The 128Mb Swisskey belongs to a friend and contained the only edited
copy of a manuscript she has been working on. She had forgotten to
"trash can" or eject it before removing it from her Mac and could no
longer read anything from the key. It's doubtful whether the act of
removing the key caused the corruption but there does seem to be a
link.</p>
<p>The first thing I tried was reading the raw key image.</p>
<pre class="example">
# dd if=/dev/sda of=key.img
500+0 records in
500+0 records out
131072000 bytes (131 MB) copied, 132.2937 s, 1.0 MB/s
#
</pre>
<p>I repeated this step to create another image file and then compared
their md5 signatures to make sure the key wasn't corrupting the data
itself. The next thing I did was try to mount the image.</p>
<pre class="example">
# mount -o loop -t vfat key.img /mnt/usb
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
#
</pre>
<p class="readmore"><a href="https://www.dipalo.com/blog/corrupted_usb_flash_key_recovery.html">Read more...</a></p>Gentoo Linux on the Dell Latitude C400https://www.dipalo.com/blog/2006/05/24/gentoo-c4002006-05-24T07:09:00Z2006-05-24T07:09:00ZJames Gemmell<p>This guide started off some time after I upgraded the 10Gb drive on
the C400 to an 80Gb Hitachi and replaced the <a href="c400.html">Debian installation</a>.
<a href="http://www.gentoo.org">Gentoo</a> has a wealth of <a href="http://www.gentoo.org/doc/en/index.xml">documentation</a> so this is intended as an
installation specific supplement.</p>
<blockquote>
<p class="quoted"><strong>Disclaimer:</strong> This document comes with no guarantees. The steps I
followed worked for me but may not necessarily work for you or your
hardware.</p>
</blockquote>
<h3>Configuration</h3>
<dl>
<dt><strong>Gentoo</strong></dt><dd>
linux-2.6.18-suspend2-r1 kernel <br>
no Windows installation</dd>
<dt><strong>PIII-M 866MHz CPU</strong></dt><dd>
768 Mb RAM (256Mb + 512Mb) <br>
A12 BIOS <br>
Crystal 4205 audio <br>
3c905C-TX FastEthernet adapter (built-in)</dd>
<dt><strong>80Gb 5400rpm Hitachi 5K80</strong></dt><dd></dd>
<dt><strong>TrueMobile 1150 wireless (<a href="#wireless">disabled</a>)</strong></dt><dd>
Netgear MA401 PCMCIA adapter <br>
Netgear WAG511 PCMCIA adapter</dd>
</dl>
<h3>Post-install</h3>
<h4>genkernel</h4>
<p class="first">Thinking genkernel-built kernels a little bloated, I resorted
to using the more traditional <code>make menuconfig</code> and <code>make bzlilo</code>. After
much fiddling, recompiling & rebooting every time I needed another
driver it was time to give genkernel another shot. I was pleasantly
surprised - it actually built most of what I needed! I'm now a
genkernel convert.</p>
<p class="readmore"><a href="https://www.dipalo.com/blog/gentoo-c400.html">Read more...</a></p>Scriptshttps://www.dipalo.com/blog/2006/03/14/scripts2006-03-14T07:15:00Z2006-03-14T07:15:00ZJames Gemmell<dl>
<dt><strong><a href="/scripts/news.py">news.py</a> and <a href="/scripts/urlCache.py">urlCache.py</a></strong></dt>
<dd>Used to display and cache RSS newsfeeds
using the <a href="http://feedparser.org">Universal Feed Parser</a>.</dd>
<dt><strong><a href="/scripts/RSS_Generic.pm">RSS_Generic.pm</a></strong></dt>
<dd>An RSS/RDF grabber used in the past for the
<a href="/news.cgi">news page</a> in conjunction with <a href="http://backendnews.sourceforge.net">NewsGrabber.pm</a>.</dd>
<dt><strong><a href="/scripts/delay">delay</a></strong></dt>
<dd>A perl script that parses the Received: fields in a mail
header and then calculates the differences between their
timestamps. It's been very useful in the past for quickly
highlighting the source of mail delays - usually a corporate mail
gateway or spam filter.</dd>
</dl>
Linux DVB resourceshttps://www.dipalo.com/blog/2003/05/02/linux_dvb_resources2003-05-02T07:29:00Z2003-05-02T07:29:00ZJames Gemmell<p>This is a list of sites I found useful while setting up a Debian Linux
system for use as a <a href="http://www.cadsoft.de/vdr/">Video Disk Recorder</a> and router for satellite
broadband.</p>
<ul>
<li>The <a href="http://www.tldp.org/HOWTO/Sat-HOWTO.html">Satellite HOWTO</a> is a little dated but a good starting point.</li>
<li>The <a href="http://www.linuxtv.org/lists.php">linux-dvb mailing list</a> is both for developers and users alike
so can be a bit technical. Be sure to search the archive before
diving in here as your question may have already been answered.</li>
<li>The latest drivers can be obtained via CVS but are also available
as <a href="http://www.linuxdvb.tv/download.html">daily tar bundles</a> from <a href="http://www.linuxdvb.tv">www.linuxdvb.tv</a></li>
<li>For IP over DVB I had the greatest success with the older driver
referred to in <a href="http://www.linuxdvb.tv/driver/index.html">these instructions</a>. The DVB data <a href="http://www.hauppauge.de/files/boot24.zip">firmware update</a>
from Hauppauge should be used to replace the <code>driver/Dpram</code> and
<code>driver/Root</code> files in the distribution.</li>
<li><a href="http://sourceforge.net/projects/dvbtools/">dvbtune</a> is used to tune the DVB card to the desired frequency and
bring up the network interface at the same time.</li>
<li>The <a href="http://pptpclient.sourceforge.net/">pptp client</a> is usually required to connect to the satellite
ISP. This allows the routing of outbound traffic to the net via a
VPN tunnel to the satellite ISP.</li>
</ul>
Debian GNU/Linux on the Dell Latitude C400https://www.dipalo.com/blog/2003/04/30/c4002003-04-30T07:26:00Z2003-04-30T07:26:00ZJames Gemmell<p>This document started off early in 2002 when I installed Redhat 7.2 on
the C400. I've been through several Redhat versions since 3.0.3 but,
after enduring a hard disk crash, I decided for a number of reasons
that Debian was the way to go.</p>
<blockquote>
<p class="quoted"><strong>Disclaimer:</strong> This document comes with no guarantees. The steps I
followed worked for me but may not necessarily work for you or your
hardware.</p>
</blockquote>
<h3>Configuration</h3>
<dl>
<dt><strong>Debian 3.0r1 Woody</strong></dt><dd>
recompiled 2.4.18 kernel <br>
no Windows installation (see later)</dd>
<dt><strong>PIII-M 866MHz CPU</strong></dt><dd>
1x 256Mb RAM<br>A09 BIOS<br>
Crystal 4205 audio <br>
3c905C-TX FastEthernet adapter (built-in)</dd>
<dt><strong>10Gb Toshiba MK1517GAP</strong></dt><dd>
TrueMobile 1150 wireless (optional)</dd>
</dl>
<h3>Installation</h3>
<h4>Partitioning the disk</h4>
<p class="first">The replacement drive was empty. The first step was to create a 768Mb
save-to-disk (s2d) partition using mks2d.exe on the disk from
Dell. The reason for choosing this size is that I intend adding
another 512Mb RAM at some point in the future. Dell recommends that
you set it up as 768Mb * 1.01 + 4Mb on the first partition so it's at
a little over 800Mb now.</p>
<p>/dev/hda2 is set up as a 50Mb boot partition (probably overkill here)
and the remainder becomes the root partition. No swap partition is required at
this stage as I've found swapfiles to be quite adequate in the past.</p>
<p class="readmore"><a href="https://www.dipalo.com/blog/c400.html">Read more...</a></p>