
Tech Stuff

2015.12.07 Envious of the HP Envy 5642 Printer

posted Dec 7, 2015, 4:06 PM by Troy Cheek   [ updated Dec 7, 2015, 4:06 PM ]

A couple of months back, I needed to do some scanning and printing for a nephew.  No problem, as I had my trusty Kodak All-In-One Printer.  Except that in the month or two since the last time I'd printed anything, the Kodak had decided it no longer knew how to print yellow.  What had probably happened was that some ink had dried in the print head.  While the old HP printers I'd been buying for years had print heads built into the ink cartridges, the Kodak had permanent print heads.  This made the ink cartridges cheaper, and the whole point of the Kodak printer was that it was going to save me money on ink.  Of course, I always knew there was the chance that the print heads would give out on me, which is why I ordered a replacement off Amazon a long time ago.  I just checked and the order is still "pending."  I hope they never try to fulfill it, as I'm pretty sure that credit card hasn't been valid for years.

Not printing yellow was kind of a big deal, so I searched quickly for a replacement printer.  I figured one of the same model would be cheap and common since it was several years old by then.  Instead, I found them to be quite expensive and rare.  A little more searching revealed why:  Kodak had stopped making printers.  The few which were still on the market were something of a collector's item.  Oh, well.

A little more research and a quick trip into town netted us the HP Envy 5642 All-In-One Print, Copy, Scan, Photo, Web, Make Your Coffee In The Morning Printer.  It was quick and easy to set up, did the job we needed it to do, and didn't even cost that much.  It's much faster and cheaper than the old Kodak (which has now been donated to a family friend who needed to scan but didn't need to print color).  Don't go looking for it, though.  The 5640 series has already been replaced by the 5660 series, apparently.

Is all fine and well with the HP Envy 5642 All-In-One Printer?  Well, it was until just a couple of days ago.  I bought a USB 3.0 hub, which plugs into one of my two (2) available USB 3.0 ports in the back of the computer and gives me four (4) ports up front where I can actually use them.  However, the hub came with no drivers, as it wasn't supposed to need any.  What that actually meant was that it was supposed to work with the drivers available from Microsoft.  And I had a moment of joy as Windows 7 did indeed find and install the drivers for the new USB 3.0 device.  This feeling of joy came crashing to a screeching halt as Windows declared it could not find drivers for the new USB 2.0 device of the same name.  I rebooted a few times, unplugged and replugged the cable, but the result was either that Windows could not find drivers and failed to install the USB 2.0 device, or Windows declared that the Unknown Device needed no drivers and had been successfully installed.  In either case, the end result was a USB 3.0 hub which was not backwards compatible with USB 2.0 devices.  That wasn't too bad, since I had numerous 2.0 ports available all over this and other computers.  It was just that the thought of a USB 3.0 port that didn't support 2.0 like it was supposed to bothered me.

I searched the web, downloading various drivers for the USB ports on my motherboard and for the chipset in the USB hub.  In my haste, I forgot to set a System Restore Point before installing the drivers.  I guess I'm lucky any of my USB ports still work.  But aside from one reboot where my keyboard and mouse stopped working, things went along surprisingly smoothly.  Then I made the mistake of downloading some kind of "driver manager" which was supposed to scan my system and download the drivers I needed automatically.  As soon as I started installing it, I knew I'd made a mistake.  It kept asking me if I wanted to install other helper programs, search bars, browser extensions, etc.  I declined, declined, declined, and eventually used the Task Manager to just kill the installer program.

It was too late.  Several "Potentially Unwanted Programs" had installed themselves, which goes to show that no matter what anti-malware software you have running, it won't protect you from what you install yourself.  Armed with my Anti-Everything USB drive, I went to work exterminating the programs that installed themselves every time I rebooted or left the computer alone for a few minutes.  Eventually, I got a handle on it.  The last problem was something that somehow tells Firefox to load a file instead of a webpage every time it's launched, but only when launched from a pinned taskbar icon.  I still haven't found that, but I did figure out how to pin without it.

Fixing all that took a couple of days.  Somewhere along the line, I needed to print something.  I couldn't.  No error message, mind you.  The print job queued up normally, then just disappeared instead of printing.  I figured between all the drivers and uninstalling programs and clearing of registry keys I'd messed up the printer.  No problem.  I still had the original installation files for the printer driver.  A quick re-install and... Still nothing.  An updated version is available on the website.  Download.  Nothing.

Long story short, it turned out that I was not the only one with this problem, which quite possibly had nothing to do with the USB hub, drivers, or malware.  I was unique in that my printer worked at all in the first place then suddenly stopped working.  I tried all the suggested fixes and finally found one that worked.  While Windows 7 had no trouble locating, downloading, and installing the drivers for the HP Envy 5642, it turned out the solution was to install the drivers for the HP Deskjet 6980 series.  The procedure is as follows:

How To Add An Alternate Driver:

  1. Click on your Start menu
  2. Select Devices and Printers
  3. In the Devices and Printers folder right click on your HP ENVY 5640 and left click on Printer Properties
  4. Left click on the Advanced tab
  5. Left click on New Driver
  6. When the New Driver window opens just hit 'next' until you see a list of Manufacturers on the left and a list of Printers on the right.
  7. Select HP as the Manufacturer on the left
  8. Select Deskjet 9800 as the printer on the right. If Deskjet 9800 doesn't appear then select 'Windows Update' on the bottom left and once the update completes you will be able to select Deskjet 9800.
  9. After selecting Deskjet 9800 hit next to complete the New Driver Wizard
  10. Under the Printer Properties window select 'Apply' but don't hit OK
  11. Select the General tab
  12. Rename your printer back to HP ENVY 5640
  13. Hit OK
  14. Lastly, right click on your HP ENVY 5640 one more time and left click on Printing Preferences
  15. Left click on the Paper/Quality tab
  16. Left click on the 'Normal' dropdown for Print Quality on the bottom right and change this to Fast Normal
  17. Hit Apply and OK
Once that works, you go back again and select the HP Deskjet 6980 series instead because it allows automatic two-sided printing.  You might be able to select that driver to start with, but I'm not taking any chances now that I have my printer working again.  Why am I listing this here?  Why not just bookmark the solution?  Well, I found this solution on exactly one (1) webpage in the entire world wide web, and I just know if I have this problem again in the future, after I've forgotten how to get this printer working, that page will have changed or become unavailable or something.

Incidentally, this hack is only supposed to fix cases where the printer can print most things but not from Microsoft Office products, implying a problem with the Microsoft products.  In my case, nothing could print: not Firefox, not Notepad++, not Microsoft Office products, not even Windows 7's own "Print a Test Page."

Now that I have the printer working again, I forgot what it was that I wanted to print.

2015.11.26 Comcast Doesn't Know My Name

posted Nov 26, 2015, 4:32 PM by Troy Cheek   [ updated Nov 26, 2015, 4:32 PM ]

Now, I'm not implying that Comcast (or, as I like to call them, Cardassian Cable), the company that supplies me with cable TV and internet service (and is continually trying to convince me to let them supply my phone service), is run by idiots, but knowing the names of your customers when you email them might be a good thing.  I've only been a customer for about 15 years and I took over the account from my father who had it for about 10 years before that, so we're talking a quarter century of service.  I recently discovered that my official Comcast email account was automagically forwarding messages to a personal email account I no longer use.  That would be my fault.  I set that up years ago myself and forgot to change it when I closed that account.  What isn't my fault is the first email I got when I redirected the forwarding to my gmail account.
For those who don't know me, my name is Troy Howard Cheek.  My Comcast email is set up for Troy.  In some places, like on the bill and online, Comcast knows me as Howard, either because I accidentally set it up that way a long time ago or because they're confusing me with my father who used to get service at this address.  At no point at any time did I ever tell them my name was SAMUEL.  But that's not the funny part.
I thought that maybe it was just a glitch that my email mentioned SAMUEL.  I clicked the link to see my actual video bill.  I didn't see my actual video bill.  I saw one for someone named Daniel.  Again, Comcast might have legitimate confusion about whether I prefer to go by Troy or Howard, but at no time did anyone ever tell them my name was Daniel.  To make matters worse, I now know exactly which services Daniel gets from Comcast and exactly how much he's paying for them.

As I mentioned before, Comcast is trying to convince me to drop my current phone service, which I've only had for about half a century, and use their Xfinity phone.  My response is always that based on their TV and Internet services, I can't use their phone because I need a phone that works all the time.  Xfinity TV goes out on whatever channel I'm watching about once a week and goes out completely about once a month, sometimes for just a few minutes, sometimes for a few hours, sometimes for a few days.  Xfinity Internet varies.  Their response to that is to try to convince me that the phone service is completely independent of the other services and just because they stop working all the time doesn't mean that the phone will stop working all the time.  This may be true, but my understanding is that Xfinity phone service uses VOIP or Voice Over Internet Protocol, so if my cable modem can't get an Internet connection, the phone service by definition can't work.  How often does my Internet connection go up and down?  There are yoyos who don't go up and down as much as my Xfinity Internet.
I wrote a little program that pings Google every 10 minutes and prints how long it takes to get a return, or "Internet down!" if it gets an error message.  You should see how often it gets an error message.  Technically, just because my computer can't ping Google doesn't necessarily mean that the Internet is down.  But if you can't contact Google, what use is your Internet anyway?
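A monitor along those lines can be sketched in Python (the 10-minute interval and the "Internet down!" message are from the description above; the host name and everything else are illustrative, and the original was not written in Python):

```python
import subprocess
import sys
import time
from datetime import datetime

HOST = "google.com"  # illustrative target; the post only says "ping"
INTERVAL = 600       # every 10 minutes, as described

def ping_once(host):
    """Return the round-trip time in milliseconds, or None on failure.
    Shells out to the system ping command (-n on Windows, -c elsewhere)."""
    count_flag = "-n" if sys.platform.startswith("win") else "-c"
    start = time.monotonic()
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    if result.returncode != 0:
        return None
    return (time.monotonic() - start) * 1000.0

def format_result(latency_ms):
    """Produce the log line body: a round-trip time or the error message."""
    if latency_ms is None:
        return "Internet down!"
    return "%.0f ms" % latency_ms

def monitor():
    """Loop forever, logging one line every INTERVAL seconds."""
    while True:
        print(datetime.now().isoformat(), format_result(ping_once(HOST)))
        time.sleep(INTERVAL)
```

Calling `monitor()` then produces one timestamped line per interval, which is enough to eyeball how often the connection drops.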

2015.11.15 Native IDE vs AHCI (SSD Upgrade Gone Wild)

posted Nov 15, 2015, 4:40 PM by Troy Cheek   [ updated Nov 17, 2015, 1:55 PM ]

Back when I put together my latest computer just a few short years ago, Solid State Drives (SSD) were expensive.  They were more than $1 a GB, meaning that a 1 TB drive would set you back more than $1000 (if you could find one that big) whereas a mechanical Hard Disk Drive (HDD) that size was much less expensive.  A more reasonable 250 or 500 GB SSD would still cost you $250 or $500.  I paid well over $100 for a 120 GB SSD and well over $100 for a 2 TB HDD.  I installed my OS and frequently used programs/games on the SSD and put my music, videos, and less frequently used stuff on the HDD.  It seemed like a good trade off between speed and capacity at the time.  As the years went by, I found more and more stuff I wanted to install, leading to my frequent complaint that I was spending more time moving data around to make room for the latest games than I did playing the latest games.

But recently some family members were complaining how slow their laptop computers seemed, taking forever just to boot up in the first place, then taking forever to load programs, forever to shut down, etc.  I hadn't experienced that in quite a while.  I did some checking around and found that the average SSD was much cheaper than it used to be, was about the same size as a standard laptop HDD, and had the same connectors.  It looked like a simple replacement job.  So, I ordered a couple of new SanDisk SSDs (along with a generic USB-to-SSD adapter cable), and set out to migrate a couple of laptops.  I used Macrium Reflect to clone the old HDDs onto the SSDs.  A few minutes with a screwdriver and the drives were swapped.  A couple of reboots later (Windows needed to pick up some new drivers for the new drives) and we were in business.  Boot times went from 40 seconds to 20 seconds, shut down times weren't worth measuring, and overall the laptops just seemed "peppier."  You don't realize it, but pretty much every time you open a window, click on a button, change a setting, etc Windows accesses the disk to read or write some file or another.  Reading those files from flash memory chips instead of spinning metal platters made a noticeable difference.

Somewhere along the line, I realized that I was upgrading loved ones' computers with faster drives with more capacity while I was still limping along with the tiny SSD I'd first bought with my computer.  I ordered an extra drive and got ready to trade my 100 GB SSD for a 500 GB SSD.  (Technically, 120 GB to 480 GB, but whatever).  I'd already been imaging and swapping from HDD to SSD, a whole different technology.  Surely, swapping from an SSD to a larger capacity SSD would be simple.

Sometimes, the universe just waits for me to get cocky.

Cloning the drive was simple enough and took only an hour.  (Cloning the first laptop took 12:09:48.  We don't know if that was a slow USB port or a software glitch or what).  Swapping out the drives physically took about that long.  Whereas the laptops had one little panel with a couple of screws, to get at the SSD on the main tower computer I had to remove both side panels, reroute some wires, and do the standard contortion gag of twisting my body so I could see the screw I was supposed to be turning or actually being able to reach the screw I was supposed to be turning, but of course never both at the same time.  I think the guys who assembled the computer for me put the drives in before they mounted the graphics card, because there was no way that thing was going to slide out of there.  Luckily, the 2.5" SSD was mounted in a 3.5" bay with a couple of brackets, so I was able to unscrew the drive from the brackets and get it out of there on its side.  It wasn't easy, but it was doable.  I left one side panel off because completely sealing up a computer is a guarantee that you will have left a wire unhooked somewhere.  I powered up and waited to see how fast Windows would load.

A disk read error has occurred.
Press ctrl-alt-del to restart.

Surprisingly enough, this did not bother me.  From Day One I've had this problem pop up from time to time.  Literally the first time I booted the computer when I got it home, this happened.  Pressing the three keys always made the computer boot fine the next time.  I don't know why it happened, why it only happened sometimes, or why restarting (or power cycling) always fixed it.  I pressed the three keys in question and waited for Windows to load.

A disk read error has occurred.
Press ctrl-alt-del to restart.

After several attempts with the same results, this did bother me.  I checked all the wiring, double-checked my BIOS settings (swapping the hard drive had changed the boot order, but I'd already fixed that), and tried again.  Same error.  I plugged in the old drive.  It booted right up.  New drive.  Read error.  I re-cloned the drive two more times, using different settings.  Read error.  Thinking maybe it was a SATA problem, I booted from both drives using the USB-to-SSD cable.  The old drive loaded fine.  The new drive, read error.  This was ridiculous.  I'd already performed this procedure twice on ancient laptops.  Why was I having difficulty with a relatively new tower system?  To the Internet!

According to the various geniuses whose wisdom you find when you search for that error, the problem is that the SATA ports are trying to use the old IDE protocol when they need to be using the newer AHCI protocol.  The Advanced Host Controller Interface works with newer hardware, which apparently the old Integrated Drive Electronics doesn't, and offers many more advantages that I didn't care about.  I also didn't care that the crappy old laptops could apparently talk to the brand new SSDs just fine.  I just wanted my computer to boot again.  I found the BIOS setting to enable AHCI, rebooted again when this apparently changed the boot order of the hard drives, and watched the new drive almost load Windows before briefly showing a BSOD (Blue Screen of Death) and shutting down.  I hooked up the old drive, had to change the boot order again, and found that the old drive did the same thing.  This concerned me, because my previous train of thought had been that I had a perfectly fine operating system on the old drive, so if all else failed I could always just put it back in and go about my business.  Luckily, switching back to IDE (which was called Native IDE for some reason) let the old drive boot.  To the Internet!

Apparently, Windows needs special drivers to talk to IDE or AHCI devices.  Since Windows had been installed when the computer was in IDE mode, it had loaded the IDE drivers but had not loaded the AHCI drivers.  Halfway through boot when enough of Windows had loaded so that it took over, it tried to speak IDE to the AHCI port and bad things happened.  The trick was to either re-install Windows (Ha!) or change the registry so that Windows realized it was missing a driver.  I went for the latter solution, which it turns out even Microsoft admits is necessary sometimes.  I soon had the old drive booting in both IDE and AHCI mode.  The new drive still didn't work.  When I cloned the old drive, it didn't have the AHCI drivers loaded.  I had to clone the old drive again, hook the new drive up again, change the boot order of the drives again, and finally I could boot the new drive.  After a couple of reboots, all the drivers and settings and whatnot settled down.  I pulled up my Windows Experience Index.  For Windows 7, it goes from 1.0 to 7.9 and allegedly tells you how kick-ass your computer is.  With the old SSD, my Windows Experience Index was a paltry 5.9 possibly because I was in Native IDE mode or possibly because I had bad drivers or possibly because the Moon was in the wrong phase.  With the new SSD and new mode and new drivers, well, see for yourself:
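For reference, the registry change Microsoft documents for this (KB922976, covering Windows 7 and Vista) boils down to forcing the inbox AHCI driver to load at boot before you flip the BIOS setting.  A .reg fragment along these lines does it, assuming the stock msahci driver (systems using Intel's iaStorV driver need the same Start value under that key instead):

```
Windows Registry Editor Version 5.00

; Tell Windows to start the inbox AHCI driver at boot (Start = 0)
; so the first boot after switching the BIOS to AHCI doesn't BSOD.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\msahci]
"Start"=dword:00000000
```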
I think I can go to bed now.

2015.11.10 One Last HDD/SSD/SSHD Rant

posted Nov 10, 2015, 10:18 AM by Troy Cheek   [ updated Nov 10, 2015, 10:19 AM ]

Not too many years ago, I specced a new desktop system and had to make some decisions about storage space.  I ultimately went with a 100 GB SSD boot drive for about $100 and a 2 TB HDD storage drive for about $100.  I installed the OS and some commonly used programs on the SSD, put my video collection and less-used programs on the HDD, set Windows 7 to default to thinking My Documents and similar folders were on the HDD, and off I went.

Things were fine for a while, but as I installed more programs and collected more data, I found that more and more often I was having to uninstall old programs from the SSD or move them to the HDD.  "Hey, that new game everyone is talking about is on sale!  Not enough space on the SSD!  Well, I'm not using this old game anymore, and this latest video project is finished, so..."  I'd end up spending the whole evening shuffling files around.  By the time I got the game installed, I was too tired to play it.  The thought occurred to me many times that I was doing a lot of work making things easier for the computer, when it was supposed to be the other way around.  The computer is supposed to be working hard to make things easier for me.

I heard about SSHD, a hybrid of SSD and HDD technology.  In theory, frequently used files are kept on the SSD and other data is kept on the HDD, giving you the best of both worlds automatically.  However, the SSD parts of these drives seemed ridiculously small, like 8 GB.  That may not be enough to store a whole OS install to speed up boots and system operations.  If you have 8 or 16 GB of system RAM, that's not even enough to store a hibernation file to speed up returning from hibernation mode.  Obviously, the SSHD concept was flawed, or at least not implemented effectively.

There was another option: combining separate SSD and HDD devices at the OS level.  Basically, it was a roll-your-own SSHD where you combined your existing SSD and HDD into a single device.  All the advantages of an SSHD with sizes you set yourself.  Except that if you're running Windows, you have to shell out big bucks for an "enterprise-level solution" or switch to Linux to have this capability.

My suggested solution was a utility program that did the "whole evening shuffling files around" thing automatically.  You'd install all programs and save all data to the SSD.  The utility program would monitor the SSD.  When it started getting full, the utility would seek out the oldest, least used "stale" programs and data and move them over to the HDD.  With the proper use of symbolic links and similar tricks, the OS and the user don't even realize that the files have been moved.  If it turns out that the moved files are needed again, they can be moved back just as easily.  Again, such a thing exists, but only very expensively or for other operating systems.
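The idea can be sketched in a few lines of Python (the original proof of concept was in BASIC; the directory names, the modification-time staleness test, and the size budget here are all illustrative choices, not what the original did):

```python
import os
import shutil

def migrate_stale(ssd_dir, hdd_dir, budget_bytes):
    """Move least-recently-modified files from ssd_dir to hdd_dir until
    what remains on the SSD fits within budget_bytes.  Each moved file
    is replaced by a symbolic link, so the OS and programs still find
    it at the old path.  Returns the names of the files moved."""
    entries = []
    for name in os.listdir(ssd_dir):
        path = os.path.join(ssd_dir, name)
        if os.path.isfile(path) and not os.path.islink(path):
            st = os.stat(path)
            entries.append((st.st_mtime, st.st_size, path, name))
    total = sum(size for _, size, _, _ in entries)
    entries.sort()  # oldest ("stalest") files first
    moved = []
    for _, size, path, name in entries:
        if total <= budget_bytes:
            break
        target = os.path.join(hdd_dir, name)
        shutil.move(path, target)   # shove it over to the big slow drive
        os.symlink(target, path)    # leave a link behind at the old path
        total -= size
        moved.append(name)
    return moved
```

A production version would also need to recurse into subdirectories, skip files that are open, and move things back when they get "warm" again, which is exactly the bullet-proofing the proof of concept never got.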

I wrote a proof of concept program in BASIC of all things in a few hours, just to show that such a thing was possible.  It worked well enough, but it wasn't something I wanted to trust my data to.  I almost considered hiring a programmer friend to write a bullet-proof version of it for me, but I figured if I was going to spend that much money, I could go with one of the existing solutions.  No one else seemed interested in such a utility, so I went back to shuffling files around manually.  I figured a solution would eventually present itself.

Just the other day, a family member was complaining how slow his laptop was.  I had an unused SSD laying around so I offered to install it for him, the caveat being that he'd have to pare down his data because his HDD was 500 GB and the SSD was 100 GB.  He wasn't willing to do that, but the idea did intrigue him, so we checked current prices.  It turns out you can get a 500 GB SSD for around $150 now.  I ordered one for him, along with a cable that should make cloning the old drive easier.  I ordered one for myself, too.  I'll replace my boot drive and probably spend many an evening shuffling files around.  The unused 100 GB SSD drives will probably become USB backup drives or replace the even smaller drives on the older laptops laying around.

You win again, Moore's Law!

2015.10.16 More Custom DVR Stuff

posted Oct 16, 2015, 7:21 AM by Troy Cheek   [ updated Oct 16, 2015, 7:25 AM ]
I've occasionally talked about my homemade DVR solution.  It's basically a computer that records television for me, which is all a TiVo or other box you get from your cable or satellite TV provider actually is.  My software of choice is SageTV.  For a long time, I could not recommend this software as the company was bought out by Google and it was no longer available for sale.  If you wanted to see it in action, you had to have Google Fiber TV.  However, Google has recently given permission for the old owner of SageTV to open source the program.  The SageTV community has already started to work on a package that will have all the functionality of the old program, with of course improvements on the way.  If all you want is to record television on your computer, this is the program for you, especially since Microsoft has dropped Windows Media Center from the last couple of versions of Windows.

I, however, want to do more than just record television on my computer.  First of all, all the recording is generally done in the now-ancient MPEG-2 file format, the same format used on DVD media and in digital OTA (over the air) and cable TV broadcasts.  It works very well for DVD quality 4:3 standard definition video meant to display on old CRT televisions.  It's not so good for 16:9 video, high definition, or LCD screens.  Problem one is the file size.  SD video runs about 3 GB per hour.  HD video can be as high as 12 GB an hour.  That takes up a lot of hard disk drive space (my first 500 GB dedicated drive could only hold 40 hours of HD video).  And since I might want to watch the videos on my laptop or tablet computers, that limits how many hours I can take with me on the road.  Problem two is that even with a kick-ass computer, a large HD file has so much data that has to be sent through a network cable, decompressed, and displayed that playback sometimes gets choppy.

H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC) in the MP4 file container to the rescue.  This is a much more recent file format.  Even at compression settings approaching "no noticeable difference" it creates files that are usually much less than 1 GB per hour for SD video.  For HD video, well, I do most of my viewing on a small screen, so I don't really need HD video.  As long as I'm converting the file anyway, I can take the opportunity to scale it down to SD resolution.  And some video has black bars where 16:9 content is shown on 4:3 screens and vice versa.  Might as well crop those out while we're at it.
There are a lot of programs that can do that stuff, but why sit in front of a computer and tell it how to crop, scale, and convert each and every file?  Isn't the whole point of computers to make my life easier?  So I've done my best to automate the whole process:
  1. A program called Directory Monitor 2 (DirMon2) watches my video directories for new files.  When it finds one, it fires off a series of other programs that do specific things to the file.
  2. A program extracts the closed caption data (CCExtractor) from the MPEG-2 file and sticks it in a text file.  This data would otherwise get lost when the file is converted from MPEG-2 to H.264 as the MP4 file container does not support it.  I could use MKV or some other file container which does support captions or subtitles, but I prefer MP4 because it plays on all my devices.
  3. A program scans the video and marks commercials for me (comskip).  This creates another text file that most of my players can read.  It also creates a log file that I can scan to find information about aspect ratios, black bars, and the like.
  4. I scan the video with a special tool called ffprobe which tells me things like resolution, frame rate, total length in seconds, etc.  I use this information to make a "sanity check" at the end of the process.
  5. Using data from Step 3, I use a video conversion program (HandBrake) to do a test conversion of 10 minutes or so of the video file, making sure to choose 10 minutes that are the TV program, not commercials.  The results are saved to a log file.  With this log file, I can double-check the black bars and also check to see if the video is interlaced.  MPEG-2 handles both interlaced and progressive video, but there's nothing in the file that says which is which.  On an old CRT television or monitor, interlaced video looks fine.  On an LCD, you can see the lines.  You can deinterlace when you convert, but if the video isn't interlaced to begin with, that makes it look worse.
  6. Using another video conversion program (FFmpeg) I strip out the audio and place it in another file, downmixing it to stereo if necessary.  Some audio is in 5.1 surround sound, which some of my devices can't play.  And it's all too quiet, which leads to...
  7. Using an automated dynamic range compressor program (Wave Booster CLI) I massage the audio to boost the quiet parts and damp down the loud parts.  They apparently use trained monkeys to do sound mixing nowadays.
  8. Using FFmpeg again, I take the video from the original MPEG-2 file and the audio from the massaged file and convert the whole thing to an H.264 MP4 file, doing all the scaling down of HD video, the cropping of black bars, and deinterlacing (if necessary) at the same time.
  9. After using ffprobe again to make sure the converted file is a working video file of the same length as the original, I delete said original, freeing up GBs of disk space.
  10. Directory Monitor checks for new video files and starts the whole process all over again.
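The probe, convert, and sanity-check stages (Steps 4, 8, and 9 above) can be sketched with Python driving the same command-line tools.  This assumes ffprobe and ffmpeg are on the PATH; the filenames, the always-on yadif filter, and the two-second tolerance are illustrative, not the actual pipeline's settings:

```python
import os
import subprocess

def probe_duration(path):
    """Step 4/9: ask ffprobe for the container duration in seconds."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1", path])
    return float(out)

def sanity_check(src_len, dst_len, tolerance=2.0):
    """Step 9: only treat the conversion as good if the lengths agree."""
    return abs(src_len - dst_len) <= tolerance

def convert(src, dst):
    """Step 8: re-encode MPEG-2 to H.264 in an MP4 container.  A real
    run would add crop/scale filters from the comskip log and only
    deinterlace (yadif) when the earlier decomb test said to."""
    subprocess.check_call(["ffmpeg", "-y", "-i", src,
                           "-vf", "yadif",
                           "-c:v", "libx264", "-c:a", "aac", dst])

def process(src, dst):
    """Convert, sanity-check, then reclaim the MPEG-2's disk space."""
    convert(src, dst)
    if sanity_check(probe_duration(src), probe_duration(dst)):
        os.remove(src)
```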

There might be some question as to why I use both HandBrake and FFmpeg, since they both basically do the same thing.  Actually, they can each do things the other can't.  HandBrake has a decomb filter.  This scans each frame and looks for the effects of interlacing, applying the proper filters to only the parts of the picture that actually need it.  This gives better results than just straight converting an interlaced file or applying a blanket deinterlacing filter.  Unfortunately, HandBrake has no way to boost, level, normalize, compress, or otherwise manipulate the audio portion of the video.  HandBrake can't even take video from one file and audio from another.  FFmpeg can combine multiple sources.  (In fact, FFmpeg recently added code from a Dynamic Audio Normalizer project very similar to the old and no longer supported Wave Booster CLI, so I may be able to cut out a step or two.)  I use HandBrake's decomb filter to scan the video to see how interlaced it is to decide whether or not to run FFmpeg's standard deinterlace filter.  Future plans include using HandBrake to decomb the video, then using FFmpeg to combine that video with the normalized audio.  I'm hoping to convince the FFmpeg people to include the decomb video filter, or the HandBrake people to include the normalization audio filter, allowing me to cut out a few steps.
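That planned combine step, with FFmpeg taking the decombed video from one file and the normalized audio from another, might look like this (the filenames are hypothetical; the -map options are standard FFmpeg stream selection):

```python
import subprocess

def build_mux_cmd(video_src, audio_src, dst):
    """Build an FFmpeg command that takes the video stream from one
    file and the audio from another, the combine operation HandBrake
    can't do on its own."""
    return ["ffmpeg", "-y",
            "-i", video_src, "-i", audio_src,
            "-map", "0:v:0",   # video from the first input
            "-map", "1:a:0",   # audio from the second input
            "-c:v", "copy",    # keep the decombed video as-is
            "-c:a", "aac",     # encode the boosted audio
            dst]

def mux(video_src, audio_src, dst):
    subprocess.check_call(build_mux_cmd(video_src, audio_src, dst))
```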

2015.06.22 GFA BASIC 32 mAlloc() Vs the Super Strings

posted Jun 21, 2015, 7:20 PM by Troy Cheek   [ updated Jun 21, 2015, 7:21 PM ]

Because I'm old and set in my ways and too close to death to learn a new programming language, GFA BASIC is the hammer I use on all my nails.  I currently use GFA BASIC 32.  The short version is that GFA BASIC 32 is a modern BASIC which looks and acts more like C than any BASIC someone my age grew up on.  Luckily, while I was in college I took a few C courses.  I also followed the evolution of GFA BASIC, from its humble beginnings as Turbo-Basic XL for the Atari 8-bit line of computers, to GFA BASIC for the Atari ST line, to GFA BASIC 32 for Windows which was first released back about the time Windows 95 seemed like a good idea.  Development stopped about the time Windows XP came out, but with some minor hacks it still works on my Windows 7 64-bit Home Premium Bacon Ranch Edition computer.  There were also versions for the Amiga, MS-DOS, and 16-bit Windows.  All these were created by German programmer Frank Ostrowski, starting in 1985, and it all started because he was between jobs after leaving the military.

But enough history.  Today, we're talking about memory allocation.  For speed and ease of use, if you want to work on a really big data set of some kind, it's easier to load it into the computer's RAM memory first.  To do that, you need to allocate a block of memory by requesting it from the operating system.  In GFA BASIC, the command for memory allocation is mAlloc(), which seems obvious in retrospect.  From the GFA BASIC 32 help file:

mAlloc(n) allocates memory to the program. n specifies the amount of memory to reserve in bytes. If n is -1, mAlloc(-1) returns the size of the largest available block of free memory. The reserved memory block is not initialized.

Actually, there are several negative values you can use for n, which will return things like total free memory, virtual memory (Windows page file), percentage of memory used, etc.  Unfortunately, none of these values are correct if you're using a 64-bit version of Windows.  I guess the Windows memory handler is different or something.

But mAlloc(n) still works with positive values of n.  If you want to read a file that is 1 MB long into memory, you just use addr%=mAlloc(1024*1024), and addr% is a 32-bit signed integer containing the memory address of the start of the memory block you just reserved.  If there's a problem, let's say you don't currently have a full 1 MB of free RAM in one contiguous block anywhere in your system, then addr% will be set to -1.  Always check to make sure that mAlloc() didn't return a -1 before you go on with your program.

Now, 1 MB was a pretty big file, back when 1.44 MB floppy disks were the norm and a computer with 16 MB of RAM was considered overkill.  Today, however, it's not unheard of to work with a video file that's 4 GB or more.  No problem.  My computer has 16 GB of RAM with over 9 GB free at the moment.  We'll just try addr%=mAlloc(1024*1024*1024*4).  Hey, addr%<>-1, so we must have our 4 GB of memory reserved, right?  Unfortunately, wrong.

GFA BASIC 32 is, as the name suggests, a 32-bit program.  Even running on a 64-bit operating system, it's still limited to accessing memory addresses in a 32-bit range.  32 bits translates to 2^32 values, or 4096 MB, or 4 GB.  However, we're working with signed integers here, meaning that they can contain both positive and negative numbers.  It takes 1 bit to show positive or negative, so we're left with 31 bits for the magnitude, which translates to -2^31 to +2^31, or -2048 MB to +2048 MB, or -2 GB to +2 GB.  So, the largest possible memory block we can reserve is 2 GB.
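The arithmetic is easy to double-check in any language; a quick Python sketch:

```python
# A signed 32-bit integer spends one bit on the sign, leaving 31 bits
# for the magnitude, so the usable range is -2**31 through 2**31 - 1.
bits = 32
max_signed = 2 ** (bits - 1) - 1   # largest positive value, 2 GB - 1 bytes
min_signed = -(2 ** (bits - 1))    # most negative value

print(max_signed)                      # 2147483647
print((max_signed + 1) // 1024 ** 2)   # 2048, i.e. 2 GB in MB
```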

But didn't we just successfully allocate 4 GB?  Well, no.  While mAlloc() didn't return the -1 error code, it did return 0.  I'm not sure what that means.  It's not in the help file, but it is a sign of failure of some kind.  I think, theoretically, a successful memory allocation could return an address of 0, just meaning that the memory block starts at the very first memory location (computers like to start counting at 0) this program has access to, but in practice it's an error.  You can't request a memory block bigger than what can be expressed as a 32-bit signed integer.

Now, you might be tempted to try mAlloc(1024*1024*1024*2) to get a block of memory 2 GB long, but 32-bit signed integers can't actually express positive 2 GB because you have to have room for the 0 in the middle (remember, computers like to start counting at 0), so actually the biggest number you can use is 1024*1024*1024*2-1 or 2 GB - 1.

But don't try that, either, because you'll get a -1.  You see, while GFA BASIC 32 programs can access 2 GB or so of memory, they reserve some memory for themselves for stacks and heaps and variables and file buffers and housekeeping.  To make a long story short (too late!), the biggest block of memory you'll ever successfully allocate is a bit over 1.25 GB.  I limit my memory blocks to 1 GB even, just to be sure.

However, I grew up not using mAlloc() for memory allocation.  You see, there's a corresponding command mFree() which tells the memory manager that you're finished working with this memory block and it's available for use by other programs.  Back in the Atari ST days, the memory manager wasn't very smart.  If a GFA BASIC program allocated memory, it stayed allocated until it was set free.  If the program crashed or abnormally ended or was just programmed incorrectly, it would not free the memory.  It then became impossible to free the memory because the location of the block was lost when the program ended and all its variables were cleared, namely the addr% or whatever variable.  The block would stay allocated until the Atari ST was rebooted.  Since I was writing programs that would crash or at least end prematurely, they wouldn't always clean up after themselves, so I'd eventually run out of free memory and have to reboot my ST.  Luckily, modern versions of Windows have better memory management, and memory blocks are almost always freed when the program ends.  At worst, I've had to exit and restart the GFA BASIC editor.

So, if you're not using mAlloc(), how do you allocate memory blocks?

Enter the Super Strings!  (fanfare!)

Back in the Atari 8-bit days, Atari BASIC (and, of course, Turbo-Basic XL) handled strings differently than other BASICs of the day.  For most home computer BASICs, string variables (a$ = "Hi there!") were limited to a length of 255 or so characters (bytes).  That's perfectly acceptable if you're using string variables to hold a person's name or address or a single line of text.  However, if you need a larger data structure, 255 bytes is a bit small.  Atari BASIC allowed you to have "super" strings.  As it was described at the time, a single string variable could be as long as available memory.  Of course, back then home computers had 16 KB of RAM, but that's still pretty long for a string.  Later Atari computers had 32 or 48 or even 64 KB, and we learned that Atari's super strings were limited to 32,767 bytes or 32 KB - 1 (remember, you have to leave room for the zero).  Still, that's pretty darn big.  Eventually, pretty much all BASICs allowed strings this big, but Atari was the first in 1979!

Why use strings?  Well, back then we didn't have mAlloc() or similar memory management tools.  Strings were a good way to allocate memory without having to pull out the memory map (seriously, we had a book describing all 65536 possible memory locations) and find an open area (and hope that some other program didn't also want to put data there).  You could use the instr command to rapidly search for text or byte sequences.  You could put sprite (primitive graphic object) information in a string and move it around the screen by moving bytes within the string using very fast BASIC commands.  Short machine language routines could be placed in string variables.  Screen graphics could be buffered by putting them in string variables.  Super strings were great!

GFA BASIC for the Atari ST continued the trend, and I remember using string variables for file buffers and the like instead of mAlloc().  I continued the practice when I started using GFA BASIC 32.  It was easier to use buf$ = space$(1024) for a short file buffer than to fiddle with mAlloc() and the like.  I got to thinking, though, what is the limit of super strings in GFA BASIC 32?

Trying to create a string variable longer than 2 GB fails, of course.  I tried various other big numbers.  They resulted in messages like Out of String Memory, Access Violation, etc.  To make a long story short, the longest super string in GFA BASIC 32 is 256 MB - 1.  Now, that's an odd number, and maybe even a prime number, so I generally limit my super strings to 128 MB to make things even.  However, you can create multiple string variables.  You can create ten 128 MB strings before you run out of memory, for a combined total of 1.25 GB.  This is the same limit as we discovered for mAlloc().

Of course, strings aren't the only variables.  You can create an array of 32-bit integers, or 4 bytes per array element.  Dim a%(1024*1024/4) would give you 1 MB of memory.  I'll save you the trouble of trying it out.  The largest amount of memory you'll be able to reserve this way is about 1.25 GB.

The main difference between using variables and mAlloc() is that if you try to create a variable too big for available memory, your program stops with an error message.  If you try to create a memory block with mAlloc() that's too big for available memory, you just get an error code, which you can check for and program around.  What follows is a simple routine for allocating the largest memory block that fits within memory constraints:

dim memloc%, memlen% as int32
memlen% = 1024 * 1024 * 1024 ' start by requesting 1 GB
memloc% = mAlloc(memlen%)
while memloc% < 1 ' mAlloc() returns -1 (or 0) on failure
    memlen% = memlen% / 2 ' halve the request and try again
    memloc% = mAlloc(memlen%)
wend

This starts by trying to allocate 1 GB.  If that fails, it tries 512 MB, then 256 MB, 128 MB, and so on.  I like programming with round numbers.  Very quickly, the largest block of free memory will be determined, located at memloc% and memlen% bytes long.  Remember to mFree(memloc%) when you're finished!

2015.06.21 Hybrid SSD/HDD/SSHD Windows Software Technical Specifications

posted Jun 21, 2015, 3:31 AM by Troy Cheek   [ updated Jun 21, 2015, 6:56 AM ]

What I'm asking for is a simple, plain, automatic hybrid utility that can move files from the SSD to the HDD and back.  It will sit silently in the background watching the SSD drive, scanning all files except in folders we've told it to exclude, like maybe the Windows folder because that's probably best left on the SSD anyway.  When the SSD drive gets filled to a user-specified amount, say 80%, the utility will check the SSD looking for the least used files or the oldest files or the files whose "accessed by" date is oldest or whatever's easiest to program.  Let's call these "stale" files.  Enough of these stale files will be moved to the HDD to free up enough space to bring the SSD under 80%.  Symbolic links or junctions will be created so that as far as the user is concerned, these stale files are still on the SSD.  If he tries to access the files, he'll be able to in the usual fashion, and he might not even notice that it's loading off the slower HDD.  If a file is accessed like this, it's no longer considered stale.  The utility will at the first opportunity silently move the file back to the SSD and remove the symbolic link.  Optionally, if SSD usage drops below a certain threshold, say the user defines this as 50%, then the freshest of the stale files will be moved back from the HDD to fill the SSD to that level.

In a previous article, I talked about how I had this flash memory SSD that was new and fast but unfortunately small and expensive. I also had this standard mechanical HDD that was old and slow but also very large and cheap. I talked about how I had to manually decide which games and applications and data I had to put on which drive to get the perfect balance between speed and storage space. You'd think this would be something your computer could do for you automatically. And, it turns out, if you're running a recent Linux or Mac OS X or "enterprise" version of Windows, your computer can do this for you automatically. If you're running standard Windows 7 like I am, you can't.  I didn't like that and set out to see if such a thing was possible.  With a few hundred lines of BASIC code, I proved that the project was technically feasible, almost simple.  I'm not a good enough programmer to write something I'd want to entrust all my data to, but I might be a good enough communicator to describe what I'm asking for in terms that would let a good programmer create what I want.  What follows is my attempt at such a description.  Since my weakest point in programming has always been the user interface, I'll be describing the software in terms of the GUI.

Configuration Screen

This is where the user sets all the options for the program.  The first time the program runs, this should be the first screen they see.  Options should include...

Set SSD:  This lets the user specify which drive should contain the "hot" files, the games or applications or data that the user uses often.  It doesn't have to be an SSD; it could just be a faster drive or an internal drive as opposed to an external one.  NAS is outside the scope of this specification.  Default would be the smallest fixed drive.

Set Free Space:  I read somewhere that an SSD drive operates most effectively if it has 25% free space.  (My 120 GB SSD should then have 30 GB free instead of the current 18.  This whole project started because I had 12 GB free and wanted to install a Steam game that required 15, causing me to spend half the night moving files around instead of playing the game I'd just spent $30 on.)  It could be set as a percentage or perhaps a number of GB.  When "leveling" the drives as explained later, this would be the goal the program strives toward.  Default would be 25% of SSD.

Set HDD:  This lets the user specify which drive/folder should contain the "cool" files, the games or applications or data that the user hasn't used in a while but wants to keep available.  ("Cold" files would be those stored in archives or offline storage.)  It can be an internal or external drive, but not NAS because reasons.  Default would be a folder on the largest fixed drive.  Check to make sure the user specified a folder and not the whole drive.  Part of the beauty of this process is that the user can continue to use the HDD as normal if he wants to manually move files around on it.  We might want to hide the folder.

Always Ignore:  Allow user to specify folders or file extensions or individual files to exclude from the whole hot/cool game.  I think it would be a good idea to ignore the WINDOWS folder, system temporary folders, temporary files, and the like.  A professional video editor would probably want his current project folder to be ignored.  Default would be WINDOWS, TEMP, TMP, and whatever folder we've installed this program in.

Always Ignore Files Smaller Than:  In my tests, I found that you sometimes have to move a lot of small files to equal moving one large file.  Which would you rather work with: a thousand 1 kilobyte files or one 1 megabyte file?  This option allows users to ignore smaller files.  Default size of 1 MB.

Always Process:  The opposite of Always Ignore, specifies folders or extensions or individual files which should always be moved to the cool drive.  Examples would include archives, downloads, torrent files, etc.  These are files that you know you aren't going to use often or don't need to access quickly or plan to sort into another folder if you actually start using them.  No default suggested.

Always Process Files Larger Than:  Some files are huge and even if they're used daily, it's unlikely that the entire huge file is loaded in one fell swoop.  A video file viewer or editor is probably only going to load a relatively small section of the file at a time.  A game loading a huge game data file is probably going to load part of the file, process it, load more, process, etc.  In other words, for most really large files, disk access time isn't always the bottleneck; it's what you do with the files as/after you load them that slows things down.  So, in order to free up space quickly and easily, always move large files to the cool disk.  Default size of 512 MB.  (On my whole SSD, I found about 6 of these, and most were old test files or installation files that I thought I'd already deleted.)

Process Automatically:  Allow the user to decide if/when the program would level the drives automatically.  This would include options like "nightly" (every day at midnight or 2:00 AM or whenever), "when computer is idle for" (no user interaction for X hours or when the screen saver kicks in), "at start up" (when Windows first starts or reboots), "at shut down" (duh), and "low disk warning" (Windows throws a warning in a system log when free space is less than 10%).  I think most/all of these options can be accomplished with Windows Task Scheduler, meaning that the program doesn't have to worry about implementing any type of monitoring or scheduling option; it just has to know how to schedule.

Scan SSD

This command scans all the files in all the folders on the drive specified as the SSD or hot storage.  What we're scanning for is the file name, complete path, size, and file created date (or modified or accessed).  Yes, that's potentially thousands of folders and millions of files (in my case, 310K files in 36K folders), but if we ignore hidden files/folders and system files/folders and user-specified files/folders and files smaller than the default 1 MB, the numbers are more manageable (in my case, less than 9K files).  This info should be prettied up and presented to the user in a nice list sortable by any of the criteria (bonus if you use bubble sort!) including a notification as to whether the file should be moved into cool storage and why.  Reasons for moving the file into cool storage include:  file is in an Always Process folder or wildcard match, file is too big as previously specified, or file is too old or hasn't been accessed in a while and would count towards the free space goal.  If there's enough free space, we shouldn't be moving any files.

Determining the last accessed time might be a bit tricky.  The file may have been created years ago and modified months ago, but if it's read multiple times every day we don't want to move it.  NTFS keeps track of file creation, modification, and last accessed times, but since the time of Windows Vista keeping track of last accessed time has defaulted to disabled.  Apparently, there's a tiny performance hit every time a file is accessed if this option is enabled, but I've enabled it and haven't noticed a difference.  Some have reported that last accessed times sometimes are the same as creation time, the same as modified time, or seemingly random.  And, of course, if the user has been running with accessed times disabled, the last accessed time is going to be incorrect and probably set to modified time.  By the way, do not check last accessed time by right clicking on the file and choosing Properties, because that counts as accessing the file and will change the time.  It took me an embarrassingly long time to figure that one out.  Instead, modify your folder view to add a Date Accessed column or use the dir /ta command.  There may be other file metadata maintained by Windows or NTFS that will help us here that I don't know about.  The point is that we're trying to do this whole process without having to install some kind of monitoring program or system hook to keep track of every single file as it's being accessed in real time.  Unless it's easier for you to program it that way, in which case you totally kick ass and I want to have your babies.
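As a rough illustration of the scan (in Python rather than the BASIC I'd actually use, and with the folder names and the 1 MB threshold taken from the defaults above as assumptions), the whole thing boils down to a file walk, a size filter, and a sort by last-accessed time:

```python
import os

MIN_SIZE = 1024 * 1024  # ignore files smaller than 1 MB, per the default above

def scan_stale(root, ignore_dirs=("Windows", "Temp")):
    """Return (atime, size, path) tuples for candidate files, stalest first.
    A simplified sketch: real code would also handle hidden/system files
    and case-insensitive folder matching."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune ignored folders in place so os.walk doesn't descend into them
        dirnames[:] = [d for d in dirnames if d not in ignore_dirs]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is locked; skip it
            if st.st_size >= MIN_SIZE:
                candidates.append((st.st_atime, st.st_size, path))
    candidates.sort()  # oldest access time first = stalest first
    return candidates
```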

Once the information is presented to the user in a nice list, the user should be able to right click on any file or folder and make some selections such as adding it to the Always Process group, adding it to the Always Ignore group, or "freshening" the file (changing Date Accessed to current time), or just pinning this file to SSD or HDD.  I will probably use this option to check out all the files and say about most of them, "What?  That's still around?  I thought I deleted that file years ago!" or "Oh, that's right!  I thought that game sounded cool and downloaded it but must have never played it."

Move to Cool Storage

Or whatever we're going to call the command that actually moves the files from the SSD to the HDD.  I keep wanting to call it cold storage, but in IT terms that apparently means data that is packed into archive files or stored offline somewhere and isn't immediately available.  We're going to move these files to new locations.  Make sure that the Date Accessed file attribute doesn't change, because having every file we move look like it was just accessed would make things difficult later.  To keep the files available at the old locations, use the NTFS mklink command to create symbolic links in the old locations pointing to the new locations.  This command is, I'm told, only available from Windows Vista on.  There is a "junction" program available that does much the same thing and works on older Windows, but a) it only works on folders and not individual files, and b) if you're using an SSD on Windows XP/2000/3.1.1 then you've probably got bigger problems than hybridizing your SDD with your HDD.  A symbolic link looks like a standard shortcut file of 0 KB but acts like a super shortcut invisible link to the file at a different location, whether a different folder, different partition, or different drive.  By creating symbolic links, the user or application or game or the operating system itself can access the file at the old location without noticing a difference.  We just need a little bit of error checking.  If a file can't be moved because it's in use or locked or read only, it probably needs to stay where it is.  If the mklink command fails and we can't create the symbolic link, we probably need to move the file back where it was.  What I did was to replicate the folder structure of the SSD on the HDD (in a hidden folder) for any file I wanted to move.  That way I didn't have to create a database keeping track of what file came from where.

If the Move command immediately follows the Scan command, we can get right to work.  If not, we probably need to Scan before we Move.
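The move-and-link step might look something like this Python sketch (paths are hypothetical; on Windows, os.symlink creates the same kind of link as mklink, and creating file symlinks there requires administrator rights or developer mode):

```python
import os
import shutil

def move_to_cool(src, ssd_root, hdd_root):
    """Move src from the SSD tree to a mirrored path on the HDD tree,
    leaving a symbolic link behind at the old location."""
    rel = os.path.relpath(src, ssd_root)       # mirror the folder structure
    dst = os.path.join(hdd_root, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    # shutil.move uses copy2 semantics when crossing drives, so the
    # file's timestamps (including Date Accessed) survive the move
    shutil.move(src, dst)
    try:
        os.symlink(os.path.abspath(dst), src)  # old path now points at new home
    except OSError:
        shutil.move(dst, src)                  # couldn't link; undo the move
        raise
    return dst
```

Mirroring the SSD's folder structure under the HDD folder, as the sketch does, means no separate database is needed to remember where each file came from.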

Scan the HDD

This command scans all the files in all the folders of wherever the user specified for the HDD or cool storage.  In addition to what we scan for when we Scan the SSD, we also need to check for the presence of a symbolic link at the old location.  If the symbolic link is missing, then maybe the user moved or deleted the file.  We probably need to do the same, but only after asking the user about it.  The nice list of files should mention this, along with any files that have recently become "hot" again.  If the Date Accessed file attribute works like I think it does, accessing the cool file through any means (symbolic link at old location or file itself at new location) should update the Date Accessed time on either the link or on the moved file or both.  This will give us an indication that the file was used since the last time the program was run without having a background process constantly checking file handles or something.  The nice list of files would allow the user to right click on any file or folder and make selections as described in Scan the SSD.
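Checking link health during the HDD scan is only a few lines; here's a hedged sketch (the status strings are placeholders, not any real API):

```python
import os

def link_status(old_location, cool_file):
    """Classify what happened to a cooled file since the last run."""
    if not os.path.exists(cool_file):
        return "cool file missing"      # user moved or deleted it on the HDD
    if not os.path.islink(old_location):
        return "link missing"           # user deleted the link; ask before acting
    if os.path.realpath(old_location) != os.path.realpath(cool_file):
        return "link points elsewhere"  # something rewired the link
    return "ok"
```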

Move to Hot Storage

Or whatever we're going to call the command that moves the file back from the HDD to the SSD.  All you have to do is delete the symbolic link and put the file back at its old location, making sure to update the Date Accessed file attribute to the current date.  Again, if the file can't be moved, leave it where it is and re-create the symbolic link.  As before, unless you've just used Scan on the disk, you'll have to Scan before doing this step.
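The reverse operation is simpler; another hedged Python sketch with hypothetical paths (note that os.utime with no times argument bumps both the accessed and modified times to now):

```python
import os
import shutil

def move_to_hot(old_location, cool_file):
    """Remove the symbolic link and put the file back on the SSD,
    bumping its timestamps so it reads as freshly used."""
    if os.path.islink(old_location):
        os.remove(old_location)           # deletes the link, not the target
    shutil.move(cool_file, old_location)  # file returns to its old home
    os.utime(old_location)                # set access/modified times to now
    return old_location
```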

Level the Disks

This definitely needs a better name.  This combines both Move to Cool Storage and Move to Hot Storage.  Remember to Scan first.  At any given time, let's assume that the user has both some cool files on the SSD that can be moved to cool storage and some hot files on the HDD that definitely need to be moved back to hot storage.  While we could just move all the hot files from HDD to SSD, that might lead to a case where we have 30 GB of files we want to move and only 20 GB of free space.  So we might want to move 10 GB of cool files from the SSD to the HDD first.  There is also the goal of space we want to keep free on the SSD.  We'd end up moving more cool files from the SSD to the HDD just to free up space.  This might be done at the start or in sections, moving a few files one way and then the other, just in case the user gets bored and wants to cancel the operation half way in.  Hmm.  Start by bringing the SSD down to the free space goal plus an additional 5 GB by moving cool files to the HDD.  Move 5 GB of hot files from the HDD to the SSD.  Repeat until done.
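That alternating strategy can be sketched as a loop (Python pseudologic; the move_cool and move_hot callbacks stand in for the real file-moving routines and are my invention, not part of any existing API):

```python
GB = 1024 ** 3
CHUNK = 5 * GB  # work in 5 GB slices so canceling mid-way isn't catastrophic

def level(ssd_free, free_goal, hot_bytes, move_cool, move_hot):
    """Alternate between freeing SSD space and bringing hot files back.
    move_cool(n) and move_hot(n) are hypothetical callbacks that move
    about n bytes and return the number of bytes actually moved."""
    while hot_bytes > 0:
        if ssd_free < free_goal + CHUNK:
            # free up to the goal plus one extra chunk of headroom
            ssd_free += move_cool(free_goal + CHUNK - ssd_free)
        step = min(CHUNK, hot_bytes)
        moved = move_hot(step)   # bring a slice of hot files back
        if moved == 0:
            break                # nothing could be moved; stop rather than spin
        hot_bytes -= moved
        ssd_free -= moved
    return ssd_free
```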

Also, this may not be a problem, but we want to avoid a case of data ping ponging.  We don't want to move a file to the SSD, only for it to take up space requiring we move other files to the HDD, only for the user to want those files back on the SSD, only for the program to move the original file back to the HDD, other files taking up space, ad infinitum.  Unless the user is accessing a lot of huge files daily and has a microscopic SSD, I can't see this as being a major problem, but I thought it should be mentioned.

Automatic Operation

Technically, this isn't part of the GUI.  It's how the program works when it's in automatic mode.  Ideally, it should be as simple as scanning both disks, then moving files around as described in Level the Disks.  I'd personally like to do this at 2 AM every day, but I understand some people don't leave their computers running all night, so they may wish to run the program automatically when the computer is idle, or whenever the computer starts up, whenever the computer is shut down, or just manually.  We need a lot of options because I can see any one of these options pissing off somebody for some reason.  I personally know people who would conceivably schedule a task for 2 AM, shut off their computer at 9 PM, and then wonder why the task was just now running the next morning.  I can see people upset that Windows is taking too long to start up.  I can see people upset that Windows is taking too long to shut down.  Part of the reason for having an SSD in the first place is to make Windows start and shut down faster.  Another reason is so games and applications can start faster, but if we accidentally slow down their favorite game because we moved a file to cool storage, they'll get upset, even if moving the file didn't really slow anything down or if they only think we moved the file.  If we have to interrupt saving some file because they've run out of disk space and we have to free some up for them, they'll get upset even if it meant they wouldn't be able to save the file unless they freed up space manually in the first place.  In other words, I want to somehow make this program easy and convenient for people who understand the reasoning behind it and completely unavailable to people who don't.

The program would have to be sure to generate plenty of log files for those of us trying to debug programs or who just like to look at such things.

The Philosophy Behind All This

I heard that SSD drives could really speed up a computer, but I couldn't afford one big enough to hold all my files.  I supplemented my storage space with a large mechanical HDD.  I put my larger files on the HDD.  I reconfigured my Windows Documents and similar folders to point to the HDD.  As space on the SSD became scarce, I moved more files to the HDD.  If I needed more speed, I'd move things back to the SSD.  This sometimes involved uninstalling and re-installing games and applications to a different drive.  It sometimes meant installing or saving new things on the HDD and avoiding the SSD altogether.  Then it hit me:  Why am I doing all these things to make it easier on the computer when it's the computer's main function to make life easier for me?

I decided that it would make more sense for me to always install or save files to the SSD and let the computer decide what files needed to be moved where for fastest access times and most storage space.  I began searching for a way for the computer to do that for me.  Solid State Hybrid Drives (SSHD) sounded like an answer, but then I read that most of them had microscopic amounts of flash RAM (the SSD part) which was used mostly as a cache for the HDD part.  Even the ones with a larger SSD part seemed to move all the data to the HDD part and then maybe move the most frequently used data back to the SSD part eventually.  That wasn't what I was looking for.

Then I read about Apple's Fusion Drive which operated like what I wanted.  Data is saved to the SSD part, optionally mirroring it to the HDD when the drive is idle.  When the SSD gets full, data is moved to the HDD (or, if it was mirroring all along, simply deleted from the SSD).  If you start using data from the HDD, more space is cleared on the SSD and the hot data is moved there.  Exactly what I was looking for!  Unfortunately, I'd need to buy a new Fusion Drive and move all my data there.  Then I'd need to buy a Mac because this product only worked with one.  But, wait!  It turns out that Fusion Drive only works with Apple products because all the hard work is done by the Mac OS X operating system.  In fact, OS X can provide the same function with separate SSD and HDD devices.  Exactly what I was looking for!  Unfortunately, that meant I'd have to sell my Windows computer and buy a Mac, or figure out how to run OS X on my existing computer.  Let's throw away 20 years of Windows programs and knowledge.

Then I read about Linux and btier, which again operated exactly like what I wanted.  Again, I'd have to scrap Windows to use it.

Then I read about certain data tiering options for "enterprise" level operations using Windows Server 2012 R2.  Exactly what...  Oh, who am I kidding?  While it's basically a "data center" version of Windows 7/8, it's overkill for a single-user desktop system and costs somewhere in the arm/leg/testicle range.  Or maybe not.  There's about 5 different editions, some less expensive than the others, but I'm not sure which provide for data tiering.  If I understand what I read correctly, I'd have to at the very least install a new version of Windows and convert my drives to the new ReFS, which will probably mean losing all my data.  I think I can still use all my existing programs.

Update!  I forgot about Intel's Smart Response Technology.  It does mostly what I want, but it requires an Intel CPU and certain Intel chip sets on the motherboard.  I don't have those, so I'd have to scrap my current computer and buy another just to get that capability.

The thing is, to get this functionality, I shouldn't have to change hardware or operating systems or buy new drives.  While it's a crappy little BASIC program that I threw together in a few days, I've got proof of concept that this functionality can exist on NTFS file systems on consumer versions of Windows starting with Vista.

If you can program your way out of a paper bag and are interested in coding this project for me, please let me know.

2015.06.18 Roll Your Own Hybrid SSD/HDD (SSHD)

posted Jun 18, 2015, 4:26 AM by Troy Cheek   [ updated Jun 19, 2015, 1:59 PM ]

I just spent the last couple of days removing some old games and programs from my SSD drive.  I had to install the last Steam game I bought on the HDD because I was running out of space.  Like many people, I cheaped out (relatively speaking) when I had my last computer system built.  I bought a small SSD (120 GB) for my operating system and frequently used programs.  I also bought a large HDD (2 TB) for storing less-frequently used programs, data, huge video files, mp3 collections, etc.  The small SSD cost me about as much as the huge HDD.  I could probably buy bigger and faster ones today for half of what I spent a few years ago.  But as I was deleting some files I didn't think I'd ever use again and moving some others to the storage drive, I got to thinking:  Why is this necessary?  I've got a computer with a million times the processing power it took to fake putting a man on the moon.  Why am I manually moving files around?  Isn't this the sort of menial task my computer is supposed to be doing for me?  Isn't my computer supposed to be doing things to make my life easier instead of the other way around?

SSD = Solid State Drive
HDD = Hard Disk Drive
SSHD = Solid State Hybrid Drive

When I first set up my new system, I made some compromises.  Windows 7 was installed on the SSD.  Power button to usable desktop in 17 seconds!  The big video files I was editing were created, edited, and rendered on the HDD.  I told Windows 7 to put my Documents, Pictures, Music, and Downloads on the HDD.  I installed games and programs on the SSD, but configured them to put downloaded levels and other data on the HDD.  As the SSD filled up, I manually deleted files or moved them to the HDD.  I even turned on drive compression to squeeze a few extra gigabytes out of the SSD.

Surely, there's some way to combine the fast access times of an SSD with the storage capacity of the HDD.

It turns out there is a way and don't call me Shirley.

It turns out there's such a thing as a hybrid drive, combining a small SSD and a large HDD in a single package.  To the computer they're connected to, they look like a single drive.  The SSD might be as small as 4 or 8 GB, which is smaller than the smallest dedicated SSD.  The hybrid drive uses this as a disk cache in different ways depending on the implementation.  In some implementations, drive writes go straight to the HDD.  Drive reads come from the HDD.  But as you read more and more, the drive figures out what files are used most often and moves those to the SSD.  Eventually, for frequently used files at least, you get the read performance of a full SSD drive.  Writes are HDD speed, however.  In other implementations, all drive writes go to the SSD first.  When the drive is idle, the data is copied to the HDD.  For drive writes, at least those smaller than the size of the SSD, you have full SSD performance.  If you read the data before it's purged, you get SSD read speeds as well.  I'm sure some implementations combine the two in some way, shape, form, or fashion.
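The first, read-caching flavor boils down to "count reads, promote the popular files."  Here's a toy file-level model in Python (real hybrid drives do this in firmware at the block level, and the class and threshold here are my own inventions, so treat this purely as an illustration of the idea):

```python
import os
import shutil
from collections import Counter

class ToyHybrid:
    """Toy model of the read-caching hybrid described above: reads are
    counted, and once a file has been read often enough it gets promoted
    to the SSD.  Real drives do this in firmware at the block level;
    this works on whole files purely to show the idea."""

    def __init__(self, ssd_dir, hdd_dir, promote_after=3):
        self.ssd, self.hdd = ssd_dir, hdd_dir
        self.promote_after = promote_after
        self.reads = Counter()

    def read(self, name):
        hot = os.path.join(self.ssd, name)
        if os.path.exists(hot):               # already promoted: SSD speed
            with open(hot, "rb") as f:
                return f.read()
        self.reads[name] += 1                 # still cold: HDD speed
        cold = os.path.join(self.hdd, name)
        with open(cold, "rb") as f:
            data = f.read()
        if self.reads[name] >= self.promote_after:
            shutil.copy(cold, hot)            # frequently read: promote it
        return data
```

Note that writes never touch the SSD in this flavor, which is exactly why the author finds it unsatisfying.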

It turns out that you don't need a special hybrid drive to get hybrid performance.  There's at least one Windows hardware product that lets you hook up an SSD and a HDD to the same device.  This hardware uses the SSD as a cache for the HDD, much as described above.  There's at least one way to do the same with software in Mac OS, at least until Apple figures out that's cutting into the sales of hybrid drives and disables that utility.  I'm sure Linux has this problem beat all to hell already.

The problem with most of these products, however, is that they use the SSD as a sort of temporary home for data as it moves to and from the HDD.  That's not what I bought an SSD for.  I bought it to store my data quickly.  I don't want to load a program a dozen times and maybe see launch times decrease because some controller finally realized that I'm running it a lot.  I don't want a 120 GB drive that sits idle most of the time, gets partly filled when I save a project file, and then just shuffles that data over to the HDD.

What I'd like is a product which lets me use my SSD as an SSD.  Only when the SSD starts getting full are old, seldom used files automatically moved over to the HDD.  If I start using the file again, it's automatically copied back to the SSD.  This process seems simple to me.  I don't see why it's so difficult.  I don't see why I need a special part SSD/part HDD drive, a certain Intel motherboard chipset, a chunk of fancy hardware, Mac OS, Linux, or something other than plain old Windows and a couple of standard drives.

It turns out there is a partial solution.  SSD Boost Manager will allow you to move files from your SSD to your HDD.  It will even create symbolic links or junctions or whatever they're called.  This means that as far as you or the operating system or the files themselves are concerned, they're still in the original location.  Furthermore, you can move the files back at any time.  It's great.  It's fine.  It's wonderful.  It's French!  I speak a little French, but generally I am le sucks at it.  I don't want to trust my data to my imperfect understanding of the language.  According to reviews, there's a way to switch the interface to English, but that option doesn't appear to be available in the only version I've been able to locate and download.  Also, the project seems to have been abandoned since 2011, about 4 years ago as of this writing.  The biggest problem is that SSD Boost Manager automates a lot of things, but it's still up to the user to decide what to move from SSD to HDD.  As I mentioned before, the computer is supposed to be making my life easier, not the other way around.

What I'm asking for is a simple, plain, automatic hybrid utility that can move files from the SSD to the HDD and back.  It will sit silently in the background watching the SSD drive, scanning all files except in folders we've told it to exclude, like maybe the Windows folder because that's probably best left on the SSD anyway.  When the SSD drive gets filled to a user-specified amount, say 80%, the utility will check the SSD looking for the least used files or the oldest files or the files whose "accessed by" date is oldest or whatever's easiest to program.  Let's call these "stale" files.  Enough of these stale files will be moved to the HDD to free up enough space to bring the SSD under 80%.  Symbolic links or junctions will be created so that as far as the user is concerned, these stale files are still on the SSD.  If he tries to access the files, he'll be able to in the usual fashion, and he might not even notice that it's loading off the slower HDD.  If a file is accessed like this, it's no longer considered stale.  The utility will at the first opportunity silently move the file back to the SSD and remove the symbolic link.  Optionally, if SSD usage drops below a certain threshold, say the user defines this as 50%, then the freshest of the stale files will be moved back from the HDD to fill the SSD to that level.

And that's it.  I'm fairly tempted to try to code something like this myself.  Windows 7 ships with the "last access" functionality disabled by default, but you can fix that by running the command fsutil behavior set disablelastaccess 0 with administrator privileges.  You can create symbolic links by using the mklink command.  The hard part would be scanning all the files on a drive and keeping track of which files had been copied, so you'd know where to move them back to.
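The core move-and-link step is easy enough to sketch.  Here's a rough cut in Python (the function name and the cross-platform os.symlink call are my own stand-ins; a real Windows 7 version would shell out to mklink and would want the fsutil tweak so last-access times can be trusted):

```python
import os
import shutil
import time

def migrate_stale_files(ssd_dir, hdd_dir, max_age_days=30):
    """Move files not accessed in max_age_days from ssd_dir to hdd_dir,
    leaving a symbolic link behind so existing paths keep working.
    (Hypothetical helper, not a finished utility.)"""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for root, _dirs, files in os.walk(ssd_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.islink(src):
                continue  # already migrated earlier
            if os.stat(src).st_atime < cutoff:  # "stale" by last-access time
                rel = os.path.relpath(src, ssd_dir)
                dst = os.path.join(hdd_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                os.symlink(dst, src)  # user still "sees" the file on the SSD
                moved.append(rel)
    return moved
```

A companion routine going the other direction (delete the link, move the file back) would handle the "no longer stale" case.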

Anybody know the best way to keep track of 121,955 files in 13,448 folders?
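For scale, that's nothing a single-table SQLite database (bundled with Python as sqlite3) can't handle; a couple hundred thousand rows is tiny.  A hypothetical tracker, with table and function names of my own invention:

```python
import sqlite3
import time

def open_tracker(db_path):
    """A tiny database mapping each migrated file's original SSD path
    to its new HDD home (a sketch, not a finished utility)."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS moved (
                       ssd_path TEXT PRIMARY KEY,
                       hdd_path TEXT NOT NULL,
                       moved_at REAL NOT NULL)""")
    return con

def record_move(con, ssd_path, hdd_path):
    con.execute("INSERT OR REPLACE INTO moved VALUES (?, ?, ?)",
                (ssd_path, hdd_path, time.time()))
    con.commit()

def lookup(con, ssd_path):
    row = con.execute("SELECT hdd_path FROM moved WHERE ssd_path = ?",
                      (ssd_path,)).fetchone()
    return row[0] if row else None
```

With the mapping persisted on disk, the utility can survive reboots and always knows where to move a file back to.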

Update on June 19, 2015

I've done some more research.  Apple Mac OS X has something called a Fusion drive which combines an SSD and a HDD into one device that stores data on the SSD part and then moves data back and forth to the HDD part depending on what's most used.  And it turns out that this functionality is part of the OSX operating system (Core Storage?) and not the hardware, so with a little effort you can do the same with your own separate SSD and HDD devices.

In the Windows world, there's Storage Spaces and Tiered Data.  Apparently, in Windows Server 2012 R2 Bacon Ranch Edition, you can set up your SSD and HDD into a single logical drive where the "hot" frequently used data is automatically moved to the SSD while the "cold" infrequently used data is moved to the HDD at the sub-file level.  Exactly what I'm looking for.  Unfortunately, data tiering seems to be available only in the server editions, and Storage Spaces is only available in Windows 8 or later.  In fact, I read somewhere that Storage Spaces is "intentionally incompatible" with Windows 7.

Now, I can understand that some programs won't run on some versions of Windows because a feature they depend on isn't available or works differently.  I don't think I can ever understand intentionally writing a program that will only work on one version of Windows when it could work on some/many/all of them.

Speaking of "it could work"...

I'm about halfway through coding a utility to automatically move files between SSD and HDD depending on usage.  This GFA BASIC 32 program clocks in so far at 240 lines.  The compiled version is 10 KB.  I'm going to say that again: halfway done with this project and I'm at 240 lines of BASIC.  The fact that someone else hasn't done this already is scary.  This is a one-banana problem, people!  I'm tempted to set up some kind of open source project and set a bounty just to see this done by someone who actually knows how to code.

2015.06.15 Encrypting Data with GFA BASIC 32

posted Jun 15, 2015, 5:31 PM by Troy Cheek   [ updated Jun 15, 2015, 5:31 PM ]

Please let us suppose that, for one reason or another, you've decided to encrypt some data on your hard drive.  You want to keep this data around, but you don't want anyone else to be able to read it.  Luckily, there are many, many, many programs out there that will allow you to encrypt your data, many using very complicated and sophisticated methods that are, for all practical purposes, unbreakable.

There are two problems with using one of these readily available programs:
  1. If the program is known, then the encryption method it uses is known, leaving it open to brute force attacks.
  2. If it's known you have encrypted files, someone can hold a gun to your head and force you to type in your password.
A brute force attack means using a computer (or multiple computers) to try each and every possible password (or key phrase or whatever it's called in your particular case) until by sheer chance you stumble upon the correct one.  Some programs will just throw up an error message if you try a wrong password, but some will go ahead and try to decode the data anyway.  If you're lucky, an incorrect password may be close enough for some of the data to come through in the clear, giving you an indication that you are close and allowing you to zero in on the actual password.

A phrase which comes up when discussing brute force attacks is the "heat death of the universe."  This means something like "Sure, you can try every possible password, but it will take so long that the universe itself will end before you can finish."  In other words, it would take so long to brute force guess the password that you don't have to worry about it even as a possibility.  The problem with that assessment is that computing power roughly doubles every time you change your underwear.  My first computer had 16 KB (that's kilobytes) of RAM and ran at a whopping 1.7 MHz.  I'm typing this on a computer with 16 GB of RAM and 8 cores running at 3.6 GHz each.  I have literally a million times more memory.  Going by clock speed alone, I'm running roughly two thousand times faster, or something like seventeen thousand times faster if all 8 cores stay busy.  This means a calculation that would have taken 100 years on my first computer would take a couple of days on my current one.  "Heat death of the universe" suddenly translates as "next Tuesday."
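For what it's worth, the ratios are easy to check.  The numbers below come straight from the specs quoted above; the all-cores multiplier is of course a crude best case (real workloads rarely scale perfectly):

```python
# Sanity-check the speedup from a 1.7 MHz machine to 8 cores at 3.6 GHz.
old_hz = 1.7e6               # first computer: 1.7 MHz
new_hz = 3.6e9               # current computer: 3.6 GHz per core
cores = 8

clock_ratio = new_hz / old_hz        # speedup from clock speed alone
total_ratio = clock_ratio * cores    # crude best case with all cores busy

century = 100 * 365.25 * 24 * 3600   # 100 years, in seconds
days = century / total_ratio / 86400

print(round(clock_ratio))  # → 2118
print(round(total_ratio))  # → 16941
print(round(days, 1))      # → 2.2
```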

Have you ever heard of a video game designer who put in options that even the most powerful commercially available video cards couldn't render?  It's because they knew that by the time they finished the game and it was ready to ship, the next generation of video cards would come out and be able to handle it.

The other problem is that if they know you used a famous encryption program to encrypt the data, they can just fire up said encryption program and force you to hand over the password.  They don't even have to hold a gun to your head.  Suppose the police are investigating a crime.  You are of course completely innocent, but in investigating you they have discovered some encrypted files on your computer.  These files are completely unrelated to the crime at hand, but you don't want to turn over the password because the files are personal.  If you decide to keep your big mouth shut, the police can simply have the courts cite you for contempt and lock you away in jail until you change your mind.

Now, there are encryption programs which admit the possibility that you might be forced to give up your password.  They allow you to set up multiple levels of encryption.  You can use one password to encrypt and recover certain files, and a second even more secret password to encrypt and recover the really secret hidden files.  After a certain amount of token resistance, you can give up the standard password which will decode some decoy files you are only pretending to want to keep secret.  The really secret files are still hidden.  The problem, of course, is that to use this method you have to have that encryption program installed on your computer, and the possibility of really secret hidden files is mentioned right there in the README file.  We're right back to the problems of brute force and guns to the head.

The presence of obviously encrypted files is a bit of a red flag.  Any geek worth his soldering iron will recognize encrypted files as encrypted files, and probably recognize which type of encryption is used.  You can hide your encrypted files by using steganography, which is any of various methods in which one file can be hidden inside another file.  You can hide a sound file inside an image file and vice versa, for example.  The problem again is that, like encryption programs, the presence of steganography programs on your computer will tip off the geeks that you've hidden something, and once again you're in jail or have a gun pointed at your head.

Obviously, the solution is to encrypt your data in a nonstandard way that doesn't call attention to itself as an encrypted file.

Enter GFA BASIC 32 or your obsolete/obscure programming language of choice.

GFA BASIC 32 has the crypt command, which encrypts a string using a key (max 116 characters = 924 bits according to the help file) which seeds a random number generator.  This drives a substitution cipher, meaning each single byte of the string is replaced with another single yet seemingly random byte.  The process is symmetrical in that using the same command with the same key will reverse it.  Think of it as ROT13 on steroids.

This is a string to be encoded.  It contains SECRET information!

Processed through the crypt command, it looks something like this:


The result is the same length as the original, but I think some of the characters didn't copy properly.  The key space tops out at 256^116 possibilities, but if you don't know the key (which in this case is "GFA BASIC 32" without the quotes), a short passphrase like that one leaves far fewer combinations to try, which is ripe for a brute force attack.  However, this implies that you a) know what GFA BASIC 32 is, and b) know the crypt command.  Now, if the encrypted file is in your GFA BASIC 32 folder next to a program called MyFirstCrypt.G32, that will be pretty easy to figure.  However, if there's no such program and the encrypted file is hidden away somewhere (more on this later), this may never occur to them as a possibility.
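GFA's help file doesn't document how crypt scrambles bytes, but the described behavior (the key seeds a random number generator, output is the same length as the input, and the same command with the same key reverses it) can be approximated in a few lines of Python.  This XOR-keystream toy is my own guess at the shape of the thing, not GFA's actual algorithm:

```python
import random

def toy_crypt(key, data):
    """Hypothetical stand-in for GFA BASIC 32's crypt: the key seeds a
    random number generator, and each byte is XORed with a seemingly
    random byte.  XOR makes it symmetric: run it again with the same
    key and you get the original back."""
    rng = random.Random(key)
    stream = bytes(rng.randrange(256) for _ in range(len(data)))
    return bytes(b ^ s for b, s in zip(data, stream))

secret = b"This is a string to be encoded.  It contains SECRET information!"
scrambled = toy_crypt("GFA BASIC 32", secret)
restored = toy_crypt("GFA BASIC 32", scrambled)
```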

GFA BASIC was designed in many ways to be similar to Visual Basic.  VB has, I'm told, ways to access the standard Microsoft Encryption API which has encryption methods much more secure than GFA BASIC.  Why not use VB then?  Because, as mentioned before, it's standard.  If you use any standard encryption method, it's likely to be recognized, and once it's recognized, out comes the brute force and/or guns.

Mentioned in the crypt help file is the pack command.  The pack command compresses data similar to the ancient ARC program, which most of you have never heard of.  Consider it the ancestor of ZIP.  Running our "This is a string..." example through the pack command gives us a result something like this:

PCk1= @ 8 @ ../ð.D..OõE.1¡/ÑOc.ovF_ÇuLôûK.¿¡**ÝÊ’D.óá6.;(@ÂÄ%¶ž›¿M

Now, technically, this isn't encrypted so much as compressed at the bit level, but the effect is pretty much the same.  You'd never look at that and think it had anything to do with the data we started out with.  However, the geeks among you will note that the compressed info starts with "PCk1" from which you can probably guess "pack method 1" without too many hints.  Also, if the data being compressed is too short or too complicated, pack will give up and output the original data, so watch out for that.

Of course, you're probably thinking that it's a bit overcomplicated to use a cryptic command in a discontinued language to compress data when you can just use the ARC I mentioned or ZIP or any modern equivalent.  Again, the reason is that the GFA BASIC 32 pack command is nonstandard.  It's like ARC, but it isn't ARC.  You can't just use ARC or ZIP or another modern archive program to uncompress it.
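To make the behavior concrete, here's a pack-alike sketched with Python's zlib.  The magic tag and the give-up-on-incompressible-input fallback mirror what the help file describes; the compression format itself is of course zlib's, not GFA's:

```python
import zlib

MAGIC = b"PCk1"  # GFA's pack output reportedly starts with this tag

def toy_pack(data):
    """Pack-alike: compress and tag with a magic header, but hand back
    the raw data when compression doesn't actually shrink it (as pack
    does for input that's too short or too complicated)."""
    packed = MAGIC + zlib.compress(data, 9)
    return packed if len(packed) < len(data) else data

def toy_unpack(data):
    if data.startswith(MAGIC):
        return zlib.decompress(data[len(MAGIC):])
    return data  # was stored uncompressed
```

Just as with the real thing, anyone who spots the four-byte tag has a big head start on reversing the rest.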

(As an aside, back in my university days, a group of computer science students were working on cryptanalysis.  A few underclassmen like me were asked to encrypt an example text file on the supplied disk using any method we saw fit.  The upperclassmen would take the encrypted file and figure out how we had encrypted it.  They already knew what the example file said, so I thought it wasn't a fair challenge, but I was game.  Using the recently released pkzip program, I compressed the file.  I snipped the "PK" off the start of the file.  I added a bunch of junk bytes to the end of the file to pad it out to the size of the original text file.  I might have reversed the whole file or done something else like that using one of the "standard" programs which was included on the supplied disk.  I copied the encrypted file back to the disk and turned it in.  If I remember right, at the end of the semester I was accused of cheating by turning in a "random" file from which the original could in no way be recovered.  I had to show the upperclassmen how to recover the original text in front of the grad student running the contest.  Re-reverse the file, add back the "PK" identifier, pkunzip the file.  The upperclassmen still thought it was cheating because pkzip wasn't one of the "standard" encryption tools supplied on the disk.)

The pack command isn't symmetrical, but it takes no genius to figure out that the command to reverse it is called unpack.

Another command mentioned in the help file is UUEncode, which uses standard methods to uuencode the specified data.  Since it's standard, we won't use that.  You can reverse it with UUDecode, by the way.

Another command mentioned in the help file is MiMeEncode, which uses standard methods to mime64 encode the specified data.  Since it's standard, we won't use that.  You can reverse it with MiMeDecode, by the way.

However, mentioned in the MiMeEncode help file entry is "There are also two keywords _MiMeEncode and _MiMeDecode which seem to be unrelated but correlate with each other as do MiMeEncode and MiMeDecode BUT do not perform the same conversion with the latter being the correct versions for MiMe64."  In other words, they do a similar conversion but in a nonstandard way.  Nonstandard being the word of the day, let's check that out.  Once again, we start with the following:

This is a string to be encoded.  It contains SECRET information!

Running it through the standard UUEncode we get this:


Running it through the standard MiMeEncode we get this:


But running it through the nonstandard _MiMeEncode we get this:


Now, any geek worth his salt can probably recognize these examples as UUEncoded and MiMeEncoded text.  I'll bet that there are some people out there who could do the conversions in their heads.  But the last example is a nonstandard implementation.  Even if it's recognized as some form of MiMe64, using standard decode methods yields this:

R.šÌ.šÌ.˜€ÁÝÊY›œ.¼.˜”.™¸Ý›‘Q™¸ ˆ%.ȏٛÐUš¸ÑÈM].IQ.‚I››Ùœ´Q§Ù›„

And once again we're back to completely unreadable nonsense.
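A nonstandard MiMe64 is easy to imagine: same mechanics as base64, different alphabet.  This sketch (my own construction, not GFA's actual _MiMeEncode table) shows why a standard decoder produces nonsense:

```python
import base64
import random

# Same mechanics as base64, but with a shuffled alphabet.  The seed is
# fixed so the "nonstandard" table is repeatable; the table itself is a
# made-up example, not the one GFA BASIC 32 uses.
STD = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
_shuffled = list(STD)
random.Random("nonstandard").shuffle(_shuffled)
ALT = bytes(_shuffled)

def alt_b64encode(data):
    return base64.b64encode(data).translate(bytes.maketrans(STD, ALT))

def alt_b64decode(data):
    return base64.b64decode(data.translate(bytes.maketrans(ALT, STD)))
```

Feed the output of alt_b64encode to a standard base64 decoder and you get gibberish; only someone who knows (or reconstructs) the shuffled table gets the plaintext back.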

So, let's combine all these nonstandard techniques available in GFA BASIC 32 into a simple program that we can type from memory.  If we don't save a copy of the program, it's not there to be found by the police or whoever, so there's no evidence that a GFA BASIC 32 program was used to encrypt anything.

a$ = "This is a string to be encoded. It contains SECRET information! The fate of the world depends upon this."

b$ = _MiMeEncode(a$) ' use the nonstandard MiMe64 encoder

c$ = Pack$(b$,2) ' compress the result using nonstandard archiver

d$ = Crypt$("GFA BASIC 32",c$) ' encrypt the result using nonstandard encryption

I changed our secret message because the original one turned out to be just short enough and just complicated enough in _MiMeEncode format to cause pack to barf back the string in its original form.  Making it a little longer surprisingly made it pack better.  Anyway, we start off with this:

This is a string to be encoded.  It contains SECRET information!  The fate of the world depends upon this.

Running it through _MiMeEncode gives us this:


Running this result through pack gives us this:

PCk1Š ” … ” .1ORÒ1¸?(ee(11‘:?.¡.4..o..A...49Eph"84Ï%I*Wÿ.†£B.çé¡­o1Ú¾Y_q
Ä’Šg|.œ.}K*Ø    (í0çÚßo¼I 09£"oYqŠ&çº^zÂMAwJs].Ò\쬋ʥ޷mI….Hûá¨Daú

And running that through crypt gives us this:

#¾ñ.Ì3Ð9ÛÍ娑+Ï~é@ms8.    š,³.Ui«v.Î÷–œ¸ÉFf.ì.Ü`¼r \–¹U?·Ð.\.Á¶...àà.~œº€šË5ŸpÝuŒI...

This gives us an encrypted file which isn't obviously any known encryption.  Even if through brute force (which requires at least some understanding of the encryption method used) someone can guess the right key used in the crypt statement, it will be buried in billions of other results.  Since it's not plain text, it probably won't be recognized.  If they recognize it as some kind of compressed data, they'll have no idea how to decompress it, unless they are familiar with GFA BASIC 32 and the pack command.  Even if through brute force they manage to uncompress the data, the correct data will be buried in billions of other results.  If they do recognize the data as MiMe64, they'll have to figure out a nonstandard encoding scheme, which will be the easiest step in the whole process because the results will be plain text.

If the original input wasn't plain text, but rather a picture or sound or video file or the like, or a word processor document that already uses its own compression and/or encryption, well that just makes it all the harder to recognize the original file among the zillions of other possibilities while trying to brute force the problem.
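For the curious, the whole three-layer scheme translates readily to other languages.  Here it is in Python, with stand-ins for each GFA command (standard base64 for _MiMeEncode, zlib for pack, a key-seeded XOR for crypt).  All three substitutions are my own, so this illustrates the layering, not GFA's actual output:

```python
import base64
import random
import zlib

def xor_crypt(key, data):
    # Key-seeded XOR keystream: symmetric, so it encrypts and decrypts.
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

def encode(key, plaintext):
    step1 = base64.b64encode(plaintext)   # stand-in for _MiMeEncode
    step2 = zlib.compress(step1, 9)       # stand-in for pack
    return xor_crypt(key, step2)          # stand-in for crypt

def decode(key, blob):
    step2 = xor_crypt(key, blob)          # same key undoes the XOR
    step1 = zlib.decompress(step2)        # stand-in for unpack
    return base64.b64decode(step1)        # stand-in for MiMeDecode

msg = (b"This is a string to be encoded.  It contains SECRET information!"
       b"  The fate of the world depends upon this.")
blob = encode("GFA BASIC 32", msg)
```

The decode side simply applies the three inverse steps in reverse order, which is exactly the property the article relies on being able to retype from memory.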

Now for the fun part.  If you've got a file laying around called encrypted.txt, the police or whoever else is snooping around on your computer is going to know something is up.  We may have made it impossible to brute force the file because they have no idea how it was encrypted in the first place, but there's nothing to keep them from putting a proverbial (or literal) gun against your head and forcing you to decrypt the information.  We need to hide the encrypted file.  The easiest way to do this is to stick the string "GIF89a" on the front of the file, save it as unassuming.gif, and store it in a folder full of a hundred other gif files.  It will help if a few of the other gif files in that folder were also "corrupted while downloading."
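The disguise-and-recover step is trivial in any language.  In Python it might look like this (function names and file path are hypothetical):

```python
GIF_MAGIC = b"GIF89a"

def disguise(blob, path):
    """Prepend a GIF signature so casual inspection sees a 'corrupt' image."""
    with open(path, "wb") as f:
        f.write(GIF_MAGIC + blob)

def recover(path):
    """Snip the fake GIF header back off to get the encrypted blob back."""
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(GIF_MAGIC):
        raise ValueError("not one of our disguised files")
    return data[len(GIF_MAGIC):]
```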

Now, exit the GFA BASIC 32 editor without saving the program.  Delete the original file.  Delete any temp or backup or shadow files.  Defragment your drive.  That should make it impossible to recover any scraps of info.  You might want to set up a RAM drive and do all your work on that.

Any police tech squad or snooping spouse or J. Random Hacker looking for incriminating evidence won't find any.  If they find a few mangled gif files mingled in a collection of hundreds, they won't think "Aha!  Encrypted data!" but rather "This guy still uses gif files?"

But when you need the data, it's a simple matter of finding your unassuming.gif file, snipping off the "GIF89a" at the front, writing a quick GFA BASIC 32 program using the crypt, unpack, and MiMeDecode commands in the right order, and you've got your original data again.  I've tried it and it takes about three tries to be able to write the encrypt and decrypt programs from memory.  It will probably take less effort than trying to remember that huge random password that the real encryption program wants you to use.

This method is called "security through obscurity."  It's not that it's better in a technical sense than the latest greatest encryption method, but it's better in that no one will ever figure it out because they've never heard of the techniques being used.  For maximum effect, don't do any of the things I've described here.  Come up with your own unique technique.  Try it against some local college students.  It's a blast!

2015.06.11 Windows File Sharing Follies

posted Jun 11, 2015, 2:04 PM by Troy Cheek   [ updated Nov 17, 2015, 2:14 PM ]

For a long time, I was the only person in my family who owned computers.  I had several networked together, dedicated to searching for aliens, recording and serving television shows, file storage, etc.  Then, for a while, we had a single "family" computer for everybody.  Now, it seems like every family member and visiting friend has multiple computers, laptops, tablets, and pads of some sort or another.  And, of course, everybody has files they'd like to share with at least one other person.  Hence, the unsecured LAN (Local Area Network) I've set up.  Sure, having an unsecured network is a bit of a security risk, but we live way out in the boonies.  If you're close enough to leech off our wireless, you're close enough to be picked off with a shotgun.

Most of the computers are running some kind of Windows, Vista or 7 mostly.  Vista gets a bad rap, but once all the patches and service packs came out, it worked pretty darn well.  Unfortunately, it wasn't patched into a usable shape until about a week before 7 came out.  7 is what convinced me to finally give up on XP and 2000, though I think I've got a "borrowed" laptop circulating among friends which still uses one of those.

As I'm usually the source of most shared files, it falls upon me to set the sharing properties.  To share a file in most Windows versions, it's a simple matter of right-clicking on the drive or folder you want to share, selecting "share" in the drop-down menu, optionally setting a few options, and clicking "OK" or "Apply."

Too bad that never works.

When I do some variation of what I just described, I get one of the following results:
  • The shared folder isn't visible from other computers.  It's like it was never shared.
  • The shared folder is visible, but trying to open it gives an access violation error.
  • The shared folder is visible and can be opened, but some or all of the files in it aren't visible.
  • The shared folder is visible and can be opened, but trying to open the files gives an access violation error.
  • The shared folder is visible and can be opened and all files can be accessed.
Now, right now some Windows expert is shaking his head, saying "Well, Uncle Troy, if you'll just walk me through exactly which options you're setting, I'll tell you exactly what you're doing wrong in each of those cases and we'll have you up and sharing in no time!"

The problem is, there is no "each of those cases."  It's not like I set option A and I have problem 1, set option B and have problem 2, etc.  The thing is, I can set the exact same options on three folders and get three different results.  I'm going to repeat that:  I can set the exact same options when sharing folders and get different results when I try to access those folders.

One of the things a good user interface is supposed to strive for is consistency.  If the user does the same thing more than once, he expects the same results every time.  That sounds reasonable, doesn't it?  If I want to share a few folders, I should be able to click click click the same way for all three and get the same results.  I've been sharing files the same way on the same computer running the same version of Windows for years, setting the same options every time.  All the folders should be shared (or not shared) the same way.

Why doesn't that happen?

Let's say I get a new computer and want to have access to some files from an old computer.  It's a simple matter of sitting down at the old computer, sharing a few drives or folders, then sitting down at the new computer, navigating through the network to the old computer, opening up those drives or folders when I need the files.  Except, as explained before, some folders aren't visible or can't be opened, some of the files inside aren't visible or can't be accessed.

I try going back to the old computer and trying to copy files the other way, only to find out that the shared folders I just set up on the new computer are likewise screwed up.

I end up sitting down at the old computer, sorting through all the files on drive E: and drive F: and copying the ones I want to drive D: folder UNCLETROYSHARE because somehow, some way, that's the only folder the new computer can access.  All of which kind of kills the whole idea of sharing all the old folders and copying or just accessing the various files as needed from the new computer.

All I want to do is set up the occasional folder which I can share with everyone.  Is that so hard?

And, it turns out, it is.

I recently discovered that you can have Windows sharing settings and NTFS sharing settings.  What is NTFS?  That's the NT File System, used by every Windows version built on the NT kernel, which means the various server versions throughout history and every consumer version starting with 2000/XP.  (I was always told the NT stood for Network Terminal, since the NT versions were designed for network servers, though Microsoft says it stood for New Technology.)  Apparently, setting sharing in Windows does not automatically pass those settings along to the file system, causing you to have to dig around and set more settings to actually share the file, though sometimes it does automatically pass those settings along to the file system, so you don't.

Furthermore, it turns out that part of the problem was that I was trying to share files with everyone.  Now, Windows has access groups with names like Administrators, Users, Guests, etc.  There are access groups which apply to all Users accessing a file from a certain computer, all Users who have logins on a certain computer even if they're accessing files from another computer, etc.  It can get very complicated trying to determine not only what access group has what access level, but what access group a particular User is a member of at a particular time.

Unbeknownst to me, Windows actually has an access group called Everyone.  Any User who is a member of any other group or no group at all is a member of the group Everyone.  If a folder is set to be shared with Everyone, then anyone of any group can access it.

Why wasn't that happening?

Well, it turns out, if you tell Windows to share a file, it doesn't automatically assume you want to share it with Everyone.  Sometimes it will, sometimes it won't, even if you're doing the same click click click you always do.  Or, at least, when I do it, it doesn't.  When I do it, I get this:
  • Everyone is not added to the list of groups who have access.
  • Everyone is added to the list, but not actually given any access.
  • Everyone is added, but only given read or some other limited access, perhaps not even access to list the files in the folder.
  • Everyone is added and given the access level I actually asked for.
I discovered this only recently, and going back to various folders on various computers that I've been trying to share unsuccessfully for literally years, I've discovered that the problem has always been that Everyone wasn't listed as a group having access, or was listed but assigned no/limited access.

Sure, it's all my fault.  I've been doing it wrong.  I get that.  The problem I'm having is that I've been doing it the same way all these years.  If I'm always doing it the same way and that way is wrong, the question isn't why I can't share files.  The question is, how was I ever able to share files in the few cases where it did work?

Update November 17, 2015:  I bought an external drive the other day and had the same problem yet again.  I re-read this and realized I was a little vague on how to actually solve the problem.  With Windows Vista/7 (the only systems I have on hand to test), when you want to share a folder: right click, select Properties, select the Sharing tab, select Advanced Sharing, check the Share this folder checkbox, then click Permissions.  If the user Everyone doesn't exist, Add it.  Set Everyone to the permissions you want.  Now go to the Security tab.  Add Everyone if it doesn't exist and set the permissions you want.
