I've consistently run into open source projects, various archives, and data that I just took for granted would be there, and subsequently been reminded that they can be taken away without warning. Now I save and maintain everything that is important to me myself, without relying on it existing on someone else's computer.
How does this differ from the deliberate saving mentioned in the article? I can't reliably tell which piece of data will turn out to be important; out of the whole collection, maybe a couple of percent has ever been called upon, but those few percent are very, very valuable.
How long should one maintain the copies, then? Well, the oldest record to have saved me a bit over $10K is well over 30 years old, while archiving it has cost an aggregate of a few dozen bucks. So I'd say just don't get rid of it.
dgunay 1 hour ago [-]
I don't delete things by default, but everything I might care about automatically gets backed up off-device. I have seen a lot of stress and turmoil from people needing to get data off their old devices and being unable to do so. At any given moment I would be comfortable throwing my phone off a cliff, in the sense that I wouldn't worry about losing data: anything of sentimental or practical value is backed up.
Similarly with Git, I rarely use stashes. If I have to switch contexts, anything I care about gets committed to a branch (and ideally pushed to a remote) or I blow it away.
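A minimal sketch of that flow, with made-up branch and commit names:

    # park work-in-progress on a real branch instead of `git stash`
    git checkout -b wip/parser-refactor
    git add -A
    git commit -m "WIP: parser refactor, tests not green yet"
    git push -u origin wip/parser-refactor   # survives a lost laptop

    # pick it back up later
    git checkout wip/parser-refactor

Unlike a stash, the branch lives on the remote, so it outlives the machine.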
hinkley 3 hours ago [-]
I have a different policy of transience, and that's not to use my work computer to store anything important. If it's important, it should be somewhere I can find it if my laptop takes a spill down the stairs, or somewhere others can find it if I win the lottery and don't show up to work one day.
I was already working toward this policy when I worked at a place where an entire batch of computers came with defective hard drives that died between 24 and 30 months after first power-on. We had six people rebuilding their dev environments from scratch in about a four-month period. By the time mine died, more than half the setup time was just initializing whole-disk encryption; everything else was in version control or the wiki, with turn-by-turn instructions that had been tested four times already.
kristel100 59 minutes ago [-]
This idea quietly stuck with me. We’re so obsessed with archiving everything, but sometimes knowing something won’t be permanent makes it more human, more careful.
AstralStorm 2 hours ago [-]
The policy results in a lot of wasted effort and inefficiency.
Even secure systems like Tails have an option for persistence for that very reason.
The lack of session management in the major OSes is in fact annoying, and the X11 session-management protocol is generally unsupported anyway.
True persistence, however, lies in storing the scripts and other elaborate setups in a properly labelled backup archive. Sadly there is no good site for sharing these to reduce the duplicated effort; no distributed archive, for that matter.
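A sketch of what "properly labelled" could look like, here in Python with a JSON manifest baked into the tarball (the paths and label are invented for illustration):

    import json
    import tarfile
    import time
    from pathlib import Path

    def archive_scripts(src_dir: str, out_path: str, label: str) -> None:
        """Bundle a directory of scripts plus a small manifest into a tarball."""
        src = Path(src_dir).expanduser()
        manifest = {
            "label": label,
            "created": time.strftime("%Y-%m-%d"),
            "files": sorted(p.name for p in src.iterdir() if p.is_file()),
        }
        # drop the manifest next to the scripts so it travels with them
        (src / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))
        with tarfile.open(out_path, "w:gz") as tar:
            tar.add(src, arcname=label)

    archive_scripts("~/bin", "shell-scripts.tar.gz", "shell-scripts-2024")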
I feel like I have the opposite. I always find that I need something I thought was transient again months later, so I have a policy of permanence. Everything gets saved/cached somewhere, and the only time it is deleted is when the cache is full.
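As a sketch, that policy is just an unbounded write-through store that evicts oldest-first only at capacity (the directory and size limit here are made up):

    from pathlib import Path

    CACHE_DIR = Path.home() / "hoard"     # hypothetical location
    MAX_BYTES = 500 * 1024**3             # evict only past ~500 GB

    def save(name: str, data: bytes) -> None:
        """Keep everything; delete oldest files only when the cache is full."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        (CACHE_DIR / name).write_bytes(data)
        files = sorted(CACHE_DIR.iterdir(), key=lambda p: p.stat().st_mtime)
        while sum(p.stat().st_size for p in files) > MAX_BYTES:
            files.pop(0).unlink()         # oldest goes first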