Restoring Vista

Popular wisdom can be frightening. Past errors? Normally unrecoverable, better learn to live with their consequences. Second chances in the future? Rare, don’t count on them. Present predicament? Uncertain at best, so grin and bear it. In the long run, time is an entropic roller coaster where some local ups inevitably tend to an overall down and a halt. This is life at its nakedest. Blackish as the picture might be, it has a pinch of serenity in it. Pessimists can only be pleasantly astonished, never bitterly disappointed.

It is with this wisdom that we used to approach Microsoft products and especially Windows. Install some unusual program, press the wrong series of keys, or double-click too nervously on some buttons and down you went with your computer crashing. And if you screwed up your operating system you knew that it was like a sour marriage: dead was the past idyll, you could only hope to divorce and restart from scratch. Cut your losses, unplug, press the off button, format the hard disk, get a new life. Sometimes, it was not too late. Many users thought all this exceedingly annoying, even preposterous. But it had that taste of real life in it which no digital hype could hide: irreversibility. You, your hardware and your software shared the same wagon. The machine was as embedded in time as its user. And as with your car or relations, fixing was something you did by moving forward, the only direction allowed.

Something then changed. System Restore was introduced in Windows XP. Now, you could roll back the system files and the registry to those from a previous date, when the system was known to work properly, without losing any personal file (say documents or e-mail messages). Magic. So, these days, is your computer actually unhinged from time? Not quite.

The truth is that your software lives in parallel universes. Shadow paths are created whenever there is a significant system event. Made a mistake? Simply switch to another possible world where that mistake has not occurred. Nothing terribly new then, technologically: it’s just that now we have so many gigabytes in our hard disks that we can keep piling up possible worlds as we like. Reversibility is achieved in terms of never-closed, always-available alternative tracks. It is not time travelling, it is the power of memory, but who cares? The outcome is indistinguishable.

The new MS Vista makes restoring even easier and more powerful: “In Windows Vista, System Restore allows recovery from a greater range of changes than in Windows XP. The file filter system for system restore used in previous versions of Windows is replaced with a new approach: Now, when a restore point is requested, a shadow copy of a file or folder is created. A shadow copy is essentially a previous version of the file or folder at a specific point. Windows Vista can request restore points automatically, or do so when you ask. When the system needs to be restored, files and settings are copied from the shadow copy to the live volume used by Windows Vista. This improves integration with other aspects of backup and recovery and makes System Restore even more usable.”
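The shadow-copy mechanism the quotation describes can be sketched in a few lines of Python. This is a toy model of the idea, not Vista’s actual implementation: creating a restore point freezes a snapshot of the live state, and restoring copies that snapshot back over it.

```python
import copy

class ToyRestore:
    """Toy model of restore points: snapshot the live state, roll back on demand."""
    def __init__(self):
        self.live = {}            # the "live volume": filename -> contents
        self.shadow_copies = []   # list of (label, frozen snapshot) pairs

    def create_restore_point(self, label):
        # A shadow copy is a frozen previous version of the files.
        self.shadow_copies.append((label, copy.deepcopy(self.live)))

    def restore(self, label):
        # Copy files and settings from the shadow copy back to the live volume.
        for name, snapshot in self.shadow_copies:
            if name == label:
                self.live = copy.deepcopy(snapshot)
                return True
        return False

system = ToyRestore()
system.live["registry"] = "clean"
system.create_restore_point("before install")
system.live["registry"] = "corrupted by a dodgy driver"
system.restore("before install")
print(system.live["registry"])  # -> clean
```

The essential point survives the simplification: nothing is ever overwritten for good, so every past state remains an always-available alternative track.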

How tempting. With the opportunity of jumping to another possible world at no cost, whenever something goes wrong, are we going to be more irresponsible? Imagine a Life Restore button. Would you be more reckless? Or would you learn from past mistakes and get things right?

One thing seems predictable. One can easily imagine System Restore repeating the same file restore procedure again and again, like a broken (vinyl) record. You will be trying to leave that possible world, but System Restore will push you back there after every reboot. You will be trapped in some digital Punxsutawney, like Phil (Bill Murray) in Groundhog Day. Catchy name for the next Vista virus?

Comments

  1. Version Control: A Leibnizian Perspective

    As I see it, the transition Microsoft is making in Vista's system restore is from using a backup-style restore feature to a version-control-style restore. Instead of a monolithic backup of the entire system, where a snapshot of the entire OS is captured and stored, in Vista individual portions of the file tree are versioned separately.

    Version control is not new. Software developers have used it for (dare I say) decades. CVS (Concurrent Versions System), Subversion, and Arch represent just three of the popular Open Source variants, and there are at least a half dozen commercial products, as well. Microsoft has its own, in fact – it's called Visual SourceSafe. What a version control system does is track changes of pieces of information (usually organized into files) over time, providing a user interface for “checking out” and “checking in” files. Each time a file is checked in, a new copy is made of the file (in its current state) in the version control repository.

    As a file changes and is repeatedly checked in, versions of the file are saved in the repository. Each version is labeled with a version number. Should you want to revert to an earlier version, you can simply check out the older version by specifying the version number you want. (Of course, there are also features for browsing through the repository in order to find out what versions are available.)
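    The check-in/check-out cycle just described can be mimicked with a minimal repository class. This is only a sketch of the concept, not how CVS or any real system stores its data:

```python
class Repository:
    """Minimal version-control repository: each check-in stores a numbered copy."""
    def __init__(self):
        self.versions = {}   # filename -> list of successive contents

    def check_in(self, filename, contents):
        self.versions.setdefault(filename, []).append(contents)
        return len(self.versions[filename])   # version number, starting at 1

    def check_out(self, filename, version=None):
        history = self.versions[filename]
        if version is None:        # default: the latest version
            version = len(history)
        return history[version - 1]

repo = Repository()
repo.check_in("my_paper.txt", "first draft")
repo.check_in("my_paper.txt", "second draft")
print(repo.check_out("my_paper.txt"))      # -> second draft
print(repo.check_out("my_paper.txt", 1))   # -> first draft
```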

    But in the standard version control system, it is also possible to maintain parallel versions of a single file or collection of files. This is called “branching.” Say, for example, that you have a paper that you want to turn into two papers, each tailored to a different audience. One way of doing this in the version control system would be to create a branch. A branch takes the current version of the file, and puts it on parallel versioning tracks. In crude ASCII art, let's say that we have a paper, A, that we want to branch into two papers, A' and A''. Both will begin from A, but each will diverge in a different direction. To do this, we introduce a branch:

    A --- A'
    |
    A''

    We can then create a second branch (notated by the vertical bar). Now, beginning from the same starting point, A, we can create two different papers: A' and A'' – both of which have common ancestry in A, and both of which get stored separately in the version control system. From that point on, I can work on the two as separate instances of the same paper. Modifications to A' have no impact on A'', and each can be versioned separately, all the way back to the branching point of A.

    Incidentally, the name of the paper remains the same in both branches. Name is not used as a distinguishing mark between branches. Version number and other metadata do that. That is, if A had the file name 'my_paper.txt,' A' and A'' would both also be called 'my_paper.txt'. (Leibniz, in his letters to Arnauld, runs into a similar naming issue when talking about the identity of indiscernibles. He has to explain to Arnauld the difference between the actual St. Peter, and possible St. Peters, who, by the identity of indiscernibles, cannot actually be properly called versions of Peter at all. If only Leibniz had had version numbers. But I digress....)

    While the utility of branching in paper writing may be disputable (I certainly don't use branching this way), branching is very useful when writing software. For instance, it allows one to maintain an old version of a software package (say, version 1.0) while writing a new version (2.0). With branching, I can continue to fix bugs in 1.0 (making versions 1.1, 1.2, 1.3, etc.) while working on the newer code in the 2.0 branch separately.

    The net effect of branching and versioning is that the version control system can be used to create elaborate trees in which, for any portion of any tree, one can return to any arbitrary point in the history stored within the version control system.
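    Branching can be grafted onto the same toy idea: a branch simply starts a new track from an existing version, sharing its ancestry up to the branch point. Again, this is an illustrative sketch, not how CVS or Subversion actually store branches:

```python
class BranchingRepo:
    """Toy repository where each branch keeps its own version history."""
    def __init__(self, initial):
        self.branches = {"trunk": [initial]}   # branch name -> list of versions

    def branch(self, source, new_name, at_version=None):
        history = self.branches[source]
        point = at_version if at_version is not None else len(history)
        # The new branch copies the ancestry up to the branch point, then diverges.
        self.branches[new_name] = history[:point]

    def check_in(self, branch, contents):
        self.branches[branch].append(contents)

    def head(self, branch):
        return self.branches[branch][-1]

paper = BranchingRepo("A")            # the common ancestor
paper.branch("trunk", "audience-2")   # put A on two parallel tracks
paper.check_in("trunk", "A'")         # diverge on the original track...
paper.check_in("audience-2", "A''")   # ...and on the new one
print(paper.head("trunk"), paper.head("audience-2"))  # -> A' A''
```

    Modifications on one branch leave the other untouched, yet both histories trace back to the same A.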

    Admittedly, that was all a lengthy setup for my main point. What if we look at this from a Leibnizian perspective? What Vista offers is (as you pointed out) a route for an individual to choose a different possible world. In Leibnizian terms, this would be like giving a monad a very limited ability to make choices in a way not constrained by necessity. I want to mention a less limited scenario.

    What if we look at the world of the object-oriented programmer using a version control system to store code? The programmer is engaged in creating an environment. On some level, at least, we could rightfully call it a world, as it is composed of rule-governed objects that interact (in a closed environment) over time. But the programmer creates the system outside of the realm of time in which these objects interact. (That is, the program's time is its runtime, for the objects in a program only interact while the program is running. In fact, properly speaking, the objects only come into being when the program starts.)

    Further, the programmer has a different perspective on the world she or he creates: the programmer has analytic knowledge of everything in the code. Looking over the code in its pre-compiled state, the programmer can have complete knowledge of how the program will work when it is compiled and executed. She or he can see when objects will be created, what will happen to them while they exist, and under what conditions they will be destroyed. The programmer creates the rules for the system, as well as the classes (general descriptions of what will, at runtime, be used as the 'form' for the objects, which are essentially instantiated classes). In fact, I have even observed programmers put 'miracles' into their code, wherein under specific (rarely occurring) circumstances, the other rules of the system will be temporarily suspended in order to accomplish a specific and more important goal. Of course, programmers don't call these 'miracles'... they call them 'hacks.'

    What is more, the programmer, through branching and versioning with the version control system, can create multiple possible worlds (the world is only actualized when it is run), where each world represents a set of compossible (or co-possible) code. That is, in each branch, the classes (and supporting code) are designed to work together within the context of that branch. Thus, the programmer can try out various ways of writing the code, making subtle changes to this or that world, and examining which method will work out best.

    Ultimately, of course, the programmer will select the best branch – the best possible world, as it were – and dub that the official software.

    A being who engineers worlds, having complete analytic knowledge of those worlds, chooses which world to instantiate, and stands outside of the temporal sequence of the world that is made actual... there's only one being in Leibniz's metaphysics who can do that....

  2. Hello, here are some comments on the notion of possible worlds:
    The notion of possible worlds can be traced back to Lucretius, Averroës and John Duns Scotus. The scholar best known for it is Leibniz, who believed that the best of all possible worlds would actualize every genuine possibility.
    Leibniz also called for the creation of an empirical database as a means of furthering all the sciences. His characteristica universalis, calculus ratiocinator, and a "community of minds"—intended, among other things, to bring political and religious unity to Europe—can be seen as distant unwitting anticipations of artificial languages (e.g., Esperanto and its rivals), symbolic logic, even the World Wide Web.
    A systematic theory derived from possible worlds semantics was first introduced in the 1950s work of Saul Kripke and his colleagues.
    The term "possible worlds semantics" is often used as a synonym for Kripke semantics, but this is widely regarded as a mistake: Kripke semantics can be used to analyse modes other than alethic modes (that is, it can be used in logics concerned not with truth per se; for example, in deontic logic, which is the logic of obligation and permission); and Kripke semantics does not presuppose modal realism, which the language of possible worlds arguably presupposes.
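    Kripke semantics, incidentally, is concrete enough to compute with. The sketch below is my own illustration, not drawn from any particular text: a model is a set of worlds, an accessibility relation, and a valuation saying which atoms hold where; "necessarily p" (□p) holds at a world iff p holds at every accessible world, and "possibly p" (◇p) iff p holds at some accessible world.

```python
def box(p, world, access, valuation):
    """Necessity: p holds at every world accessible from `world`."""
    return all(p in valuation[w] for w in access.get(world, []))

def diamond(p, world, access, valuation):
    """Possibility: p holds at some world accessible from `world`."""
    return any(p in valuation[w] for w in access.get(world, []))

# A tiny model: w1 can "see" w2 and w3.
access = {"w1": ["w2", "w3"]}
valuation = {"w1": {"p"}, "w2": {"p"}, "w3": set()}   # which atoms hold where

print(box("p", "w1", access, valuation))      # -> False (p fails at w3)
print(diamond("p", "w1", access, valuation))  # -> True  (p holds at w2)
```

    Reading the accessibility relation deontically ("w2 is a permissible alternative to w1") rather than alethically is exactly the reinterpretation mentioned above.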
    The concept of possible worlds has sometimes been compared with the many-worlds interpretation of quantum mechanics; indeed, they are sometimes erroneously conflated. The many-worlds interpretation is an attempt to provide an interpretation of nondeterministic processes (such as measurement) without positing the so-called collapse of the wavefunction, while the possible-worlds theory is an attempt to provide an interpretation (in the sense of a more or less formal semantics) for modal claims. In the many-worlds interpretation of quantum mechanics, the collapse of the wavefunction is interpreted by introducing a quantum superposition of states of a possibly infinite number of identical "parallel universes", all of which exist "actually", according to some proponents. The many-worlds interpretation is silent on those questions of modality that possible-world theories address.
    Major differences between the two notions, aside from their origins and purposes, include:
     The states of quantum-theoretical worlds are entangled quantum mechanically while entanglement for possible worlds may be meaningless;
     according to a widely held orthodoxy among philosophers, there are possible worlds that are logically but not physically possible, but quantum-theoretical worlds are all physically possible.
    Given that both possible-world theories and quantum many-world theories are philosophically contentious, it is not surprising that the precise relations between the two are also contentious.
    We may conceive of an approach to metaphysics called multism which carries a monist point of view to the fundamental multiplicity of the universe. A theory can admit a single primal substance, but allow for a mechanism of multiplicity. This multiplicity can be used in the explanation of subjective experience.
    As one can see, there is a long history of thinking and writing about possible worlds. Not all has been said about them; the question stays open for further investigation.

    Are we living in a world, our world or the world?
    Is it a world in a world?
    Is it our world in our world?
    Is it the world in the world?
    Is it our world in a world?
    Is it a world in our world?
    Is it a world in the world?
    Is it the world in a world?
    Is it our world in the world?
    Is it the world in our world?
    Is it our world in a world in the world?
    Is it our world in the world in a world?
    Is it a world in the world in our world?
    Is it a world in our world in the world?
    Is it the world in a world in our world?
    Is it the world in our world in a world?

    It’s my world in our world that is a world and not the world!


