
DistributedWiki


distributed wiki

The high availability wiki project (aka the distributed wiki project) (aka ARGSS! the Automatic Regenerative Grokking Storage System!)

stream-of-consciousness thoughts on the distributed wiki project. Please help organize. previous thoughts at Community:DistributedEditing, WikiFeatures:FailSafeWiki, 2007-02-19 – should I move them here?

I would like to suggest that we should move them here – SamRose

My primary goal is a “fault-tolerant” wiki: a wiki where, if any one thing goes wrong (for example, if any one server is suddenly unplugged), none of the regular wiki users even notice.

Secondary goal: testability.

keywords

keywords: “twin”, “backup”, “failsafe”, “wiki”, “fault tolerant”, “data”, “data store”, “file system”, “data storage”, “continuous data protection”, “vault”, “high availability”, “brittle vs. resilient”, …

next steps

implications

goal: fault tolerant.

Therefore, distribute across at least 2 separate servers (preferably in different cities).

goal: testability.

Therefore, avoid encryption (at least in version 1.0), because that makes things more difficult to test.

Therefore, distribute simple copies of wiki pages (at least in version 1.0), rather than complicated erasure codes (ErasureCode).

possible implementation strategies for redirecting users to an active server when one fails

SteelEye Technology Inc. mentions “transparent failover”. How do they do that? Why don’t they list how much their products cost?
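One low-tech way to approximate “transparent failover” (almost certainly not how SteelEye does it) is a health-checking redirector: a tiny script probes each mirror and sends the browser to the first one that answers. A sketch in Python; the mirror URLs are invented placeholders:

    # health_redirect.py -- rough sketch, not production code.
    # Probes a list of mirror wikis and reports the first one that answers.
    import urllib.request

    MIRRORS = [
        "http://wiki-wichita.example.org/",   # hypothetical primary
        "http://wiki-tulsa.example.org/",     # hypothetical backup
    ]

    def first_live_mirror(timeout=3):
        """Return the first mirror that answers an HTTP request, or None."""
        for url in MIRRORS:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status < 500:
                        return url
            except OSError:
                continue          # unreachable or timed out -- try the next one
        return None

    if __name__ == "__main__":
        live = first_live_mirror()
        # A CGI wrapper would emit a "Location:" redirect header here instead.
        print("send users to:", live or "no mirror is up!")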

clock design principles

See clock design principles

Possible implementation strategies for distributing wiki pages

In any case, when a new server comes online:

  1. Somehow tell it the URL of some node in the network.
  2. It automatically downloads from that node some (or all?) URLs of other nodes in the network.
  3. It automatically downloads the latest version of some (or every) page in the wiki, in the normal way (i.e., each page is independently downloaded from the geographically closest node that has that page, with a brief query to a few more distant nodes to see if that is really the latest version).
  4. It edits its local copy of the “list of URLs of every node in the network” (is this a normal wiki page?) to add its own URL, pings some (all?) of the other URLs, and tags the ones that are currently not online with a frowny face.
  5. After that file is saved, the new version of the file is distributed normally to the rest of the network of nodes.
  6. Eventually the URLs with the most frowny faces are pruned from the list.
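A rough Python sketch of these join steps. Every URL, HTTP endpoint, and page name here is invented for illustration; this is not an existing protocol:

    # join_sketch.py -- illustrative only; endpoints and page names are assumptions.
    import json
    import urllib.request

    NODE_LIST_PAGE = "NodeList"        # the "list of URLs of every node" page

    def fetch(url, timeout=5):
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")

    def join_network(my_url, bootstrap_url):
        """Sketch of steps 1-4: learn the nodes, mirror the pages, add ourselves."""
        # Step 2: ask the bootstrap node for the URLs of the other nodes.
        nodes = json.loads(fetch(bootstrap_url + "/nodes"))

        # Step 3: pull the latest copy of every page from a reachable node.
        pages = {}
        for name in json.loads(fetch(bootstrap_url + "/pages")):
            pages[name] = fetch(bootstrap_url + "/page/" + name)

        # Step 4: ping the other nodes; tag unreachable ones with a frowny face.
        status = {}
        for node in nodes:
            try:
                fetch(node + "/ping")
                status[node] = "ok"
            except OSError:
                status[node] = ":-("
        status[my_url] = "ok"

        # Saving the node list as an ordinary wiki page lets the new version
        # propagate to the rest of the network like any other edit (step 5).
        pages[NODE_LIST_PAGE] = json.dumps(status, indent=2)
        return nodes, pages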

backup tools

See SynchronizeTip.

distributed file systems

See SynchronizeTip.

specialized hardware

We don’t need any of this stuff for version 1.0, but it might help reduce hardware costs for very large systems.

At first I thought I would need some sort of custom, or at least specialized, fault-tolerant hardware – perhaps the NSLU2 or something like it: NSLU2 wiki: How to back up your Linux box with NSLU2

"better than RAID"

“If you are a sysadmin contemplating the use of RAID, I strongly encourage you to use EVMS instead. Its a more flexible tool that uses RAID under-the-covers, and provides a better and more comprehensive storage solution than stand-alone RAID.” http://linas.org/linux/raid.html links to http://evms.sourceforge.net/

advanced mathematical techniques

We don’t need any of this stuff for version 1.0, but it might help reduce hardware costs for very large systems.

For example, say we want to guarantee that no files are lost when any one disk fails, and we want to be able to store N disks full of files. The simplest scheme is to add a single extra disk holding the XOR (parity) of the N data disks – the idea behind RAID 5 – so that any one lost disk can be rebuilt from the survivors.
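A toy demonstration of that single-parity idea, with short byte strings standing in for disks:

    # xor_parity.py -- toy demonstration of N+1 single-parity protection:
    # any one "disk" can be rebuilt from the other N plus the parity disk.

    def xor_bytes(blocks):
        """XOR together equal-length byte strings."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data_disks = [b"AAAA", b"BBBB", b"CCCC"]      # N = 3 "disks" of data
    parity_disk = xor_bytes(data_disks)           # the one extra parity "disk"

    # Pretend disk 1 failed; rebuild it from the survivors plus the parity.
    rebuilt = xor_bytes([data_disks[0], data_disks[2], parity_disk])
    assert rebuilt == data_disks[1]
    print("recovered:", rebuilt)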

Other erasure codes (such as the one used by RAID 6) are even more robust against drive failure.

see ErasureCode.

stuff that might, possibly, have some tangential relevance

WikiWikiSync: Leslie Michael Orchard writes: “I have a wiki on my laptop, my PDA, and on my webserver. I’d like a way in which I can keep all three of them synched up in part or in whole. Because, I use my laptop for personal notes in places where there is no internet access and I’d like certain categories of those notes to be public on my webserver. … I also want to be sure not to clobber changes which might have been done while I’ve been away twiddling. …”

Wikipedia:Reliability%2C_Availability_and_Serviceability

Wikipedia:Backup_validation

DAV: The one component most likely to fail in a data storage array is a hard drive. RAID does a great job preserving the data when any single hard drive fails. I want something similar to that, except even more fault tolerant. I want something that not only preserves the data, but also keeps on allowing people to read files and write files, even when one of the power cords is unplugged.

MAID

MAID is a technique that, given a certain collection of disks, tries to make them last as long as possible (at the cost of a small reduction in performance).

See wear leveling for details.

other similar-sounding systems

[this describes something like what I want … but much, much larger. Can I actually buy a small version of this ?] “The technology used a proprietary web server from company X. The system was large: it consisted of hundreds of Unix-based SMP servers and a handful of mainframes located in two cities, mirrored in real time so that a major catastrophe in one city would not disrupt the operation of the system. It cost tens of millions of dollars, … The powerful, personal lesson that I take away from this is never, ever use proprietary software for something important. Even if your life depends on it. Especially if your life depends on it. If a bug needs fixing, I can hire a programmer, or two or three or a dozen, to fix it. The cost will be less than waiting for the vendor to fix it for me.” http://linas.org/theory/freetrade.html

file server hardware

Perhaps I could take 2 (or more) of these file servers, and with a little bit of additional software convert them into a single fault-tolerant file server system.

Would such a distributed fault-tolerant file server system be a good base for my fault-tolerant wiki?

possibly related stuff


2007-02-19

Community:HighTrafficWiki http://communitywiki.org/HighTrafficWiki


In a democracy, information access requires an information base secure from intrusion, distortion, and destruction; one protected from both physical and technological deterioration. -- Patricia Wilson Berger, 1989 President of the American Library Association http://www.clir.org/pubs/reports/graham/intpres.html


http://c2.com/cgi/wiki?FaultTolerance


lvm2: “the Linux Logical Volume Manager … groups arbitrary disks into volume groups.” mdadm: “a program that can be used to create, manage, and monitor MD devices (software RAID) …” (Can this be used to group one hard drive in Oregon and another hard drive in Tulsa into a RAID system?)


“storage space is no longer an issue. A couple of weeks ago, Amazon announced S3 (Simple Storage Service). For 15-cents/month you get a gigabyte of online storage.” http://www.wikisquared.com/2006/05/storage_space_i.html (yet another online file server)



Should I put this distributed wiki project on Sourceforge?


“The Linux Virtual Server is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system. The architecture of the server cluster is fully transparent to end users, and the users interact as if it were a single high-performance virtual server.” http://www.linuxvirtualserver.org/


would any of the items in the “Distributed Computing” section of https://help.ubuntu.com/community/UbuntuScience be useful in assembling a fault-tolerant data store?


“The aim is that if you unexpectedly switch off the computer, or if it crashes, then on re-booting the computer is as similar as possible to its state just before the disaster, and certainly hasn’t become inconsistent or corrupt.” http://rrt.sc3d.org/Software/Tau/Doing%20it%20better%201999-06-10%20c


“I’m working on a user friendly mirroring/monitoring system, so I (and other users) can keep multiple copies of their site in sync at multiple Memebot (or other) servers.” http://www.memebot.com/faq.html?faqindex=4#faqanswer


decentralization, in general – perhaps a topic for my fault-tolerant wiki

Pavatars are something like Gravatars, but they are independent of a central server.

use cases

<blockquote> Our new goal is to have no single point of hardware failure be able to bring down any of our sites, to be able to upgrade OSes without bringing everything down, and to be able to recover from very major problems in 15 minutes. This goal is fairly absurd. There will always be single points of failure. The ozone layer, for example. A more accurate goal would be “No single hard drive, power supply, or fan failure takes us out of commission for even 1 second. Any other server failure can be recovered from in 15 minutes. Routine Windows Updates will no longer bring our site down. If something happens to the colo facility, its power supply, or the City of New York, well, we go off the air for a while.” … In choosing servers, there’s a lot of benefit to using totally standard, identical parts. It makes it easy to swap things around when you need to scale or when something fails, and things are cheaper by the dozen. In order to avoid going totally broke, we still want reasonably affordable servers, but my requirement of “no single point of failure” means we need servers with dual power supplies, hot-swap RAID, and everything engineered for longetivity. … </blockquote> http://www.joelonsoftware.com/articles/ColoExpansionPart1.html


The Plastic File System is an LD_PRELOAD module for manipulating executing programs’ ideas of what the file system looks like. This allows virtual file systems to exist in user space, without kernel hacks or kernel modules. http://members.canb.auug.org.au/~millerp/README.html


DAV: perhaps promote my distributed wiki as “the distributed wiki”, with initial discussions on distributed computing, … what else would be related? … anti-censorship? Well, no, I think starting out without paranoid secrecy. … Ah – networks of trust, trust metric, classification. … hardware fault tolerance … hardware failure analysis, software failure analysis … CommunityWiki:OpenTheory ? …

Community:DistributedEditing

competitors

fault-tolerant data store: competitors

“Western Digital Puts Storage on VPN: Western Digital has introduced a new dual-drive shared network storage product for small business.” by Chris Mellor, Techworld, Friday, February 23, 2007 http://www.pcworld.com/article/id,129325-c,storage/article.html <q> The My Book World Edition II can have [up to] 1TB of data … The device can run independently of a host PC and has a browser-based link to the Internet. Part of its software makes the device a target in Windows ‘Save As’ and ‘Open’ operations whenever the user is connected to the Internet. That means users can have the device attached to their base office PC and access it over the Internet when hundreds or thousands of miles away. They can also backup new data on their laptop back to their office using the same software.

Small workgroups can share documents and other files over the Internet or within a local network without the need for separate FTP server or network-attached storage (NAS) functionality.

… quasi-VPN tunnels are set up between the device and valid accessing PC users across Internet links. It uses Mionet software. Accessing users are validated via a Mionet facility and can then read and write files on the device securely. Transmitted data is encrypted. (Read the Mionet FAQ here.)

The My Book World Edition II comes with wired or wireless access to a host PC and in 500GB or 1TB capacities. It is user-serviceable, allowing users to open the enclosure to replace one or both of the hard drives by following the detailed instructions, without voiding the system’s three-year limited warranty.

A base version – Edition I – comes with 500 Gbyte of capacity and retails for US$279. The twin drive Edition II costs $499 (about £270 at today’s exchange rates).

</q>

… mentions: MioNet? FAQs http://www.mionet.com/products/faqs.jsp

online file storage

online file storage / internet file storage Drive Headquarters: online storage service providers http://drivehq.com/

[fault-tolerant data store: online file storage / internet file storage] http://datadepositbox.com/

discussion

man mdadm mentions “… mirroring over a slow link” – does it already do “twin data backup” ?


The following sounds like a good property for my fault-tolerant data store -- which underlying filesystems have this property?

XFS journals metadata. This means that if you pull the power cord out of your computer, when you turn it on again, XFS will be in a consistent state (i.e. you will see your directories, and be able to list the files they contain). This is an advantage compared to not being able to see anything. However, you will probably have lost the data from files that were open at the time of pulling out the power cord because XFS does not journal data changes. SGI (August 28, 2006). XFS FAQ. XFS is not alone here - some other file systems (such as JFS) also journal metadata changes and not data changes because it’s a good strategy when mediating between speed requirements and safety. -- Wikipedia:Xfs
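Whatever the underlying filesystem does, the wiki software itself can protect page data with the usual write-to-a-temp-file / fsync / rename pattern, so a page file is either wholly old or wholly new after a power cut. A minimal sketch:

    # atomic_save.py -- the classic write-temp / fsync / rename pattern, so a
    # wiki page file is never left as a half-written mix after a crash.
    import os

    def atomic_save(path, text):
        tmp = path + ".tmp"
        with open(tmp, "w", encoding="utf-8") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())      # force the data itself to disk
        os.replace(tmp, path)         # atomic rename on POSIX filesystems
        # Optionally fsync the directory too, so the rename itself is durable.
        dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
        try:
            os.fsync(dir_fd)
        except OSError:
            pass
        finally:
            os.close(dir_fd)

    atomic_save("FrontPage.txt", "Welcome to the distributed wiki.\n")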

inspirational quotes on fault tolerance

“While no vessel can ever be regarded as unsinkable, it should be capable of absorbing a number of errors and misfortunes before there is danger of sinking.”

Statement of the Royal Institution of Naval Architects. Seatrader Review, December 1997, page 7 http://www.uscg.mil/hq/gm/moa/docs/fishco.htm

discussion

“A place to share about technology, business continuity, and crisis management.” http://stevewatson.net/

“Setting up with a computer with recovery and security in mind” http://www.stevewatson.net/index.php/2005/10/24/setting-up-with-a-computer-with-recovery-and-security-in-mind/


Wikipedia:High-availability_cluster

(would “anycast” be a good addition? Wikipedia:Anycast )

The High Availability Linux Project wiki http://wiki.linux-ha.org/HomePage Does this already do what I want (synchronize between a Linux box in Wichita and a Linux box in Tulsa), or does it only work for clusters connected to the same Ethernet network?


Wikipedia: Filesystem in Userspace

DAV: I imagined something like the http://synology.com/ products, except twin boxes …


http://wiki.apache.org/lucene-hadoop/

Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, where the application is divided into many small fragments of work, each of which may be executed or reexecuted on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework.

The intent is to scale Hadoop up to handling thousands of computers.
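To get a feel for the map/reduce idea (independent “map” fragments that can be re-run on any node, then a “reduce” step that merges their output), here is a toy single-process word count; it does not use Hadoop itself:

    # mapreduce_toy.py -- a toy, single-process illustration of the map/reduce
    # paradigm that Hadoop runs across a cluster.
    from collections import defaultdict

    def map_phase(fragment):
        """Each input fragment can be mapped independently, on any node."""
        return [(word, 1) for word in fragment.split()]

    def reduce_phase(pairs):
        """Merge the intermediate (key, value) pairs."""
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    fragments = ["the quick brown fox", "the lazy dog", "the fox"]
    intermediate = []
    for frag in fragments:          # Hadoop would farm these out to many nodes
        intermediate.extend(map_phase(frag))

    print(reduce_phase(intermediate))   # e.g. {'the': 3, 'fox': 2, ...}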

“YaK?:: Links to Some Google Papers on Distributed Systems” http://wiki.yak.net/950


(todo: play with this software) (todo: make some comments here on “distributed wiki”, and link to Software bazaar: distributed wiki)

a little java program which remotely accesses every page of kayakwiki, and creates a backup of all of the wiki text. http://www.kayakforum.com/cgi-sys/cgiwrap/guille/wiki.pl?Backup_Strategy
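The same trick is easy in any language. A naive Python sketch; the index and raw-page URL patterns are pure guesses and would need to match whatever wiki engine is being backed up:

    # wiki_backup.py -- naive sketch: back up a wiki by fetching the raw text
    # of every page.  The URL patterns below are invented placeholders.
    import os
    import urllib.parse
    import urllib.request

    WIKI = "http://wiki.example.org"             # hypothetical wiki
    INDEX_URL = WIKI + "/?action=index;raw=1"    # hypothetical page index
    RAW_URL = WIKI + "/?action=browse;raw=1;id={name}"

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def backup(dest="wiki-backup"):
        os.makedirs(dest, exist_ok=True)
        for name in fetch(INDEX_URL).split():
            text = fetch(RAW_URL.format(name=urllib.parse.quote(name)))
            with open(os.path.join(dest, name + ".txt"), "w", encoding="utf-8") as f:
                f.write(text)

    if __name__ == "__main__":
        backup()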

terminology

Syncing Versus Backups

Some people confuse syncing with backing up, which is not surprising because each employs similar techniques and strategies. Both syncing and backing up take the information you have stored in one place and copy it to another place -- http://www.oreillynet.com/pub/a/mac/2006/09/05/synching.html?page=3

testability

Is there a better way to make sure this sort of thing never happens again?

Is there a better way to test the backup system before it is too late?

data backup

See DataBackup.

discussion

(post to wikifeatures: fail-safe wiki) Wikipedia:MogileFS http://www.danga.com/mogilefs/

  1. No single point of failure – all three components of a MogileFS? setup (storage nodes, trackers, and the tracker’s database(s)) can be run on multiple machines, so there’s no single point of failure. (you can run trackers on the same machines as storage nodes, too, so you don’t need 4 machines…) A minimum of 2 machines is recommended. …
  2. “Better than RAID” – in a non-SAN RAID setup, the disks are redundant, but the host isn’t. If you lose the entire machine, the files are inaccessible. MogileFS? replicates the files between devices which are on different hosts, so files are always available.

Writing a distributed filesystem in 24 hours http://wsobczuk.blogspot.com/2006/08/writing-distributed-filesystem-in-24.html

Amazon Simple Storage Service (Amazon S3)

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

http://www.amazon.com/gp/browse.html?node=16427261
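For what it is worth, storing and retrieving wiki pages through that interface is only a few lines today. This sketch uses the boto3 library (not mentioned in the quote above); the bucket name is made up, and AWS credentials are assumed to be configured already:

    # s3_pages.py -- sketch of keeping wiki page text in S3 via boto3.
    # Bucket name and key layout are invented for illustration.
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-distributed-wiki-pages"   # hypothetical bucket

    def save_page(name, text):
        s3.put_object(Bucket=BUCKET, Key="pages/" + name,
                      Body=text.encode("utf-8"))

    def load_page(name):
        obj = s3.get_object(Bucket=BUCKET, Key="pages/" + name)
        return obj["Body"].read().decode("utf-8")

    save_page("FrontPage", "Welcome to the distributed wiki.\n")
    print(load_page("FrontPage"))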


http://blog.dreamhost.com/2006/08/25/ask-dreamhost-customers/ http://blog.dreamhost.com/2006/10/13/sales-are-slow/ <q> We’ve got a good system of RELIABILITY and PERFORMANCE already.. but the cost per usable GB is $10. The main problem is the 300GB Fiber Channel drives we use, which are $800 each. Is there anything out there that can do the same but with SATA drives that cost more like $100? Even if we needed twice or four times as many drives for the same performance and reliability, it seems possible!

There are also some REALLY WANT TO HAVE features, though possibly could be passed up if the top three are satisfied.

</q>


Wikipedia:Zfs <q> ZFS, is a free, open-source file system produced by Sun Microsystems … Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them, thus all disks in a pool are used, which balances the write load across them. … ZFS … presently only n+1 redundancy is possible. n+2 redundancy (RAID level 6) is only in the development branch – via the OpenSolaris? distribution[2]. … Matt Dillon started porting ZFS to DragonFly? BSD … </q>


Wikipedia:Talk:RAID#Z_RAID


Wikipedia: comparison of revision control software

post to http://wikiindex.org/Revctrl_wiki

DAV: the list of over a dozen distributed version control systems listed at Wikipedia: List of revision control software#Distributed_model .

It makes no sense to me why we need a dozen different free and open source systems – why don’t people, you know, merge the best features of each into one really good system? You know, the way people merged the best features of each Linux distribution so now there are only 300 different distributions? :-)


add this link to the “fault-tolerant file store” documentation … and email/comment at this link to point to SoftwareBazaar: distributed wiki. http://rambleon.org/2006/03/08/i-knew-this-was-coming/


Building a Self-Healing Network by Greg Retkowski 05/25/2006 http://www.onlamp.com/pub/a/onlamp/2006/05/25/self-healing-networks.html “Greg Retkowski is a network engineering consultant with over 10 years of experience in UNIX/Linux network environments.”

Perhaps I should pitch my fault-tolerant data store ideas at Greg?


“the nascent home NAS market” http://www.engadget.com/2006/09/18/hp-media-vault-nas-we-go-again/

DAV: perhaps “vault” is a nice marketing term?


google for “distributed server”


“a wiki file system” Wikipedia:Wikifs

“Hadoop is a collection of Free Java software previously developed by the Nutch project but now maintained by Lucene. The system includes a distributed filesystem reminiscent of GoogleFS? named the “Hadoop Distributed File System” (or just DFS).” Wikipedia:Hadoop … “Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework.” http://lucene.apache.org/hadoop/about.html


How To Share State Among Multiple Computers http://c2.com/cgi/wiki?HowToShareStateAmongMultipleComputers


“Svk is a decentralized version control system. It uses the Subversion filesystem but provides additional, powerful features.” http://www.wikiindex.com/Svk_Wiki


http://freehaven.net/ freehaven-dev mailing list Archives http://archives.seul.org/freehaven/dev/


ideas for integrating automatic backup into the OS

keywords: “virtual desktop”


“The tape checkpoint is independent of machine configuration and can be restored and restarted on any system with sufficient disk space. In fact, this is the way KeyKOS? is distributed.” … “Key Logic developed a prototype UNIX-compatible system implemented on top of KeyKOS?. At UNIFORUM ‘90, we demonstrated this system by literally pulling the plug on the computer at random. Within 30 seconds of power restoration, the system had resumed processing, complete with all windows and state that had been on the display. We are aware of no other UNIX implementation with this feature today.” -- http://www.agorics.com/Library/KeyKos/checkpoint.html


“DynaOS - Dynamic Operating System: An OS that can boot from remote server.” by Niyaz PK


"The Next Few Decades of Computing" by Linas Vepstas 2000-2004 describes the “Eternity Service”. It makes sure that I can access “my” files from any computer in the world, even if my computer crashes. Accessing previous versions would also be nice. Also lists many other nice features such a service should have (and a few features that conflict with each other).

The distributed wiki looks like it might fulfill a big fraction (but not all) of these features.


"I Want a Pony: Snapshots of a Dream Productivity App" (2005) mentions “IMAP-like syncing - For me, the Hegelian truth between the “OS <whatever>” vs. “Web OS” rhubarb is that’s it’s both and neither. Personally, I want my important information stored on a secure server, but I want the data and its structure seamlessly sync-able to applications on the web, via wireless devices, and yeah, in my most important desktop apps. Why is there not “IMAP” for my address book and task outlines? Why the heck isn’t there a standard calendar format that lets me collaborate with colleagues and use whatever program I want? Why do I have to learn CVS to have smart versioning on plain text? I don’t know either. (insert image of Merlin angrily kicking his slippers at the television)”

Yes, I want this distributed wiki to do both syncing and easy-to-use versioning on plain text. I also am astonished that someone hasn’t already done it. – DavidCary


discussion

“Publius: Censorship Resistant Publishing System” http://www.cs.nyu.edu/~waldman/publius/

“Submitting documents to Eternity” http://cypherspace.org/adam/eternity/

“The OceanStore? Project: Providing Global-Scale Persistent Data” http://oceanstore.cs.berkeley.edu/ http://oceanstore.sourceforge.net/

“the Peer Distributed Transfer Protocol” http://pdtp.org/ … Ruby programming language …

“Real-time Mantra highlights the key issues in real-time system design. It also cover real-time design patterns and issues in complex software design.” http://eventhelix.com/RealtimeMantra/ links to Fault Handling and Fault Tolerance http://www.eventhelix.com/RealtimeMantra/FaultHandling/


Would Koopman be interested in my “twin backup” ideas? Dependable Embedded Systems Research:


DAV: can I hijack the IMAP protocol to produce something more like a fault-tolerant wiki? Mark Crispin’s Web Page http://staff.washington.edu/mrc/ “My main programming project is the IMAP toolkit, which is the internal engine used by the popular Pine mailer. The IMAP toolkit is the reference implementation of the Internet Message Access Protocol (IMAP), which I invented in 1985. IMAP, often called the best-kept secret in electronic messaging, is a protocol which enables an advanced distributed client/server electronic mail paradigm.” … “A good rule of thumb to keep in mind is that any field of study which has “science” in its name is not a science.”
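A rough sketch of the “hijack IMAP” idea: store each page revision as a message in an IMAP folder, so any existing IMAP server (and its backup and replication machinery) becomes the page store. Host, credentials, and folder name below are placeholders:

    # imap_pages.py -- sketch: use an IMAP mailbox as a wiki page store.
    # Host, credentials, and folder name are placeholders.
    import imaplib
    from email.message import EmailMessage

    HOST, USER, PASSWORD = "imap.example.org", "wiki", "secret"
    FOLDER = "WikiPages"

    def save_revision(page_name, text):
        """Store one wiki page revision as one message in an IMAP folder."""
        msg = EmailMessage()
        msg["Subject"] = page_name
        msg.set_content(text)
        imap = imaplib.IMAP4_SSL(HOST)
        imap.login(USER, PASSWORD)
        imap.create(FOLDER)            # server answers "NO" if it exists; harmless
        imap.append(FOLDER, None, None, msg.as_bytes())
        imap.logout()

    save_revision("FrontPage", "Welcome to the distributed wiki.\n")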


(would this be a good person to discuss the “twin backup” idea with?) http://www.ludism.org/mentat/MarkSchnitzius


“OpenBinder? is a complete open-source solution supporting new kinds of component-based system-level design. A variety of production-quality operating system services (not included with OpenBinder?) have already been implemented using it, including a distributed user interface framework, display server and window manager, and media framework.” http://openbinder.org/


fault tolerance load balancing


alternative to distributed wiki? Or one way to implement it?

Keeping Your Life in Subversion by Joey Hess 01/06/2005 http://www.onlamp.com/pub/a/onlamp/2005/01/06/svn_homedir.html

another alternative to distributed wiki http://www.datadepositbox.com/partners/?pid=08422f400a7c200547f40838027632e9

(subversion and SVN and SVK) “SVK, the SVN without the .svn” http://weblogs.goshaky.com/weblogs/page/lars?entry=svk_the_svn_without_the “I switched to SVK, a distributed version control system built on top of SVN and written in Perl.”


“ATA over ethernet”

“Virtual Linux Server Project” Linux Virtual Server


“If it’s true that the web is becoming the world’s memory, regardless of its problems, then I feel that we, the creators and owners of this immense wealth of information, ought to have a free (free as in freedom) way of accessing it. A way that doesn’t depend on the health or ethical decisions of a particular company (Google, Microsoft, or whatever). A way that doesn’t require a rack of 400 servers (and growing) or immense amounts of bandwidth in order to index what’s there and process it.

Unfortunately, nobody has come up with a solution yet. However, any ideas are most definitely welcome.”

http://freesoftwaremagazine.com/free_issues/issue_08/editorial/

the comments respond

There is a solution, it’s called a distributed data store (Wikipedia:Distributed_data_store) and one of more well known ones is Freenet (Wikipedia:Freenet).

http://freesoftwaremagazine.com/cgi-bin/comments/comments.pl?article_path=issue_08/editorial


http://freesoftwaremagazine.com/free_issues/issue_08/intro_tla/index_p2.html (discusses TLA, apparently a distributed version control system – exactly what a distributed wiki needs) (includes a “type this” tutorial)


“If the Empire tears one [host] up, the data can move to another one. This is probably part of Distributed Consumerium, a later phase of the project.”


failover http://linuxgazette.net/issue32/tag_ping.html

load-balancing

mention “Extremely Reliable Systems”


(Is this joey a wiki programmer? Is there any way I can persuade him to help me build a fault-tolerant wiki?) http://kitenet.net/~joey/blog/entry/ikiwiki_split.html


Wikipedia:Talk:RAID#Z_RAID


(does the “replicated database” mentioned here … would this analysis also apply to distributed file server? distributed fault-tolerant wiki?) Secure Property Titles with Owner Authority Copyright (c) 1998,1999,2002,2005 by Nick Szabo permission to redistribute without alteration hereby granted http://szabo.best.vwh.net/securetitle.html


(todo: check out “wackamole”, available on Ubuntu systems via synaptic.) “Whackamole is an application that helps with making a cluster highly available. … “

(todo: check out “ucarp”, available via synaptic.) “UCARP allows a pair of hosts to share common virtual IP addresses in order to provide automatic fail-over. …” Homepage: http://ucarp.org


wikifeatures : failsafe wiki

“I am considering making it possible to use some form of mirroring or distributed storage of wiki pages to inconvenience censorship. I welcome suggestions and advice on the matter.” http://www.sacredelectron.org/why.shtml


Todo: Build a distributed wiki.

http://communitywiki.org/CommunityRepository recommends “darcs”, a distributed revision control system. http://abridgegame.org/darcs/ http://darcs.net/DarcsWiki


Wikipedia:ZSeries


http://c2.com/cgi/wiki?FailSafeWiki http://c2.com/cgi/wiki?WcpStatus


[electronic failure modes] “Re: How many hours will an average TV work?” by Madhu Siddalingaiah http://madsci.org/posts/archives/aug98/899004396.Eg.r.html discusses the various failure modes of typical television sets. The picture tube will eventually wear out, but typically it lasts (on average) at least 10 years. “The semiconductors, passive components, and connections can theoretically last forever.” but the article lists situations that can cause them to fail.


Would an account here at this non-profit help me set up my “fail-safe wiki” ? If so, set up that account. http://www.obscure.org/info/faq.shtml#why (I found this web site listed in the “mirror sites” section of the Physics FAQ http://www.xs4all.nl/~johanw/PhysFAQ/index.html )


fail safe wiki? http://wikidev.net/Talk:MediaWiki_hosting_and_development


http://c2.com/w4/apl4tec/wiki.cgi?OnePersonManyMachines


“Why do so many chips fail?” article by Nicolas Mokhoff 2006-02-09 says “There is very little that can be done about chip failures -- other than to design chips correctly in the first place -- concluded a panel Wednesday (Feb. 8) at DesignCon 2006” http://eet.com/news/design/showArticle.jhtml?articleID=179102593


yet another single-point failure

“Although we have triple redundant pressure sensors and dual redundant pressurization valves, when this component shorted, it caused the valve controller board to reboot, effectively eliminating the redundancy.” http://www.spacefellowship.com/News/?p=1414


the difference between Failsafe vs. Redundant (also mentions “fail soft”: “Example: when your car computer cannot manage closed loop but allows the engine to continue in open loop rather than leaving you stranded.” … aka “limp mode”).


Notes from a conference talk about a distributed web service in python: http://wiki.sheep.art.pl/Notes_from_RuPy_2007

Interesting. Is this merely talk about what someone wants, or is there an actual implementation already? – DavidCary

It’s an actual, working, deployed implementation (see http://grono.net). They started small but now have tens of servers in use. They refused to give more technical details, being afraid of competition. – RadomirDopieralski


Maybe monotone.ca would fit better than CVS?

About the “plain text”: are you talking about markup that is optional and linking that is automatic?

– PatrickAnderson?

Yes, the “Better SCM Initiative” ( http://better-scm.berlios.de/ ) mentions that Monotone is better than CVS. (But it also mentions 8 other version control systems that are also better than CVS). So, have you seen the Revision Control wiki ( http://revctrl.org/ ), which appears to be a better place for comparisons?


Perhaps some of the file synchronization utilities mentioned on the SynchronizeTip page would be helpful.

“Unison http://www.cis.upenn.edu/~bcpierce/unison/ is a file-synchronization tool for Unix and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.”

Unison wiki.
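The essence of what such a tool does (far more carefully than this) is to compare the two replicas and copy whichever side changed. A deliberately naive sketch that lets the newer modification time win and, unlike Unison, cannot detect conflicting edits made on both sides:

    # naive_sync.py -- deliberately naive two-way sync between two replica
    # directories: newer modification time wins.  Unison adds real update
    # detection and conflict handling; this sketch does not.
    import os
    import shutil

    def sync(dir_a, dir_b):
        names = set(os.listdir(dir_a)) | set(os.listdir(dir_b))
        for name in names:
            a, b = os.path.join(dir_a, name), os.path.join(dir_b, name)
            if not os.path.exists(a):
                shutil.copy2(b, a)                 # only in B: copy to A
            elif not os.path.exists(b):
                shutil.copy2(a, b)                 # only in A: copy to B
            elif os.path.getmtime(a) > os.path.getmtime(b):
                shutil.copy2(a, b)                 # A is newer
            elif os.path.getmtime(b) > os.path.getmtime(a):
                shutil.copy2(b, a)                 # B is newer

    sync("replica-a", "replica-b")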


the Circle is an open source, scalable, decentralized, peer to peer application. “The Circle is written in Python. It runs on Linux and Windows.” The Circle is based on the Chord lookup protocol. “no single point of failure.” “As long as there is one Circle peer running, anywhere in the world, there’s still a network.” infoAnarchy wiki: The Circle.
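The core of a Chord-style lookup is consistent hashing: node addresses and page names are hashed onto the same ring, and a page lives on the first node whose hash falls at or after the page’s hash. A toy version (real Chord adds finger tables so lookups take O(log n) hops); the node names are invented:

    # chord_toy.py -- toy consistent-hash lookup in the spirit of Chord.
    import hashlib
    from bisect import bisect_left

    def ring_hash(key, bits=32):
        digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % (2 ** bits)

    class Ring:
        def __init__(self, nodes):
            # Sort node addresses by their position on the hash ring.
            self.points = sorted((ring_hash(n), n) for n in nodes)

        def node_for(self, page_name):
            """The successor node of the page's hash owns the page."""
            h = ring_hash(page_name)
            i = bisect_left(self.points, (h, ""))
            return self.points[i % len(self.points)][1]   # wrap around the ring

    ring = Ring(["wiki-wichita.example.org", "wiki-tulsa.example.org",
                 "wiki-portland.example.org"])            # hypothetical nodes
    print(ring.node_for("FrontPage"), ring.node_for("DistributedWiki"))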

how many levels

which architecture is best?

The “three level” system seems more complicated, but it helps confine our special “distributed wiki” software to the middle layer – it uses standard web browsers on one side, and some standard data storage system on the back end. The “back end” nodes might be …

more scary stories

See DataBackup.

uptime and high availability

Joel gives some very good advice in this article. However, I hope things are not as bad as this particular quote implies. (Or perhaps what Spolsky thinks of as “really high uptime” is a much higher and far more difficult target than my hoped-for “pretty high uptime”.) For a distributed wiki, I tried to think up every possible thing that could go wrong, and I only came up with 4: “One or more things wrong in Wichita”, “One or more things wrong in Tulsa”, “Something wrong elsewhere”, and “Simultaneous failures in multiple geographic locations”. That seems to cover everything that could possibly go wrong. Or is there something else I missed?

For the first 2 types of failures, I don’t wait for a human to switch out failed parts – I can let the software running in areas that haven’t failed (yet) automatically bypass the city with the failure, then continue to run (“limp mode”) until a human figures out what went wrong and fixes it, and then the software discovers that city is now working fine and starts using it again.
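In code, that policy is little more than a health check plus a preference list. A sketch; the city names and ping URLs are placeholders:

    # limp_mode.py -- sketch of "bypass the failed city, keep running,
    # pick it up again when it recovers".  URLs are placeholders.
    import urllib.request

    SITES = {
        "Wichita": "http://wichita.example.org/ping",
        "Tulsa":   "http://tulsa.example.org/ping",
    }

    def healthy_sites(timeout=2):
        """Return the cities that currently answer their ping URL."""
        alive = []
        for city, url in SITES.items():
            try:
                urllib.request.urlopen(url, timeout=timeout)
                alive.append(city)
            except OSError:
                pass                       # failed city: silently bypass it
        return alive

    def choose_site():
        alive = healthy_sites()
        if not alive:
            raise RuntimeError("simultaneous failures in all locations")
        return alive[0]                    # "limp mode" if only one city is left

    print("serving reads/writes from:", choose_site())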


Tudor Marian, Ken Birman, and Robbert van Renesse. “A Scalable Services Architecture”. Proceedings of the IEEE Symposium on Reliable Distributed Systems (SRDS 2006), October, 2006. Available online: http://www.truststc.org/pubs/163.html ; http://www.cs.cornell.edu/projects/quicksilver/public_pdfs/marian-Scalable-Services-srds06.pdf ; and mirrored elsewhere.


Is this similar to the “cloud computing” described at http://www.cloudcomputing.gopc.net/ ?


"The Tiger Video Fileserver" by William J. Bolosky, Joseph S. Barrera, III, Richard P. Draves, Robert P. Fitzgerald, Garth A. Gibson, Michael B. Jones, Steven P. Levi, Nathan P. Myhrvold, Richard F. Rashid. “Tiger is a distributed, fault-tolerant real-time fileserver.”


"A Conversation with Jim Gray" interview by Dave Patterson has some interesting ideas on data storage and how to move bits from one place to another.


"Direct-to-NFS" by Michael Alyn Miller discusses the implementation of “Target Revocable E-Mail”, involving clustering with live fail-over, plenty of redundancy, fault-tolerance, … the same qualities I want the distributed wiki to have.


“Offline access to a wiki” http://www.pycs.net/users/0000177/2004/07/29.html#P279 is a step towards a distributed wiki …


“all supported configurations of the 7410 have multiple paths to all JBODs across multiple HBAs. So even without NSPF, we have the ability to survive HBA, cable, and JBOD controller failure.” -- Eric Schrock's Weblog


"So how much work was it?" has some comments about “Enthusiasts have been doing homebrew NAS” and comparing it to “a complete, polished solution that stands up under the stress of an enterprise environment”.


"The Foundation for P2P Alternatives" wiki – can they help me build a high-reliability wiki?


the Fedora Directory Server has “multi-master replication … gives administrators the ability to provide no single point of read failure and no single point of write failure, so the directory is always available. … automatically recovers at startup in the event of power failure or other outage. …” the Fedora Directory Server wiki: FAQ

So can I make a high-availability wiki out of it?


"Back to the Future for Data Storage" mentions “scalable databases” – Bigtable from Google, SimpleDB? from Amazon, CouchDB?, Hypertable, etc. Are any of them useful for a distributed wiki?


Aegis is a transaction-based software configuration management system. It provides a framework within which a team of developers may work on many changes to a program independently, and Aegis coordinates integrating these changes back into the master source of the program, with as little disruption as possible.

Is this useful for building a high availability wiki?


"The Risks of Distributed Version Control" by Ben Collins-Sussman. Do any of these risks apply to high-availability wiki, and if so, what can/should we do to mitigate those risks?


Under certain conditions it is impossible to guarantee that distributed processes will come to consensus. See the “FLP impossibility proof” at Wikipedia: consensus (computer science). Do those conditions apply to a high-availability wiki? If we cannot guarantee consensus, can we design it so that when some parts fail (a) the remaining functional parts are likely to come to a correct consensus, and (b) any problems caused by a lack of consistency are small and easy to fix?

Coda Hale. "You Can't Sacrifice Partition Tolerance". 2010.
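One standard way to get property (a) above without solving full consensus is quorum replication: write to W of the N replicas and read from R of them, with R + W > N so every read set overlaps every write set. A toy sketch, with in-memory dictionaries standing in for replica servers:

    # quorum_toy.py -- toy quorum replication: with R + W > N, a read always
    # overlaps the most recent write, even if some replicas are down or stale.
    N, W, R = 3, 2, 2                     # 3 replicas, write to 2, read from 2
    replicas = [{} for _ in range(N)]     # each replica maps page -> (version, text)

    def write(page, text, version):
        # Suppose only the last W replicas are reachable and take the write.
        for rep in replicas[-W:]:
            rep[page] = (version, text)

    def read(page):
        # Read from the first R replicas.  Because R + W > N, this set must
        # overlap the write set, so the newest version is among the answers.
        answers = [rep.get(page, (0, "")) for rep in replicas[:R]]
        return max(answers)               # the highest version number wins

    write("FrontPage", "old text", version=1)
    write("FrontPage", "new text", version=2)
    print(read("FrontPage"))              # -> (2, 'new text')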


Yet another wiki that has gone offline and lost data: http://www.gentoo-wiki.info/ http://en.gentoo-wiki.com/outage-10-08.html


Is this something that can be used by a fault-tolerant wiki?

“Rackspace is open-sourcing the specs for its Cloud Servers and Cloud Files APIs under the Creative Commons 3.0 Attribution license, enabling third-party developers to copy, implement and rehash them as they see fit.” – http://www.techcrunch.com/2009/07/23/rackspace-opens-the-cloud/


“… the … sort of fault-tolerant distributed system that makes Gmail trustworthy today. … Whenever ubiquitous reliable non-local data storage arrives, it won’t be a minute too soon. The world now buys hundreds of millions of hard drives every year, so I bet a lot more than a million drives now die every year.” -- "Won't someone PLEASE think of the hard drives?!" by Daniel Rutter 2009


What is the "Best way to ‘harden’ embedded ext4 file server against unexpected loss of power?"

Other “Server Fault: load-balancing” questions and “Server Fault: balancing” questions might have some interesting ideas.


“Distributed Storage Failure Recovery” [2] describes “the distributed storage project” -- how can the “distributed wiki project” work with them?


comments

MattisManzel:
This is an excellent page on a very important issue which is a prerequisite for democracy, true. Well done.

Moved the page to a CamelCase? page name and added distributed wiki to the local names. It works soon, patience. ;)

