Wiki Ide

A 'Wiki Integrated Development Environment' or Wiki Ide would be a collaborative development environment for applications and services, and probably suites of such.

Inspired by Programming In Wiki.

The virtues one would expect of any modern Wiki would be part of the Wiki Ide (if it is to deserve the 'Wiki' name) - these include:

The ability to edit the Wiki in a browser, over the web.

Collaboration - multiple editors.

History. A Wiki Ide would go further with this, and allow full source control and 'snapshots' - views of relevant parts of a Wiki collected at a given point in time.

Automatic Link Generation or at least 'easy' link generation (e.g. Wiki Words).

Discussion, Commentary, Markup - akin to Literate Programming, but with Hyper Text; distinct discussion pages and manual pages may also be associated with each page.

User Pages, with user information. This would include information of the "I'm gonna blab about myself for a while now" sort, but could easily include a wide variety of other information: images, contact information, public keys to check signed contributions, certificates signed by various project-leads offering rights to administer, search, or access certain medium-security projects, preferences for syntax highlighting, subscriptions and RSS feeds that continuously send detailed updates back, personal projects, and any additional per-user support (e.g. tracking Thread Mode conversations). On the IDE side, User Pages (or a commonly named page under the user's User Page namespace) would be ideal places to track instantiated processes with throw-away names.

Search, at minimum. In its capacity as an IDE and Refactoring Browser, support for semantically meaningful queries and 'views' of the Wiki contents would be a far better deal. 'View' support would also integrate well with project versioning.

Support for hierarchical namespaces as part of the 'Wiki' would be favored by the 'IDE' aspect. It could be as simple as any page acting as a namespace via some sort of tagging mechanism. This would be similar to 'categories' as provided in a plain wiki, but the flat namespace provided by regular wikis would not readily support the vast supply of redundant names ultimately required for project configuration management, and would create difficulties when a particular project needs to override the meaning of a particular Wiki Word (e.g. due to versioning, or due to a name collision). Disambiguation pages could still be provided, linking to a word under every namespace that provides it in addition to the 'root' namespace. Category pages could still be provided, linking into various projects and instances that are tagged with the category identifier.
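As a rough sketch of how namespace lookup with root fallback might behave, consider the following toy resolver in Python. The namespace tables, page names, and the prefer-local-then-root rule are all assumptions for illustration, not a description of any existing wiki engine:

```python
# Hypothetical sketch: resolving a Wiki Word against hierarchical
# namespaces, preferring a project's local override and falling back
# to the root namespace. All names here are invented.

ROOT = {"BubbleSort": "root/BubbleSort"}

NAMESPACES = {
    "ProjectAlpha": {"BubbleSort": "ProjectAlpha/BubbleSort"},  # local override
    "ProjectBeta": {},                                          # no overrides
}

def resolve(word, namespace):
    """Prefer the project's own page; fall back to the root namespace."""
    local = NAMESPACES.get(namespace, {})
    if word in local:
        return local[word]
    if word in ROOT:
        return ROOT[word]
    raise KeyError(f"{word} not found from {namespace}")

print(resolve("BubbleSort", "ProjectAlpha"))  # the project's override
print(resolve("BubbleSort", "ProjectBeta"))   # falls back to root
```

A disambiguation page would then simply enumerate every namespace in which a given word resolves.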

Similarly, one would expect certain properties associated with programming languages and IDEs:

Semantically significant links (Wiki Words) - referencing values, functions, algorithms, procedures, active services, process descriptions, predicates (which may be associated with fairly large databases constituting propositions that satisfy or explicitly fail to satisfy the predicate), service-clouds, etc. Depending on the context, a reference might be part of a discussion (as per a traditional wiki) or something to-be-applied in some manner or other (e.g. a function in position to be applied over values at 'runtime' or as part of a 'view', a service to be accessed, etc.)

Note: This suggests, possibly requires, a language with both syntax and semantics designed for use in the context of a Wiki. Such a language could work outside that context, but would still involve hyperlink-style lookups on commands identified by Wiki Word.

Note: There are some levels of support available between traditional languages and the one described here. For example, one could limit Wiki Word use to the 'import' statements (i.e. 'import Bubble Sort'). This wouldn't be quite as sweet as just writing 'Bubble Sort this_list' somewhere in your code and having the compiler figure out that you want to apply the Bubble Sort algorithm to the list you offered.
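The 'compiler figures it out' behavior can be approximated today with an explicit registry. This Python sketch uses an invented wiki_word decorator and lookup table - none of it is a real Wiki Ide API - to show a Wiki Word being applied directly to a value:

```python
# Hypothetical sketch: a registry maps Wiki Words to implementations,
# so code can 'apply' a named algorithm directly, approximating
# 'Bubble Sort this_list'. The registry and decorator are invented.

REGISTRY = {}

def wiki_word(name):
    """Register a function under a Wiki Word name."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

@wiki_word("Bubble Sort")
def bubble_sort(xs):
    xs = list(xs)  # don't mutate the caller's list
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def apply_word(name, *args):
    """The 'compiler step': resolve the Wiki Word, then apply it."""
    return REGISTRY[name](*args)

print(apply_word("Bubble Sort", [3, 1, 2]))  # → [1, 2, 3]
```

The limited 'import Bubble Sort' variant would amount to performing the registry lookup once, at import time, instead of at every call site.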

'Project' specification support. A project isn't just an algorithm hanging off on its own; it's a mashup or agglomeration of different pieces of code applied to some unified purpose, such as provision of a service. Essentially, as an IDE, a Wiki Ide needs to provide 'project' pages that describe the construction of various applications. These would serve vaguely the same purpose as Make Files - i.e. specifying how components are to be linked together, not active runs of the service. Projects would have many associated components, such as project description pages, project administration (user rights management, for example), project versioning (allowing snapshots of the linked Wiki Words or components at any given time), project statistics (ideally constructed automatically), project 'test' scenarios, project documentation (e.g. requirements, user-stories, 'cards'), etc.

It might be possible to apply constraints and aspecting at the project-level, too - e.g. whether strict type-checks should be applied, or whether Hard Real Time constraints must be guaranteed, or whether the service or application described by the project is to run in a fixed amount of memory, or where to send the error logs.

Support for minor tweaks to a project description - i.e. project configuration management - is nearly essential. Something like inheriting a project and overriding various features would be a reasonable way to go. This would allow 'aspected' configurations of a project (with aspect-oriented programming) or to override where error logs go for a particular run of a mini-project designed for exactly that purpose.
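A minimal model of inherit-and-override project configuration might look like the following sketch; the field names and wiki:// paths are invented purely for illustration:

```python
# Sketch of project configuration management by inheritance: a derived
# configuration names its base project and overrides individual
# settings. Field names and paths are assumptions, not the proposal.

PROJECTS = {
    "BaseService": {
        "components": ["WebFrontEnd", "Storage"],
        "error_log": "wiki://BaseService/ErrorLog",
        "type_checks": "strict",
    },
}

def derive(base_name, **overrides):
    """Copy the base project's settings; overridden fields win."""
    config = dict(PROJECTS[base_name])  # shallow copy of the base
    config.update(overrides)
    return config

# A mini-project that only redirects the error log for one run:
debug_run = derive("BaseService", error_log="wiki://DebugRun/ErrorLog")
print(debug_run["error_log"])    # overridden
print(debug_run["type_checks"])  # inherited from the base
```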

Project test support, especially with feedback. Something similar to a Read Eval Print Loop would be very useful, just as it has proven to be in many other IDEs - allowing immediate tests of fragments of projects. Unit Tests for fragments of code would also be a major augmentation to project-specific 'test' scenarios.

Support for running projects and interacting with them, ideally in some non-volatile manner (since volatile activities aren't very 'Wiki'-like). This could include active instances of projects (be they services or modal applications) given Wiki Word names. This would suggest preference for a Programming Language that readily supports persistent and non-volatile processes with small runtime costs, such that they can easily be pulled out of persistent memory only when required, and farmed out to various servers when experiencing heavy use. This, in turn, suggests a language with an explicit Process Model - e.g. Actor Model, CSP, or Process Calculi - and possibly with First Class Processes (which would be useful if you can link them via Wiki Word names). Non-volatile instances of services and applications imply you could close the browser, go to the movies, go home, log back into the Wiki, open up the instance by some linked Wiki Word (even if it had been paged out to persistent storage in the meantime), and continue operation.

Support for debugging applications running in the Wiki Ide.

Support for packaging up projects and creating 'installers' for them that will allow them to run on various other systems - e.g. Wiki Ides owned by other people, on Mac, on Windows, on Linux, on bare metal, etc. This implies support for compiling to a number of different back-ends. The mechanism for running on the Wiki in which it was developed should, perhaps, be an instantiation of this package-support that is made readily available for all projects; packaging up for other systems might require reconfiguring the back-end services. (One can essentially treat an Operating System as a back-end in much the same fashion as bare metal - one describes how to access certain required services, even if that requires creating them on the fly.)

Support for automatic generation of throw-away Wiki names. These would be useful as the initial Wiki Word identifiers for instantiated projects and their associated services or applications. For example, when a user instantiates a project, it could automatically get a Wiki Word associated with the service/application that not only provides a handle for using it (e.g. go to that page to access the application, or send messages to that Wiki Word to interact with that service), but also would provide a handle for any other process information (start time, resource consumption, held capabilities, priority, etc.) and for managing it (kill it, give it an expiration date, redirect a more permanent Wiki Word to it, etc.). Users don't need to worry about naming things (e.g. they just instantiate a project and its throw-away identifier shows up under UserPage:ProcessInfo until such a time as they choose to explicitly destroy the process or remove all links to it). Wiki Ide-wide garbage-collection becomes possible based on removing processes that aren't referenced from a 'permanent' source. (Of course, there are other mechanisms for specifying collection properties... such as attaching an expiration-date and/or using a Wiki Word to identify that a page should be collected.) Users may even control, to a small degree, who else can see the process... but as a security measure, capability-security is a better choice.
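The throw-away-name-plus-collection idea can be sketched in a few lines; the naming scheme, process-info fields, and reachability rule below are invented assumptions, not a specification:

```python
# Hedged sketch: generate throw-away Wiki Words for instantiated
# processes, then reap any instance not reachable from a 'permanent'
# link - a trivial stand-in for Wiki-wide garbage collection.

import itertools

_counter = itertools.count(1)
instances = {}           # throw-away name -> process info
permanent_links = set()  # names referenced from permanent pages

def instantiate(project, user):
    """Create an instance and give it an automatic throw-away name."""
    name = f"{project}Instance{next(_counter)}"
    instances[name] = {"project": project, "owner": user}
    return name  # would show up under the user's process-info page

def collect():
    """Drop every instance with no permanent reference."""
    for name in list(instances):
        if name not in permanent_links:
            del instances[name]

a = instantiate("WikiGame", "AliceUser")
b = instantiate("WikiGame", "BobUser")
permanent_links.add(a)  # Alice pins hers with a permanent Wiki Word
collect()               # Bob's unreferenced instance is reaped
print(sorted(instances))
```

A real Wiki Ide would also have to honor expiration dates and held capabilities before reaping, as noted above.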

Refactoring Browser for source, Object Browser for projects and especially active values and services. These are 'views' of the Wiki in a sense, but to support particular utilities - e.g. renaming a particular Wiki Word across the entire Wiki.

Properties required as a Collaborative Web & Code Services platform:

Access to the running services by a range of protocols. E.g. grabbing port 80 and 443 for HTTP (and maybe something like 2280 for rescue - see Runtime Upgradeable Core), but also additional ports to support non-HTTP protocols... e.g. something to trap and support WSDL or SCA or CORBA or fast, trusted binary communications with supported services, each with dedicated ports and each built and loaded from within the Wiki Ide. Keep in mind that the Wiki Ide services are named by Wiki Word pages - i.e. the 'URI' references running services/'objects' far more than the port number.

Safety and Security concerns must be fully addressed. A Wiki Ide could be used for some phenomenal cross-site scripting attacks, as a platform for negative operations (e.g. spamming), and is subject to programmable viruses. There are people of questionable motives who can and will attempt to inject malicious code for fun or profit. Frankly, the Wiki Ide host cannot reasonably allow for fully anonymous access to the Wiki Ide for purposes of creating or running projects - there are likely to be legal issues that arise should the Wiki Ide be utilized to some nefarious purpose. On a per-project basis, also, one needs to be careful that 'bad' code not be mixed into the good stuff. See Assume Good Faith Limitations.

It needs to be made very easy to control access to the Wiki Ide - in particular, to remove those who might be considered 'vandals', and possibly to legally prosecute or track those who have been intentionally causing (serious) harm. (Even the threat of exposure or banning would be a large deterrent from inside-job attacks, and would also help deter license violations and other misbehavior.)

Integrated use of a Web Of Trust: ability for projects to list 'trusted' parties and have these 'trusted' parties sign off on particular versions of linked pages (with such signatures entering a common 'tags' database). At build-time, then, only the most recent 'trusted' versions of pages will be integrated with the project. Note: it should be possible for projects to 'trust' parties who aren't part of the project team. I.e. I could say that my project trusts any page that some other project trusts, or any page some particular person who isn't on my project has signed off on, which allows me to be a lot lazier about trying to figure out which versions of which pages I actually trust (there isn't a lot of value in massively duplicating effort!). This Trust Web would vastly reduce instances of 'malign code' entering a project.
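One plausible build-time trust resolution, including the 'trust whatever that other project trusts' delegation, is sketched below. The signature database, trust-list format, and 'project:' delegation syntax are all invented for illustration:

```python
# Sketch of Web Of Trust resolution at build time: a project trusts a
# set of signers (possibly transitively, via another project's trust
# list), and the build picks the newest version of each page signed by
# any trusted party. The data model is an assumption.

signatures = {  # page -> list of (version, signer)
    "SortUtils": [(3, "AliceUser"), (5, "MalloryUser"), (4, "BobUser")],
}

trust_lists = {
    "MyProject": {"AliceUser", "project:OtherProject"},  # delegated trust
    "OtherProject": {"BobUser"},
}

def trusted_signers(project, seen=None):
    """Expand a project's trust list, following delegations once each."""
    seen = seen or set()
    signers = set()
    for entry in trust_lists.get(project, ()):
        if entry.startswith("project:"):
            other = entry.split(":", 1)[1]
            if other not in seen:
                seen.add(other)
                signers |= trusted_signers(other, seen)
        else:
            signers.add(entry)
    return signers

def trusted_version(project, page):
    """Most recent version of the page signed by a trusted party."""
    ok = trusted_signers(project)
    versions = [v for v, s in signatures.get(page, ()) if s in ok]
    return max(versions) if versions else None

print(trusted_version("MyProject", "SortUtils"))  # → 4
```

Note how Mallory's newer, unsigned-by-anyone-trusted version 5 is simply never integrated.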

Sane license control without any hidden gotchas - everything needs to be damn obvious to a programmer. The easiest way to handle this is for the host of the Wiki Ide to demand that software be provided with a particular license (creative commons, gpl, etc.) or as public domain, with purpose of guaranteeing legal interoperability. But sweeping rules tend to cause problems. Perhaps better: most stuff, excepting objects under a Project Page's namespace, a User Page's namespace, and a License namespace, must use a common (very open) license - perhaps disclaiming patents and such, but available for commercial use and compatible with everything. Then stuff in a Project Page's namespace could be licensed on a per-project/per-page basis. And stuff under a page/namespace like 'Gnu General Public License' would be more obvious in its licensing (leaving only the question of version 2 vs version 3). Users would know when they need to be careful about license stuff because they'd be using namespaces that would appear like they are peeking into somebody else's code or appear like license names. Thus, it would be less 'hidden'.

Of course, figuring out how to properly apply, say, the Gnu Lesser General Public License to a Wiki Ide would still cause headaches. Does it mean that commercial systems can only link the final 'object/service' and not the code?

It wouldn't hurt if some reflection could automatically compute whether a project utilizes any conflicting licenses, and to automatically report the license(s) under which a project-as-a-whole is provided. Brings a whole new meaning to Type Checking, eh?
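That license 'type-check' could be as simple as a pairwise compatibility table over the license tags on a project's pages. The sketch below is a toy - the table is deliberately tiny, the tag names are invented, and none of it is legal advice:

```python
# Toy sketch of 'license type-checking': each page carries a license
# tag, a small compatibility table says which pairs may be combined,
# and a project report flags conflicting pairs. Illustrative only.

from itertools import combinations

COMPATIBLE = {
    frozenset({"PublicDomain"}),              # PD with itself
    frozenset({"PublicDomain", "GPLv3"}),
    frozenset({"GPLv3"}),                     # GPLv3 with itself
    # GPLv2-only paired with GPLv3 famously conflicts; omitted on purpose.
}

def conflicts(page_licenses):
    """Return every pair of pages whose licenses can't be combined."""
    bad = []
    for (p1, l1), (p2, l2) in combinations(page_licenses.items(), 2):
        if frozenset({l1, l2}) not in COMPATIBLE:
            bad.append((p1, p2))
    return bad

project = {"CorePage": "GPLv3", "UtilPage": "PublicDomain", "OldPage": "GPLv2only"}
print(conflicts(project))  # pairs involving OldPage are flagged
```

Reporting the project-as-a-whole license would then be a fold over the same table.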

Culturally, use of deviant licenses for code should be discouraged on the Wiki Ide through peer pressure. Individual licenses would tend to create rifts across which projects cannot communicate, and aren't a Good Thing relative to the Wiki Way. However, there are also costs of not allowing programmers control over their own creations, so a decision will need to be made by the wiki hosts on the involved requirements.


Some ability to extend this set of supported protocols from within the Wiki (e.g. adjusting a Wiki Page redirect that determines the Web Server in current use) would be an advantage... e.g. one could add (from within the Wiki Ide) support for advanced Object Browser and debugger support that needs higher bandwidth or more of a publish/subscribe approach (should HTTP be deemed insufficient).

Support for distributed services, parallel operations, leveraging server-farms; distributed Wiki Ide

Transactions support

In many ways, a Wiki Ide is nearly a full-blown operating system. This is hardly a unique property to Wiki Ide, of course (a great many IDEs, such as those for Lisp, Smalltalk, Ruby, Erlang, etc. are essentially operating systems minus the device drivers). In any case, if we have a Wiki Ide written in Wiki Ide such that it can be repackaged for bare metal, it wouldn't be a bad idea to actively explore the possibility of running it on bare metal using a variety of Wiki Word services for hardware-access and resource management as well as the browsers themselves. Perhaps it could be reversed, and OS ideas could be added to flesh out the Wiki Ide (see New Os Features, Object Browser).

Aside from being an interesting curiosity, what would a Wiki Ide give you that couldn't be achieved equally well (or better - editing code in a browser textbox sounds grim) with the combination of conventional Wikis, Web-based forums, email mailing lists, or other communications tools used with conventional IDEs and a source control management system?

In what way do "conventional IDEs and source control management systems" use the schema of hyperWords and the power and diversity that their linking and architecture provide? What is so grim about editing text in a textbox? It is easy, as this wiki demonstrates. Perhaps there are social and personal issues to be overcome which make implementation of a Wiki Ide more difficult than the technical issues which it presents. It seems to me that this concept is more than a simple curiosity, and that it is part and parcel of a new wave of thinking and implementation which makes the workplace mobile and virtual rather than fixed and physical. -- Donald Noyes.20080227.1613.mo6

What "power and diversity" does a "schema of hyperWords" and "their linking and architecture" provide?

This is adequately exposed in the proposal starting at the top of the page with "The virtues one would expect of any modern Wiki" and ending with "the entire wiki" (1400 words)

Aside from being an interesting curiosity, what would a Wiki Ide give you that couldn't be achieved equally well (or better - editing code in a browser textbox sounds grim) with the combination of conventional Wikis, Web-based forums, email mailing lists, or other communications tools used with conventional IDEs and a source control management system?

A page named Programming In Wiki discusses potential benefits. Editing in a textbox isn't so bad - I'm sure a little AJAX support could even give you syntax highlighting. Instead of anything technical, I'll perhaps offer some snapshot User Story(ies) and User Experience(s). Feel free to comment. Feel free to contribute if you can come up with something - taking ownership here might close discourse too much. Phrase your own in first person, if you wish.

I'd love to be able to program in a web-browser, from anywhere, and get real work done. I hate being tied to the computer that has installed upon it the expensive commercial IDE. I hate even more attempting to pipe a virtual desktop from one machine to another, which invariably results in delays (even worse: unpredictable delays) between action and response.

Explicit management of source code and revision control is a pain. Even after working with some of the best free alternatives (e.g. SVN), I'm left envious of the simplicity offered by modern Wikis. In Media Wiki, I can just click 'History' tab for any page and view past versions thereof. I want that degree of version-control to be part of my IDE, and available over the web. I'm certain this could be done in some of the nicer existing IDEs, but I don't want to have to think about it or do extra work for it to be there.

At the moment, to do serious work, I often need to have a bunch of different communication sources open or available at once: e-mail, bugzilla, the IDE, possibly IRC or telephone. I'm not particularly keen on adding web-based forums or conventional Wikis to the list. These are problems I want vanished:

One problem is that none of these are truly integrated with one another - I can't search the forums from my e-mail browser. Even where one can provide hyperlinks (e.g. e-mail linking to forums), these must be written explicitly (no Automatic Link Generation) and there are no automatically generated reverse-links, which are rather nice as features go.

Many other components (aside from content) are not integrated, either; consider management issues:

any security policy I attempt to apply to the e-mail is not automatically applied to the forums, the version control, etc. With Wiki Ide, I could demand capability-security to see or modify or interact with certain pages - just one feature to secure it all.

any insurance on the backup of the intellectual labor involved in them goes right out the window: phone stuff is lost anyway, but e-mail, IRC, bugzilla, the forum, the source control, the uncommitted source on user machines, etc. all have independent backup-policies from the wiki. Keeping things 'centralized' management-wise would be a huge advantage here, even if the policy is to securely distribute the wiki to dozens of machines with farms of runtime servers.

Another problem: this panoply of sources has a high resource cost - if I'm going to work somewhere (e.g. while on travel, away from my office machine) I need to (a) get the e-mail working, (b) forward my phone number to a cell, (c) ensure I have a machine with the expensive IDE, (d) ensure my machine has the right version-control software installed, (e) ensure I have copies of any critical e-mail informing me of what I should be working on, etc.

I'd be delighted to have the ability to link bugs and requests directly to the appropriate parts of the code. Not only could I peruse the bugs and dive directly into coding, but I could use reverse links: I can take a piece of code, click on its Wiki Word, and find every mention of it in the Wiki Ide's integrated-equivalent to Bug Zilla, discussion pages, etc. With a proper Object Browser, I could even limit the results to just those pages with a 'To Do' tag or whatnot.

I'd rather enjoy seeing a group combining the notions of Wiki Pedia and Source Forge and creating one ginormous Wiki Ide for anyone and everyone to create Open Source projects. It's actually things of this scale for which I believe Wiki Ide would be best - though, as mentioned, good per-project source control regarding which versions of other code are being integrated locally is of high importance. Currently Source Forge is a bunch of almost entirely independent projects; this would change the formula entirely: projects would be integrated by default, and explicit per-project version-control management would be the only thing that allows them to diverge.

I want to be able to use Wiki Ide as a mashup-maker, a lot like IBM's Qed Wiki. I.e. I just create a project-spec that integrates algorithms, code, other services, etc. either already available on the Wiki Ide or through Wiki Words capable of integrating external sources. I put these together in one location, press a button (or not), and voila! a new service is available, running on some hidden location on the Wiki Ide's integrated server-farm (possibly duplicated and distributed worldwide).

Current IDEs are mediocre at best at handling distributed-programming projects - a truth even when leveraging a language designed for distributed programming (like Erlang). One often needs to port code or binaries and fire up services, by hand, on different machines - often a variety of different machines - to perform a test. If you've never done it, take my word for it: it's a royal pain. With some effort, one can start automating the shifting of code and services to different (often virtual) machines as part of a distributed test, but even then running any sort of debugger is a herculean task. I want this to be easy: I want an IDE that can integrate with whole farms of machines, if needed, and run dozens or hundreds of test instances. This would hardly be unique to Wiki Ide... but for a Wiki Ide to really work on a large scale, it would be a de-facto required feature. To make it beautiful, it'd just need to be implicit.

A place like Source Forge doing the Wiki Ide thing won't be able to freely give away arbitrary computation resources (space, cpu) - i.e. they could handle some limited amount per user and per project, but with massive farms of servers, people will want to run some computation-intensive concurrent programs... like full-blown WorldOfFreecraft games (for which nothing is charged to regular players, though renting GuildHalls and advertisement space and such). Wiki Ide could become a tool for me to sell further computations, since it so readily integrates such things as user-pages, security, project administration, capabilities and such... one could use it to represent non-repudiable capabilities and service-contracts that the server-farm can respect. Alternatively, someplace like Source Forge could provide the Wiki Ide themselves but allow independent server-farms to compete for these contracts - and they would; if the bar was set low enough and security set high enough, companies would even start siphoning off extra computer cycles on their work machines. Everyone wins. Computation and communication - even extreme computation and communication - would suddenly become available (cheaply, and very readily) to anyone with a web-browser, a decent education, and a wallet that could pay for any modern computer. As the Wiki Ide would be pretty much on the big servers instead of on the machine you're typing into, you could even get this power from a cell-phone. The ability to perform extremely intense computations in short bursts without paying for a server-farm would be a major boon to a wide variety of industries.

The ability to run final services in what is essentially a permanent IDE has its own considerable advantages. The services can be updated automatically as the connecting Wiki Words and services are updated. The Wiki Ide can track and recompile and re-optimize pieces that have dependencies that change. I can debug my services at any time I wish - e.g. open windows to hook into the state and messages in order to track things.

This rationale makes sense. The justifications [at the top] didn't convince me, - (Technically, the stuff at the top was a list of properties, not a justification for desiring said list of properties. But Donald Noyes has before expressed himself of the opinion that if hyperWord technology is built, justifications will later be found. He's probably correct.) - but this comes closer. However, I spend much of my day quite happily working in the Eclipse Ide, I grit my teeth when I have to edit in a browser textbox, and I've written a fair whack of browser-side Java Script, so I have a very hard time imagining that a browser textbox can provide anywhere near the same speed and functionality, even with (for example) a Tiddly Wiki-like festoonment of Java Script-based enhancement. I suspect the idea will be much more viable (as far as I can see, at least) when browser capability is considerably more advanced than it is now. I'm still skeptical, but I'm starting to like the idea, and would be very interested in exploring the minimal infrastructure necessary to support the essential attributes of a Wiki Ide. Toward that end, I'd be happy to collaborate on creating a Spike Solution, and can donate some server resources to trying it. If adequately designed, I believe it should be possible, over time, to evolve the spike into the system described above entirely via the Wiki interface.

Thinking Out Loud here... I would strive to make the initial kernel, if it can be called that, as minimal as possible, with the goal of moving into a purely Wiki-based editing environment as quickly as possible. This brings to mind the Forth Language; it can be ported to a new platform and bootstrapped by implementing only a tiny handful of low-level mechanisms. I would hope a Wiki Ide would be the same. -- Dave Voorhis

On AJAX and Grim Textbox Interfaces

It wouldn't be impossible to enhance the browsers, too, just to make things easier... i.e. have a Firefox extension for the highlighting or 'improved' view of the Wiki, even if one can get by with the default view. And, frankly, AJAX is something of a weak solution in the long run; if we get this huge Wiki Ide running on some awesome language of Wiki Words and such, I can't imagine it would be impossible to create a client-side 'scripting' language that simply hooks back into the Wiki Ide, and eventually use it to replace the "text/java-script" stuff. (Providing browser plugins is relatively easy... the Wiki Ide could even host them so you can download them wherever you go.)

I've been studying up on AJAX and CSS, thinking about syntax highlighting and mechanisms for implementing a 'sweet' update interface. I'd like to see the following on the Edit Page:

No Textbox. There are wikis where I am able to double-click at some location on a page and just start typing, and Wiki Ide should do the same. AJAX takes care of delivering the edits. After I start editing, other people should be able to see the edits in a reasonable period, possibly see that the area is being edited, and the versioning software should be able to very readily integrate several different users working on different sections of the same page. Versions are provided, and the versioning software must be designed for efficient versioning of micro-edits. Undo-capability should also be providable via the browser, but is undo-forward (meaning that the 'undo' is actually reflected in the history as well). Collisions, which should be reduced in probability due to the grain of the versioning and the software's ability to merge most edits, will just be handled by further editing. The history-view can collapse micro-edits temporally into large macro-edits.
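The merge behavior described above can be modeled very simply if edits are tagged with the section they touch: disjoint-section edits merge trivially, and same-section edits surface as collisions to be resolved by further editing. This is a deliberately tiny sketch with an invented data model:

```python
# Sketch of merging concurrent micro-edits: each edit names the page
# section it touches; edits to distinct sections merge automatically,
# while same-section edits are reported as collisions.

def merge(base_sections, edits):
    """base_sections: {name: text}; edits: list of (user, section, text).
    Returns (merged_sections, collisions)."""
    touched = {}      # section -> first editor to touch it
    collisions = []
    merged = dict(base_sections)
    for user, section, text in edits:
        if section in touched:
            collisions.append((touched[section], user, section))
        else:
            touched[section] = user
            merged[section] = text
    return merged, collisions

base = {"Intro": "old intro", "Body": "old body"}
edits = [("AliceUser", "Intro", "new intro"),
         ("BobUser", "Body", "new body"),
         ("CarolUser", "Intro", "another intro")]
merged, collisions = merge(base, edits)
print(merged["Intro"], "|", merged["Body"])
print(collisions)  # Carol collided with Alice on Intro
```

A real implementation would diff at a finer grain than whole sections, but the collision-versus-merge decision has the same shape.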

It seems that even Flex Wiki's double-click-to-edit isn't what I'm looking for, which is the ability to edit inline without opening a textbox.

Flex Wiki captures clicks and forwards one to the proper Wiki Edit cgi with the given Wiki Page. Ultimately, this allows for a very efficient click-anywhere-to-edit, but has its costs - e.g. it doesn't track -where- you clicked, so your cursor isn't at the correct position immediately after clicking.

For inline-editing, one needs to use AJAX to essentially capture and send the updates... and also to report the locations on a page that other people are editing inline, and also to do a certain degree of collision management. However, these features require (fundamentally) the ability to identify the location -where- the user is attempting to edit.

I am not sure how much information can be captured regarding mouse location or highlighted area. At the very least, the div-identifier can be captured for editing purposes. I'll continue studying AJAX programming patterns until I come up with a practical approach to this.

It seems that this activity is nigh-trivial in a Gecko browser. One simply uses domObject.selectionStart and domObject.selectionEnd (e.g. as part of a doubleclick). For a simple double-click without selection, selectionStart and selectionEnd are the same. Microsoft's IE is more difficult.

The most fundamental rule for inline editing seems to be that absolutely no characters or codepoints be added to or removed from the view of the source. Every character that is part of the source MUST be visible to the user. There can't be any equivalent to the 'apostrophe' being used for italics or boldface, and there must not be any equivalent to Wiki Pedia's link format or {{template formats}}. The display may add characters, but these characters must not appear (to the user OR to the inline-editing tools) as part of the source. For annotations, one can use floating hover-text, popups, underlining, bold-facing, font-choice, font coloring, semi-transparent text that can't be edited, et cetera.

I believe that Wiki Ide may be better at fulfilling these rules, as the source IS the real content. (In a regular Wiki, the content is the formatted text, not the source, so displaying the source to everyone would be a pain.)

Be able to highlight a chunk of code and select an option to 'reserve' a section of a page (e.g. a block of lines) for editing. Other users can see that this section is being edited by you via shading or bracketing (and some AJAX). This reservation should be time-limited - if unedited for a few minutes, the current version would be saved (albeit not checkpointed) and the reservation dropped.

Chat-capability should automatically be added to the reserved part of the page, such that other people can right-click on that section of the page, raise a dialog, and begin to actively converse with the author without interrupting the code editing... and everybody else with edit-rights should be able to see ongoing conversations so they can decide whether to join in. Automatically saves conversation to the code page's respective Discussion Page, complete with signatures and times.

Just because I have it reserved doesn't mean other people can't still edit it (and, e.g., fix a spelling mistake or missed parens); it is primarily a courtesy capability and a communications support capability.
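The time-limited reservation could be as simple as a last-edit timestamp checked on access. In this sketch the timeout value, field names, and save-on-expiry behavior are all assumptions:

```python
# Sketch of time-limited section reservations: a reservation records
# who holds a section and when they last edited; stale reservations
# expire automatically on the next lookup.

import time

TIMEOUT = 180.0  # seconds without an edit before the reservation drops

reservations = {}  # (page, section) -> {"user": ..., "last_edit": ...}

def reserve(page, section, user, now=None):
    reservations[(page, section)] = {"user": user,
                                     "last_edit": now or time.time()}

def holder(page, section, now=None):
    """Current holder, or None if the reservation has gone stale."""
    r = reservations.get((page, section))
    if r is None:
        return None
    if (now or time.time()) - r["last_edit"] > TIMEOUT:
        del reservations[(page, section)]  # save-but-don't-checkpoint here
        return None
    return r["user"]

reserve("SortUtils", "lines 10-40", "AliceUser", now=0.0)
print(holder("SortUtils", "lines 10-40", now=60.0))   # AliceUser
print(holder("SortUtils", "lines 10-40", now=400.0))  # None - expired
```

Since the reservation is a courtesy, other users' edits would bypass it entirely; it only drives the shading/bracketing display and the attached chat.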

Syntax highlighting: Web Server provides underlying divisions ('div' items with 'class' properties) indicating syntactic components (keywords, operators, top-level components). Web Server takes preference-specifications from the User Page to provide a default CSS, but users can also utilize their own CSS to display these. The underlying divisions are not considered part of the source. In fact, purely display properties should rarely ever be part of the source (since this Wiki is based on the semantics of the content). However, a language that provides an arbitrary-annotations capability could use it to specify certain purely display properties (whether or not they are discouraged). Essentially, the DOM/div components would contain annotation-information as values, identifying certain pieces of code as being 'keywords', 'operators', or even as evaluating to a known value at the time of last parse.
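As a sketch of the div-with-class scheme above: the server's parser emits classed elements purely for display, while the stored source remains the raw token text. The token format, class names, and CSS below are all hypothetical:

```python
import html

def render_tokens(tokens):
    """Wrap each (text, syntactic_class) token in a classed element.
    The markup is display-only and is never part of the stored source."""
    out = []
    for text, cls in tokens:
        out.append('<div class="wiki-%s">%s</div>' % (cls, html.escape(text)))
    return "".join(out)

# A user's preference page might translate into default CSS like this:
DEFAULT_CSS = """
.wiki-keyword    { font-weight: bold; color: #00f; }
.wiki-operator   { color: #808; }
.wiki-annotation { opacity: 0.5; }  /* semi-transparent, non-editable annotation text */
"""
```

Because the classes carry semantic information ('keyword', 'operator', 'known value at last parse'), users can restyle them freely with their own CSS without touching the source.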

Potentially Tiddly Wiki navigational style of 'linking' between words - i.e. multiple words can appear on one page (using AJAX) and can be edited in this form - especially for 'small' words (since this makes 'small' words a lot easier to use in practice). Of course, I'd still want the ability to easily edit from multiple tabs, so perhaps capturing something like MOD_ALT+onClick would be good for opening a local division/frame. This would also be nice for Progressive Disclosure.

If the page is in a 'bad' state that won't parse or typecheck, and is left that way for long enough, the Wiki Ide server should go ahead and let the editors know (via squiggly lines and hover-text, for example) where the code appears to be in a bad state.

Suggestions and Code Completion: This requires a server-side capability to take a cursor position and a page and send back a list of suggestions... and should happen based on user-prefs. This can be done with Ajax in much the same way as spelling-errors can be highlighted. Doing it efficiently can come later; it requires only that the server itself keep track of apparent edit-sessions and maintain an index to support suggestions. Hover-text based on automatic annotations of code would also be useful and doable.
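The server-side call described above (cursor position + page in, suggestion list out) can be sketched with a sorted-identifier index; the `SuggestionIndex` name and the identifier-fragment heuristic are assumptions, and a real server would maintain the index per edit-session:

```python
import bisect

class SuggestionIndex:
    """Completion over a sorted list of known identifiers."""
    def __init__(self, identifiers):
        self.ids = sorted(set(identifiers))

    def suggest(self, page_text, cursor):
        """Given page text and a cursor offset, return completions for the
        identifier fragment just before the cursor."""
        start = cursor
        while start > 0 and (page_text[start - 1].isalnum() or page_text[start - 1] == "_"):
            start -= 1
        prefix = page_text[start:cursor]
        if not prefix:
            return []
        lo = bisect.bisect_left(self.ids, prefix)
        out = []
        for name in self.ids[lo:]:
            if not name.startswith(prefix):
                break
            out.append(name)
        return out
```

The AJAX side would simply POST the page name and cursor offset and render the returned list in a dropdown, exactly as spelling suggestions are handled.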

Checkpointing Pages: Simply because a page was last edited does NOT mean it starts getting compiled into projects. Pages must be checkpointed by users (or robots). Checkpointing a page involves (albeit probably not strictly - i.e. use warnings, not restrictions):

ensuring it parses,

ensuring it typechecks (or at least doesn't have any obvious internal type-errors),

ensuring any top-level assertions that don't depend on provisionals or aspected constructs pass their tests,

running any other by-hand unit-tests or code-proofs one might wish to perform,

inspecting the code (or at least the changes since the last trusted checkpoint) for any malign code injections.

Potentially 'signing' the checkpoint with a Public Key (stamp of approval) - most likely via use of a 'tag' property that gets dropped into a Wiki Ide database. This is the basis for a Web Of Trust, where a project can choose to use the most recent checkpoint signed by someone it trusts, which was mentioned as a required feature (above) due to Assume Good Faith Limitations. The existence and support for signed and stamped checkpoints, along with namespaces where necessary (for branching), should also reduce (though not eliminate entirely) the need to 'lock' pages or have bots maintaining them against the few random malicious arses that humanity has to offer...

Potentially tagging the checkpoint with other properties or predicates such that you can easily find it later as part of a query.
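The checklist above amounts to a warnings pipeline: each check that fails produces a warning rather than blocking the checkpoint. A minimal sketch, assuming hypothetical stand-ins (the `page` dict and the lambda predicates would really be the parser, typechecker, assertion runner, and code inspection):

```python
def checkpoint_warnings(page, checks):
    """Run each (name, predicate) check over the page; collect warnings, never refuse."""
    warnings = []
    for name, passes in checks:
        if not passes(page):
            warnings.append("checkpoint warning: %s failed" % name)
    return warnings

# Hypothetical checks, in the order listed above:
CHECKS = [
    ("parse", lambda p: p.get("parses", False)),
    ("typecheck", lambda p: p.get("typechecks", False)),
    ("top-level assertions", lambda p: p.get("assertions_pass", True)),
    ("malign-code inspection", lambda p: p.get("inspected", False)),
]
```

A checkpoint with outstanding warnings could still be signed, but the warnings would be visible to anyone deciding whether to trust it.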

Starting a 'Checkpoint' of a page sort of 'freezes' a view of the page and prepares it for a rapid edit-compile-test cycle to get it into a 'stable state', mostly clearing out silly logic errors that invariably occur (where "compile" really means "ensure it parses and typechecks") - what one might call a 'pre-checkpoint' state that is initially owned by the programmer who started the checkpoint.

Because Checkpoints are also for security purposes, it must be possible to forbid other (particular) people from attempting to edit the pre-checkpoint page.

However, to keep things convenient, it should be possible for other editors to join in on the edit, and thus it must be possible for the original pre-checkpoint author to allow others to join the edit (e.g. to see a dialog listing those interested in helping, and anyone with edit privileges can let anyone else join).

It should be possible to identify when somebody is in the act of checkpointing a page, thus to avoid accidental branching of the page. Perhaps 95% of the time, people will be willing to just let someone else finish their checkpoint before getting back to editing.

It should be possible for programmers sick of waiting on the checkpoint to finish to branch the page and just start editing again. This makes the 'checkpoint' a sort of 'side'-branch to the main page. Fortunately, the 'checkpoint' (once marked as completed) is a 'fixed' entity that will no longer see edits, so it isn't a 'true' branch. Unfortunately, it does mean that any changes made during the pre-checkpoint cycle will need to be ported back to the 'main' branch by hand (a merge-tool would help).

True (editable) branching must be discouraged. Branching is extremely difficult to handle elegantly in a collaborative environment such as a Wiki Ide. In particular, problems arise when attempting to figure out to which branch a particular Wiki Word should lead. Besides, branching is somewhat Anti Social, a violation of the Wiki Way, and tends towards anti-integration of projects. (See Mana Mana for a discussion of some of the problems, and as a demotivating example regarding the value of branching.)

I suggest that 'true' branching be limited to explicit efforts performed by, say, copying a particular version of a common page into a project's namespace then specifying to the linker (via the Project Page) that the specified page is to be preferred whenever a particular Wiki Word is to be linked. This has the Social Engineering effect of discouraging branching, since it isn't the laziest thing, while avoiding the political problems that come with an inability to branch when necessary. There is, of course, still a large cost of 'splitting' or 'combining' a page, but that's where access to backlinks and dependencies-lists is important.

Most of the time the checkpoint would appear in the trunk, uninterrupted by any eager programmers (just like in Wiki Wiki, I anticipate that most of the time there is only one user interested in performing an edit on a given page, minus the tendency to pounce onto Recent Changes; in that case, the 'courtesy' notice that someone is running a checkpoint, plus the possibility of joining the checkpoint effort, should avoid most branching). In those cases of 'branching', what it might look like (in the history) is a simple 'checkpoint' followed by somebody rolling-back the edits and initiating new edits. In a sense, use of checkpoints creates 'macro-versions' as distinct from 'micro-versions'. These macro-versions are the ones used for code, while micro-versions are for editing between macro-versions.

Special tags and predicates, kept distinct from content, for associating extra metadata with a page. I.e. instead of dropping a 'Category' into a page as content, you simply 'attach' it to the page. These go directly to a Wiki Ide-wide propositional database (like Data Log) and are usable (along with the stuff -inside- the pages) in queries, providing a fundamental approach to complex reflection and semantic-web association within the Wiki Ide. Keeping tags separate from content makes it much easier to:

tag 'fixed' pages, including particular or 'old' checkpoints as well as 'object'-pages that represent actual process and service objects.

use tags to sign versions of a page

create tags that associate two or more pages directly (PredicateOver(ThisPage,AndSomeOther) - appears in the 'tags' list of three pages: 'PredicateOver' (or possibly 'WikiTags:PredicateOver'), 'ThisPage', and 'AndSomeOther').

Tag components of the Wiki as 'deprecated' or perhaps tainted even while they are in use

User Story: If a patent violation or security risk is identified for a particular checkpoint of a page, it would be nice to mark it as 'tainted' then be able to ask, in a single query, for ALL 'tainted' project/objects currently running on the Wiki, plus for the e-mail address of the 'administrator' for that object/service, in order to deliver a mass e-mail regarding its ultimate shutdown or reset (e.g. 'at the end of the week, unless you take care of it first').

Access to this Data Base must be available as a service within the Wiki Ide so that the above User Story can be implemented as a service within the Wiki Ide.
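The 'tainted' user story can be sketched against a Datalog-style triple store. Everything here is invented for illustration (the `TagDb` class, and the `tainted`, `runs_checkpoint`, and `admin_email` predicate names), but it shows the shape of the single query the story asks for:

```python
class TagDb:
    """Wiki-wide propositional database: facts are (predicate, arg, ...) tuples."""
    def __init__(self):
        self.facts = set()

    def tag(self, predicate, *args):
        self.facts.add((predicate,) + args)

    def query(self, predicate):
        """All argument-tuples recorded under a predicate."""
        return [f[1:] for f in self.facts if f[0] == predicate]

def tainted_admin_emails(db):
    """E-mail addresses of administrators of every running object built
    from a tainted checkpoint - the mass-notification query above."""
    tainted = {cp for (cp,) in db.query("tainted")}
    running = db.query("runs_checkpoint")          # (object, checkpoint) pairs
    admins = dict(db.query("admin_email"))         # object -> e-mail
    return sorted({admins[obj] for (obj, cp) in running
                   if cp in tainted and obj in admins})
```

Because tags attach to fixed checkpoints rather than to page content, marking one checkpoint 'tainted' immediately reaches every service still running on it, without editing any page.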

Ability to rapidly select the history page, see the recent checkpoint page, see the exported 'values', go to the 'edit'-version of the page (between checkpoints), go to a 'data' page to see all Wiki Ide-global explicit predicates & tags attached to the page (or a particular version thereof), quickly find ALL direct backlinks, quickly find ALL dependencies (everything a checkpoint of the page actually or potentially depends upon, each marked potential vs. actual, along with degree of indirection, possibly represented as a graph), and find ALL pages that depend or potentially depend upon a particular checkpoint of a page (again with degree of indirection, potentially represented as a graph).

Ajax and CSS are powerful enabling technologies for this sort of stuff, and I have no doubt that ALL of the above can be done - if not right away, then as the Web Server is upgraded from within the Wiki Ide (I'm aiming for a Runtime Upgradeable Core). Actually, I bet that a good chunk of it could be done right away, and a good bit of that should be done right away, but I will not press for it. More advanced Object Browser or plugin-based approaches would allow for greater efficiency and flexibility and IO-support, but requirement for them should be kept away from the basic Web Server product, and should be utilized either through particular services (e.g. when running a page as a CGI) or over special ports (since creating services inside the Wiki Ide to handle inputs on special IP ports should also be possible). (i.e. make users log into special ports or hook into particular services running ON the Wiki Ide to get advanced stuff).

I'm currently split on the idea of interjecting a component or two (potentially Xslt Language) between the Web Server and the pages that has the general task of describing pages/URIs + a high-level 'purpose' as some sort of high-level Interactive Scene Graph, such that the Web Server acts as a 'backend' to compile these high-level Interactive Scene Graphs back down into HTML & Java Script & XMLHTTPRequest. On one hand: You Aint Gonna Need It (yet) - it would eventually allow for more flexible client-end support (especially regarding use of Object Browsers). On the other hand, it might actually be simpler to inject two high-level transforms (with Xslt Language or similar) than it is to write one low-level transform by hand, and coming up with this intermediate layer would offer a fine place to think about how to do it properly. I guess the choice will depend upon whether the Spike Solution has a webserver that supports XML pipelining, and whether it seems the necessary output (which has Java Script - but not necessarily very dynamic Java Script) is easily done with the available tools.

I hereby claim dibs on the name "Wikid", even though at least two projects (bastards!) have already (mis)used the name for unrelated things. :) -- Dave Voorhis

Maybe Wiki Heart? I tossed it out as a name for the Wiki Ide open-port bindings page (in Runtime Upgradeable Core), but it'd make a neat name for a Wiki Ide, and it would have the benefit of providing a clear project-name for updating the 'kernel' of the Wiki Ide from within the Wiki Ide (whereas 'wikid' sounds more like something you'd manipulate primarily from a unix command-line).

Wiki Heart is a bit too squishy and lovey for my taste, but maybe it will appeal to others. I can imagine the "I Wiki -- Dave Voorhis

KingdomWikiHeart et al., eh? It is a bit Disney-esque. Open Ide and Wiki Forge would also be good (ForgeCore?).

Willing to fight for the memetic space? Wiki Forge is in use, but is not trademarked.

In a word, no. I'd rather not get into a battle over naming, nor lose mindshare to divided attention. Something more unique, perhaps? -- DV

We have a fair chance against our 'competition', I'd think. I doubt we'd have a battle over naming, and there'd be no divided mindshare regarding the use of Wiki as a platform for programming.

WikiWankaAndThePageFactory (or just WikiWanka for short) -- Samuel Falvo

Ugh. But Wiki Factory isn't bad (though it sounds like some sort of Wiki Von Neumann Machine; OTOH, if we stick with the fully-bootstrapped approach - which I am not going to give up -, it would be fully capable of self-replicating). Focusing on alliterative forms: Wiki Workbench is good (though I'm leaning towards Wiki Forge)

Wiki Factory appears to be well-used. -- DV

Seems so.

I hope implementation won't have to wait until a particular is complete, lest this project be condemned to one of those infinite regresses where can't be built until it can run on which depends on which will only run on , which can't be written until is finished, and so on. (Or until it's proven that depends on ...) If it were me, I'd try to bodge together a quick-'n'-dirty minimal Spike Solution out of existing tools, use it long enough to hate it (but learn from it), then throw it away and start over properly. I.e., Build One To Throw Away. -- Dave Voorhis

Here's my Thinking Out Loud (written in some haste) re bootstrapping a usable Wiki Ide:

There are four fundamental subsystems:

The core.

This consists of, at minimum:

An HTTP server, consisting of:

The server binaries including executable file(s).

The server source code.

The server configuration files, configured to expose all source code in both subsystems, and to allow invocation of all scripts in all relevant subsystems.

Scripts to build, shut down, and restart the HTTP server.

Compilers and/or interpreters for all languages used in the system, including:

Source code to build the compiler or interpreter

Test scripts

Scripts to compile the source code to build the compiler or interpreter, and install the relevant binary files.

A Web-based text editor coupled with a versioning source control system (see below), configured to allow editing of any and all text files in the system, which works via the user's browser connected to the above HTTP server. Over time, this editor may be evolved by the users into the full Wiki Ide.

Source code and deployment scripts for the editor, as appropriate.

A versioning source control system, integrated with the Web-based text editor, such that any file saved by a user is recorded by the version source control system.

Source code to build the versioning source control system.

Scripts to compile the source code to build the versioning source control system, and install the relevant binary files.

A system of signed users to whom capabilities may be attributed. Not just anyone can be allowed to initiate a procedure that will send "rm -rf /" to the system.

Language and WikiIDE integrated support for easy use of capabilities; in particular, all actions should be performed with minimum capabilities, and users should be aware of which capabilities are going to be used when they execute a procedure or perform a build. Further, they should know which of their capabilities get 'embedded' (or 'signed') into services or procedures they create (and need tight control over it - ideally just those they add explicitly in the code AND give permission for in some dialog). This can't be made very annoying, of course, or we end up with something as bad as Vista - for the most part, procedures shouldn't require any special capabilities, or only use 'low security risk' capabilities.

We may wish to have a database of capabilities themselves, categorizing them by security risks, privacy risks, etc. This would help prevent annoying dialogs.
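The minimum-capability rule and the risk database above can be sketched together: only held capabilities are usable at all, and only high-risk ones trigger the confirmation dialog. The capability names and risk categories here are hypothetical:

```python
# Hypothetical capability -> risk-category database; uncategorized
# capabilities are treated as high-risk by default.
CAPABILITY_RISK = {
    "read_page":  "low",
    "edit_page":  "low",
    "run_build":  "medium",
    "shell_exec": "high",   # the "rm -rf /" class of capability
}

def authorize(user_caps, needed, confirmed=frozenset()):
    """Return (ok, needs_confirmation). High-risk capabilities must be both
    held AND explicitly confirmed; low/medium need only be held."""
    missing = [c for c in needed if c not in user_caps]
    if missing:
        return (False, [])
    needs_confirm = [c for c in needed
                     if CAPABILITY_RISK.get(c, "high") == "high" and c not in confirmed]
    return (not needs_confirm, needs_confirm)
```

Because most procedures would need only low-risk capabilities, the second element of the result is usually empty and no dialog ever appears - which is the property that keeps this from degenerating into Vista-style nagging.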

Primary heartbeat monitor.

This consists of, at minimum:

A collection of test scripts that verify correct operation of:

The core.

The secondary heartbeat monitor. (See below)

The primary heartbeat monitor shall continuously monitor the core and the secondary heartbeat monitor for new deployment of any of their components. When a change is deployed, the collection of test scripts will be executed. If any tests fail, the primary heartbeat monitor will roll back the changes (using the previous version of the source control system, if that is what was just modified) and reinstall the previous known-working version.
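One validation cycle of the monitor-test-rollback loop just described can be sketched as follows; the `subsystem` dict is a stand-in for real deploy/test/rollback hooks, and all names are assumptions:

```python
def heartbeat_cycle(subsystem):
    """Run one cycle: if a new deployment is detected, run its tests;
    on any failure, reinstall the last known-working version.
    Returns 'idle', 'ok', or 'rolled_back'."""
    if not subsystem["changed"]:
        return "idle"
    if all(test() for test in subsystem["tests"]):
        subsystem["known_good"] = subsystem["version"]  # promote to known-working
        return "ok"
    subsystem["version"] = subsystem["known_good"]      # roll back the deployment
    return "rolled_back"
```

The secondary heartbeat monitor (below) runs the same loop with the primary monitor itself as the watched subsystem, which is what closes the two-hearts safety cycle.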

Secondary heartbeat monitor.

This consists of, at minimum:

A collection of test scripts that verify correct operation of:

The primary heartbeat monitor.

The secondary heartbeat monitor shall continuously monitor the primary heartbeat monitor for new deployment of any of its components. When a change is deployed, the collection of test scripts will be executed. If any tests fail, the secondary heartbeat monitor will roll back the changes and reinstall the previous known-working version.

User space.

This consists of files that may be freely modified by users, and scripts that may be edited and executed by users. This provides the basis for constructing any and all applications and content unrelated to the core or the heartbeat monitors. No component of the core, or either heartbeat monitor, may be dependent on any component living in user space. However, components developed in user space may be copied to the core or either heartbeat monitor.

The fundamental constraints are as follows:

When a change of the core is deployed, no changes may be made to the heartbeat monitors until the primary heartbeat monitor has validated (and possibly rolled back) the core.

When a change of the primary heartbeat monitor is deployed, no changes may be made to any subsystem until the secondary heartbeat monitor has validated (and possibly rolled back) the primary heartbeat monitor.

When a change of the secondary heartbeat monitor is deployed, no changes may be made to any subsystem until the primary heartbeat monitor has validated (and possibly rolled back) the secondary heartbeat monitor.

The compilers/interpreters used to implement the heartbeat monitors, and their associated scripts, are as changeable as the rest of the system. However, deployment of new versions of these must be deferred until associated automated tests have verified that their deployment will not break the core, the heartbeat monitors, or the source control mechanisms.

The source control system is as changeable as the rest of the system. However, deployment of a new version must be deferred until appropriate automated tests have verified that its deployment does not break the source control mechanisms.

This is very back-of-napkin at the moment; I've been writing it as I conceive it, so it may be fundamentally flawed. I worry, in particular, that there may be dependency chains where breakage in a single rung may bring down the system. However, I think it should provide an (essential) independence between the languages used to implement the system and the system itself, and permit changing these as required. It also means it can be deployed with a minimum of development and can use existing tools - such as an Apache HTTP server and its source code, the Subversion revision control system and its source code, Perl/Python/PHP/Ruby/C/C++/etc. compilers/interpreters and associated source code, and any standard OS capable of running these.

I really like the use of heartbeat monitors - a system with two hearts where each can fix the other and one can fix the system is a fairly good basis for stability. I'll add them to the approaches listed in Runtime Upgradeable Core (to be used in combination with, not instead of). Obviously you're depending on an external scheduler to support the heartbeat monitors, but (for the Spike Solution) I think that choice is appropriate; in the long run, I want the primary scheduler to be part of the Wiki Ide, which is necessary so it can be upgraded from within the Wiki Ide to support distributed operations and server-farms and bare-metal installation. This, of course, means that the heartbeat monitors will almost certainly be running on an upgradeable scheduler. Fun, eh? The scheduler + network reachability (either of which is useless without the other) will ultimately be primary points-of-failure in the Wiki Ide system (whether it is distributed or not), which really means that we'll want to eventually support the construction of rescue-disks.

On the User Space issue, I believe that proper Project and ProjectPage control would be sufficient to the task. In particular, the Heartbeat and Secondary Heartbeat and Kernel projects can 'depend' on these files at build-time, but the pages utilized by the project shall be selected at a particular version-identifier - aka. a logical 'freeze'. All projects, not just the forge service project and its hearts, shall have this ability to (logically) freeze pages so they aren't upgraded whenever said page is modified. In a sense, I really want all projects to be first-class entities, on exactly the same logical level as the Wiki Ide core services (and even capable of adding to the open IP ports, albeit only through a particular (potentially capability-secured) page).

I'm not convinced of the value of attempting to support multiple languages; the ability to properly and meaningfully link and integrate and modify services (typically just the content of the page being served), not the ability to compile and run services, is what the 'Wiki' aspect is all about. The use of the OS executive environment creates a very fragile, largely unintegrated, and poorly linked system that I fear will fall apart quite easily, and it will also have the multiple-language failings that I described in Programming In Wiki. And if the OS is used for some 'special' things and not for everything else, then the Wiki Ide really isn't a first-class service of the Wiki Ide.

Consider if our Wiki Ide did support upgrading Apache. This means supporting Makefiles, C++, a Filesystem filled with C++ '.h', '.hpp', '.cpp' pages (of which only the '.h' pages could essentially be referenced from other pages) possibly automake, and lots and lots of directories. I won't even consider trying to make the C++ language & compiler itself upgradeable - that doesn't get pretty. The 'Wiki Way' would involve describing services or algorithms or values or data on pages identified by Wiki Words on, as much as possible, a 'flat' namespace (excepting where projects need special implementations, common page-addendums like discussions, user:prefs, etc.). This avoids any need to:specify:words:like:this or worse (namespaces at least can be associated with 'using namespace' declarations and autogen disambiguation lists). In the Wiki Way, these external algorithms, values, and data are automatically integrated into the local project where the Wiki Word names are used, which requires using a language that knows how to do the integration (i.e. a common language, or a 'super'-language that is really smart about it). In addition to providing an example of what not to do (and one taking tons and tons of pages), the Apache implementation would also divide mindshare when it comes to fixing perceived 'webserver' problems: one can go about attempting to fix Apache or some component of it, or one can start building a Web Server that really is a First Class Wiki Ide product.

We may be better off with having the Web Server and such be considered 'external' and Second Class systems until such a time as we are able to create a project within the Wiki Ide that can allow for porting of the service. Thus, instead of attempting to treat the webserver and scripts and heartbeat monitors as some sort of First Class Wiki Ide project from the beginning, we simply use them as scaffolding; and, while being supported by the scaffolding, we build the service to our own vision (so that later we can upgrade it from within the Wiki), run a bunch of tests, then carefully (by hand, and with a system backup available should we fail) remove the scaffolding and henceforth utilize the First Class implementation - and likewise for the various other services. I.e. we essentially bootstrap every single service by hand. And the core component we need to be there, from the beginning, is the language. (This is essentially the roadmap I possessed.)

Not that I'm dissing your design or the constraints, and I certainly like the heartbeat monitors. But, keeping in mind the bigger picture of the Wiki Ide goals, I foresee the attempt described above (even if you intend it be there only for the Spike Solution) to be ultimately harmful to my goals with the Wiki Ide. I also fear that it will become almost impossible, or at least a significant battle, to ever wean the Wiki Ide off these early influences. (Or perhaps I misinterpreted you when you said that, say, the web server could be upgraded just like any other part of the system? Let me know if it appears I've misunderstood your intentions.)

Security Concerns for Spike Solution

We can't really afford to expose 'system("rm -rf /")' (or any equivalent thereof) through our Wiki Ide Spike Solution... not without having to password protect everything and be extremely careful about who we allow to play in the sand. I think that we'll need to have a security solution built into our Spike Solution before we open anything to the public.

At the very least, we need to have signed users to whom we can then assign capabilities. It's somewhat difficult to give a capability to a user when they don't have an authorized handle.

I've amended the above to adjust for this.

Today, these first and second heartbeat monitors are named Continuous Integration.

No. Continuous Integration pre-dates the notion of using heartbeat monitors. Heartbeat monitors, as described above, are a specific technical architecture that prevents Wiki Ide authors from irretrievably breaking a live Wiki Ide via erroneous modifications to its own core code. Continuous Integration is a broad vision of development, at best a general strategy -- it is not a specific architecture.

It would be more accurate to say that the heartbeat monitors are technical infrastructure that makes it safe to perform Continuous Integration on a Wiki Ide's kernel via the Wiki Ide itself.

I've initiated a project

-- Mirko Blueming 21.05.2012

Anything happening here recently? Just wondering whether people have noticed Bespin Editor, which seems a good candidate to build a WikiDE.

Contributors: Dave Voorhis
