Talk:Plone
Let's see what the developers say :-) I sent the following message to the Plone and ZODB developer mailing lists:
Hi, I'm part of a development team helping an organisation to architect a CMS-based project that they want to run over a P2P network rather than on a centralised web server. We'd prefer to use an existing popular CMS as a starting point, so that it is mature, has a large development community, and offers a wide range of extensions/modules.
From our initial research it seems that Plone should be more capable than other CMSs of moving into the P2P space, because it uses ZODB rather than SQL, and ZODB appears able to be connected to a variety of storage mechanisms. I'm wondering what you, the core developers, think of the practicalities of Plone in P2P: for example, could ZODB use a DHT as its storage layer? What kind of querying would be required of the DHT?
We have a good budget available for this and will be developing it as a completely free, open-source component, so we'd also like to hear from developers who may be interested in working on the project.
Thanks,
Aran
First you should explain why the existing backends like ZEO, RelStorage or NEO are not good enough in your case. Looking at the development history of RelStorage and NEO, implementing an enterprise-level storage for the ZODB seems to be hard and time-consuming (and expensive). -- a j
I have looked at NEO, which is the closest thing I've found to an answer; in fact, NEO is why I felt Plone was the best choice of CMS to inquire further about.
The problem is that it uses SQL for its indexing queries (the NEO team reads "NoSQL" as "Not only SQL"). General SQL querying isn't practical in a P2P space, although it can be made to work on server clusters.
We intend not to have any machines in our network other than the users' computers running the P2P application. So we would need to know exactly what kinds of querying ZODB expects to be available in its interface to the storage layer. DHTs can be slow for the first read, but results cache locally after that. -- Aran
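As far as I understand it, ZODB's storage interface is essentially a key-value protocol (object id in, pickled object state out, plus transaction bookkeeping), and Plone's cataloguing/indexing lives in BTrees stored inside the ZODB itself, so the storage layer never sees SQL-style queries. Here is a minimal sketch of what a DHT-backed storage might look like; the dht.get()/dht.put() client is hypothetical and the methods only loosely follow ZODB's real storage API:

```python
# Minimal sketch of a DHT-backed ZODB storage (illustrative only).
# Assumes a hypothetical DHT client exposing get(key) -> bytes and
# put(key, value); not a working ZODB storage adaptor.

class DHTStorage:
    def __init__(self, dht_client):
        self.dht = dht_client          # hypothetical Kademlia-style client
        self._local_cache = {}         # first read is slow, then cached locally

    def load(self, oid):
        """Return the pickled state for an object id (the core read path)."""
        if oid in self._local_cache:
            return self._local_cache[oid]
        data = self.dht.get(oid)       # one DHT lookup per uncached object
        self._local_cache[oid] = data
        return data

    def store(self, oid, data, transaction):
        """Write a new revision; a real ZODB storage also tracks serials/tids
        and implements two-phase commit (tpc_begin/tpc_vote/tpc_finish)."""
        self.dht.put(oid, data)
        self._local_cache[oid] = data
```

The expensive part in a P2P setting is therefore not the querying itself but the number of round trips for uncached loads, which is where the local caching mentioned above comes in.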
Yes, we use MySQL, and it actually bites us in both worlds:
- in the relational world, we irritate developers by asking questions like "why does InnoDB load a whole row when we only select primary-key columns", which ends up with "don't store blobs in MySQL"
- in the key-value world, because a NoSQL system built on MySQL doesn't look consistent
So, why do we use MySQL in NEO?
We use InnoDB as an efficient BTree implementation which handles persistence. We use MySQL as a handy data-definition language (NEO is still evolving, so we need an easy way to tweak table structure when a new feature requires it), but we don't need any transactional isolation (each MySQL process used by NEO is accessed by only one process through one connection).
We want to stop using MySQL & InnoDB in favour of leaner-and-meaner back-ends. I would especially like to try Kyoto Cabinet[1] in its on-disk BTree mode, but it requires more work than the existing MySQL adaptor and there are more urgent tasks in NEO.
Just as a proof of concept, NEO can use a Python BTree implementation as an alternative (RAM-only) storage back-end. We use ZODB's BTree implementation, which might look surprising as it's designed to be stored in a ZODB... but BTrees work just as well in RAM, and that's all I needed for such a proof of concept.
Regards, -- V P
[1] http://fallabs.com/kyotocabinet/
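For reference, the RAM-only proof of concept Vincent describes (ZODB's BTree package used as a plain ordered map) looks roughly like this; it's a sketch of the idea, not NEO's actual code:

```python
# Sketch of using ZODB's BTrees as an in-RAM ordered key-value store,
# the kind of role InnoDB plays for NEO on disk. Not NEO's actual code.
from BTrees.OOBTree import OOBTree   # object-key -> object-value BTree

index = OOBTree()

# put / get, like any mapping
index[b'oid-0001'] = b'pickled object state'
index[b'oid-0002'] = b'another record'
print(index[b'oid-0001'])

# ordered range scans are what a BTree buys you over a plain hash table
for key in index.keys(b'oid-0001', b'oid-0002'):
    print(key)
```

The point is that the backend only needs to behave like a persistent ordered map, which is presumably why an on-disk BTree like Kyoto Cabinet fits; a plain DHT gives you the put/get part but not the ordered range scans, which may matter for the P2P case.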
Thanks for the feedback, Vincent :-) It sounds like NEO is pretty close to being SQL-free. As one of the NEO team, what are your thoughts on the practicality of running Plone in a P2P environment, given the latencies experienced in standard DHT implementations (such as those based on Kademlia)? -- Aran
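For a rough sense of the latency question, here is a back-of-envelope estimate (assumed numbers, not measurements): a Kademlia lookup takes on the order of log2(n) hops, and a cold Plone page render loads many ZODB objects, so sequential uncached reads add up quickly.

```python
# Back-of-envelope Kademlia latency estimate (assumed numbers, not measurements).
import math

nodes = 10_000            # assumed network size
rtt = 0.15                # assumed average round-trip time per hop, in seconds
hops = math.log2(nodes)   # Kademlia lookups contact on the order of log2(n) nodes

lookup = hops * rtt
print(f"~{lookup:.1f} s per uncached object lookup")          # roughly 2 s

objects_per_page = 50     # assumed number of ZODB objects a cold page render loads
print(f"~{objects_per_page * lookup:.0f} s for a fully cold page, if loads are sequential")
```

Which suggests that local caching, prefetching and parallel lookups would be doing most of the heavy lifting in practice.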



