On an abstract reading, the concept of universal policy-based network measurement is isomorphic to a version of superdistribution. Brad Cox has disagreed with me on this, whereas Richard Ballard agrees with my position. Brad and I have been in this discussion for over six years.
The "measurement" is required in order to create and control architecture for cyber security policy and other use policies.
Without measurement there can be no control, nor even a consistent understanding of real time events.
With the Cubicon back-plate (a set of atoms with data specification sufficient for the generation of any desired behavioral design), we get any type of measurement that is agreed to by the relevant communities, and also the generative process that allows the compression/encryption regime I have talked about. The result is one more feature of Cubicon: a reduction in the pipe-size requirement for any specific transmission task.
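To make the pipe-size point concrete, here is a minimal sketch in Python. Every name in it is invented for illustration; Cubicon's actual encoding is not what is shown. The general idea is only that endpoints sharing a back-plate can send indices in place of payloads:

    # Minimal sketch of back-plate compression: both endpoints hold the same
    # ordered registry of atoms, so a message can travel as small indices
    # rather than full payloads. All names here are hypothetical.
    ATOM_REGISTRY = ["read", "write", "grant", "revoke", "user", "resource"]
    INDEX = {atom: i for i, atom in enumerate(ATOM_REGISTRY)}

    def encode(atoms: list[str]) -> list[int]:
        """Replace each shared atom with its registry index."""
        return [INDEX[a] for a in atoms]

    def decode(indices: list[int]) -> list[str]:
        """Recover the atoms on the receiving side from the same registry."""
        return [ATOM_REGISTRY[i] for i in indices]

    message = ["grant", "user", "read", "resource"]
    wire = encode(message)   # [2, 4, 0, 5] -- far smaller than the strings
    assert decode(wire) == message

Because both endpoints already hold the registry, only the indices cross the pipe; that is the reduction in pipe-size requirement mentioned above.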
We see several so-far-imperfect attempts by industry to develop a data standard consistent with a behavioral standard based on an n*m or n*m*q framework. UML has no ordered framework under it, and thus creates a standard for "semantics" with no deterministic mapping to data footprints, a mapping that Cubicon does provide. UML is also not open to underlying modification given user needs, and thus forces the user to accept a class of less creative, non-agile IT systems; UML specifications are known for this non-agility. OWL has a different set of limitations but holds promise to advance design. OWL, however, also has no regular framework under its ontology reification process, and RDF is not n-ary. We need good genealogy, not difficult-to-use description logics. In the W3C specifications, treating all resources without categorization by a framework creates an axiomatic base so large (billions of axioms) that, by simple combinatorics, no conceivable order may be found in it. The wild west of the Internet is preserved.
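Here is a hedged sketch of what an ordered framework under the semantics buys you. The axes are invented placeholders; the real n*m substrate would be agreed by the relevant communities:

    # Sketch of an ordered n*m framework: every concept occupies a cell
    # in a substrate of ordered primitive axes, so its data footprint is
    # a deterministic pair of indices. Axis contents are illustrative only.
    STRUCTURAL = ["atom", "compound", "container"]               # n = 3, ordered
    FUNCTIONAL = ["observe", "transform", "transmit", "store"]   # m = 4, ordered

    def footprint(structural: str, functional: str) -> tuple[int, int]:
        """Deterministic mapping from a concept to its (row, column) cell."""
        return (STRUCTURAL.index(structural), FUNCTIONAL.index(functional))

    # Two independent modelers naming the same concept get the same
    # footprint -- a guarantee an unordered vocabulary cannot make.
    assert footprint("compound", "transmit") == (1, 2)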
The health care industry is of course using the phrase "universal policy," and the reason the phrase works here is that the coverage between two kinds of policies has to overlap in a way that fits down onto a set of primitives. Again, the question is whether the set of primitives can be "correct" or not, as measured by Mill's logic, for example, and, if correct, whether the set of primitives has a combinatorial span rich enough to allow any type of expression.
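As a worked illustration of combinatorial span (the primitives below are placeholders, not a claim about the correct set):

    from itertools import combinations

    # Even a small primitive set composes into a large expression space.
    PRIMITIVES = ["subject", "action", "object", "time", "location", "purpose"]

    # Every nonempty subset is a candidate policy expression: 2^n - 1 of them.
    expressions = [
        frozenset(combo)
        for r in range(1, len(PRIMITIVES) + 1)
        for combo in combinations(PRIMITIVES, r)
    ]
    print(len(expressions))  # 63 for n = 6

Each added primitive doubles the space, which is why a small but correct primitive set can still cover any required expression.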
Cyber security (a field I have published in) has difficulties with wireless services, where poor data interoperability and non-coherence between security policies create vulnerabilities. This set of difficulties is a limiting factor, costing society a great deal of time and money. Silicon design and manufacturing is beset with similar limitations, again because there is no valid framework-based meta-architecture (an architecture for creating architecture).
The issue of the programmer's creativity is a polemic. True creativity is required of the user, and so far in history the creativity of programmers has been random and costly. IT professionals have been spoiled, and falsely use this issue of creativity as an excuse to be lazy. Frameworks, like the Spring Framework for J2EE, exist so that the applications developed have some degree of design coherence.
See my work on general framework theory:
I will mention one more aspect of universal policy, in the context of topic map merge and separation.
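In the Topic Maps data model, two topics merge when they share a subject identifier; separation is the inverse cut. A minimal sketch, with simplified placeholder structures in place of full topic maps:

    # Two topics that share a subject identifier denote the same subject
    # and are merged into one; unshared identifiers pass through unchanged.
    def merge_topics(map_a: dict[str, set[str]],
                     map_b: dict[str, set[str]]) -> dict[str, set[str]]:
        """Union two topic maps keyed by subject identifier."""
        merged = {sid: set(names) for sid, names in map_a.items()}
        for sid, names in map_b.items():
            merged.setdefault(sid, set()).update(names)
        return merged

    policy_a = {"urn:policy:access": {"access policy"}}
    policy_b = {"urn:policy:access": {"use policy"},
                "urn:policy:audit": {"audit policy"}}
    merged = merge_topics(policy_a, policy_b)
    # Both names now live under "urn:policy:access"; the audit topic is untouched.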
The notion of a semantic prime (John Sowa, Richard Ballard) is that all intended semantics can be expressed as a composition of the set of primes. This subject is belittled by first-school thinking, whose adherents should read David Hume's work on skepticism a bit more deeply. I would also suggest that the first school holds false knowledge, which it does not allow to be examined.
This false knowledge is driven by the ungrounded assertion (by many in the funded science communities) of a completely random evolutionary drive in nature. A number of profound viewpoints, including a proper understanding of Hume, assert otherwise. Intelligent design, from our human-created information science, is what society needs.
The polemics against intelligent design have to be understood deeply, or else society will never benefit from what might be possible. Our use of the term "intelligent design" has nothing to do with the common mischaracterization by some religious fundamentalisms. The confusion of the IT markets will evolve into an order, and that order will then be expressed in a community-driven way so as not to become authoritative. The Cubicon infrastructure has the potential to arise and be used much as natural language arises and is used in social systems.
Community-based social intelligence is to be enabled by a meta-framework that allows the agile design of intelligent processes (by users). Wow!!! What is the business plan for this?
Back to the concept of "universal policy".
The concept of a linguistic interlingua in machine translation theory and applications has a similar root in philosophy and in practical science. We enter the disciplines of the linguistic schools, founded by Ben Whorf and demolished by Chomsky with an (in my opinion) incomplete notion of deep structure. Those who assert the first school often make assertions based on a complete rejection of the concepts that make up the second school, including our positive modification of both linguistic paradigms.
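The interlingua idea reduces to a pivot through shared concepts: n languages need 2n mappings to and from the interlingua rather than n*(n-1) direct pairs. A minimal sketch, with tiny invented vocabularies:

    # Each language maps into a shared set of language-neutral concepts;
    # translation pivots through that set. Vocabularies are placeholders.
    EN_TO_IL = {"water": "CONCEPT_WATER", "fire": "CONCEPT_FIRE"}
    IL_TO_FR = {"CONCEPT_WATER": "eau", "CONCEPT_FIRE": "feu"}

    def translate(words: list[str]) -> list[str]:
        """English -> interlingua -> French, pivoting through shared concepts."""
        return [IL_TO_FR[EN_TO_IL[w]] for w in words]

    print(translate(["fire", "water"]))  # ['feu', 'eau']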
These are my opinions, and not necessarily the opinions of Sandy or anyone else I have mentioned. I hope I have not misstated anything.
Paul Prueitt's work on ontology models.
Brand Niemann works at EPA.