Wednesday, 16 September 2015

Answer Sources: from Fluents to Interactors

This is a follow-up to my previous post "Answer Sources in Prolog (SWI) - Preview" [1]. Here I present the flow diagram for the worker loop with support for the return operation. Implementation of the return operation upgrades our answer sources from fluents [2] to interactors [3].
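For intuition only, the fluent-to-interactor upgrade can be mimicked with Python generators (my own analogue, not the SWI-Prolog API): a fluent only answers "give me the next answer", while an interactor can additionally hand a value back mid-computation and be resumed with a reply, which is exactly what generator send() models.

```python
def engine():
    # Each yield both hands an answer out and receives the caller's
    # next input back: one suspension point serves "next" and "return".
    x = 0
    while True:
        msg = yield x            # suspend with answer x; resume with msg
        x = x + 1 if msg is None else msg

eng = engine()
print(next(eng))      # 0 -- "next answer": fluent behaviour
print(next(eng))      # 1
print(eng.send(10))   # 10 -- resume with a value: interactor behaviour
print(next(eng))      # 11
```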
[1] My post, "Answer Sources in Prolog (SWI) - Preview":
http://seprogrammo.blogspot.co.uk/2015/09/answer-sources-in-prolog-swi-preview.html

[2] Paul Tarau, "Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External Objects":
http://www.cse.unt.edu/~tarau/research/LeanProlog/RefactoringPrologWithFluents.pdf

[3] Paul Tarau and Arun Majumdar, "Interoperating Logic Engines":
http://www.cse.unt.edu/~tarau/research/LeanProlog/InteroperatingLogicEngines.pdf

Thursday, 3 September 2015

Answer Sources in Prolog (SWI) - Preview

I have implemented an initial version of Answer Sources in SWI-Prolog [1], now submitted for preliminary discussion to comp.lang.prolog [2]. For the rationale and design, I have followed Paul Tarau on "fluent sources" [3], although with some important differences.

[1] Code preview with answer sources and the basic combinators:
https://gist.github.com/jp-diegidio/2914cac8b5cfb2b6a95e

[2] (Short) presentation and discussion on comp.lang.prolog:
https://groups.google.com/d/msg/comp.lang.prolog/JToeE7Read8/sZ0xg1-cBgAJ

[3] Paul Tarau, "Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External Objects":
http://www.cse.unt.edu/~tarau/research/LeanProlog/RefactoringPrologWithFluents.pdf

Wednesday, 15 July 2015

Symmetric Twins Paradox

In the context of special relativity, we present a twins experiment that is symmetric between the twins, so that a paradox appears inescapable, in the form of a violation of the principle of causality.

Here is the experiment (we would argue that effects of acceleration can be made arbitrarily small in our setup):

  == The twins, call them L and R, are each given a clock at birth and get the clock fixed to their body: the two clocks have been previously synchronised.  Assume, for simplicity, that the common origin of space-time for the clocks is set to the moment and place of the twins' birth, and that a common choice of coordinates is made, such that the twins, at that very moment, share the same frame of reference (up to arbitrary precision).
     Just after birth, the twins (with their clocks) are each embarked on a rocket.  Assume collinear motion for simplicity.  The two rockets start in opposite directions relative to the origin, carrying L and R respectively.  The plan is for the rockets to fly away from each other at some fixed (appropriately high) constant speed for some fixed (appropriately long) proper time, then simply invert course and fly back at opposite speed to the point of origin (in space), and stop there.  (Effects of acceleration can be minimised by making the total proper time during which the rockets are subject to acceleration appropriately small relative to the total proper time of the journey.)
     The twins at that point rejoin, again sharing a common frame of reference (up to arbitrary precision).  We ask what the two clocks measure for each twin in this common frame. ==

To compute the results, we apply special relativity.  (We advise readers to go through this little but essential exercise by themselves.)  We find that R's clock looks late (i.e. back in time) to L, but, *at the same time* (twins and clocks are in a common frame of reference), L's clock looks late to R!  This conclusion is an inescapable consequence of the theory, and now the problem becomes how to make sense of these results.  In fact, since all clocks are affected, including the biological ones, at the end of the journey each twin effectively finds the other younger than himself: then, per *reciprocity* (causality), i.e. the principle that, in the same frame of reference, if I am older than you, you must be younger than me, we can argue that each twin is at the same time younger and older than the other, which is indeed absurd!
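As a worked instance of the computation, here is a sketch of the naive symmetric application of time dilation (c = 1; the speed and proper-time values are my own, chosen for illustration):

```python
from math import sqrt

def gamma(v):
    # Lorentz factor, with c = 1
    return 1.0 / sqrt(1.0 - v * v)

def add_velocities(u, v):
    # relativistic velocity addition (collinear motion)
    return (u + v) / (1.0 + u * v)

v = 0.6                    # each rocket's speed in the launch frame (assumed)
w = add_velocities(v, v)   # relative speed between the twins: 15/17
tau = 10.0                 # each twin's proper time for the journey (assumed)

# Applying time dilation from either twin's outbound rest frame, each
# computes the other's elapsed time as tau / gamma(w) -- symmetrically:
print(w, gamma(w), tau / gamma(w))
```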

Note that the absurdity would not be so patent if we just let the two clocks travel; hence we sent the twins, too: not because the presence of the twins is necessary for relativistic effects to occur (the clocks must be late relative to each other regardless of anyone observing them), but because only the twins can "testify" that the conclusion of the experiment, hence special relativity theory, is indeed at odds with the principle of *causality*.

Bottom line: if special relativity is correct, it must be incomplete.

Saturday, 18 April 2015

Cheryl's birthday in Prolog

Cheryl's Birthday - The Math Question That Went Viral
"One word problem from a Singaporean school exam briefly became the talk of the Internet last weekend."
http://www.theatlantic.com/education/archive/2015/04/the-math-question-that-went-viral/390411/
https://groups.google.com/d/msg/comp.lang.prolog/G6yuaIsI-Cs/SVOlumZUiQoJ

I have put together two solutions in Prolog (in SWI-Prolog, last updated 2015-04-20T19:40:00+01:00); the source code can be downloaded here: http://julio.diegidio.name/Share/CherylsBirthdays.zip

The first solution (cheryls-birthday.aggr.pl) uses lists and aggregates:
?- problem(cbdate, BDate).
% 563 inferences, 0.000 CPU in 0.000 seconds (?% CPU, Infinite Lips)
BDate = [7, 16].

?- time((between(1,1000,_),p_cbdate_input(CBDates),p_cbdate(CBDates, BDate),fail;true)).
% 563,001 inferences, 0.218 CPU in 0.220 seconds (99% CPU, 2577827 Lips)
true.
The second solution (cheryls-birthday.lists.pl) uses lists:
?- problem(cbdate, BDate).
% 5,163 inferences, 0.000 CPU in 0.000 seconds (?% CPU, Infinite Lips)
BDate = [7, 16].

?- time((between(1,1000,_),p_cbdate_input(CBDates),p_cbdate(CBDates, BDate),fail;true)).
% 5,164,001 inferences, 1.513 CPU in 1.520 seconds (100% CPU, 3412614 Lips)
true.
The first solution, i.e. the implementation leveraging aggregates, turns out to be significantly faster...
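For readers without Prolog at hand, the same elimination logic can be sketched in Python (the date list and names below are my own; the Prolog sources above remain the reference):

```python
from collections import Counter

# (month, day) candidates from the puzzle statement
dates = [(5, 15), (5, 16), (5, 19), (6, 17), (6, 18),
         (7, 14), (7, 16), (8, 14), (8, 15), (8, 17)]

# Statement 1: Albert (told the month) knows Bernard (told the day)
# cannot know: Albert's month must contain no day unique across all dates.
day_count = Counter(d for _, d in dates)
months_ok = {m for m, _ in dates
             if all(day_count[d] > 1 for m2, d in dates if m2 == m)}
s1 = [(m, d) for m, d in dates if m in months_ok]

# Statement 2: Bernard now knows: his day is unique among the survivors.
day_count2 = Counter(d for _, d in s1)
s2 = [(m, d) for m, d in s1 if day_count2[d] == 1]

# Statement 3: Albert now knows: his month is unique among the survivors.
month_count = Counter(m for m, _ in s2)
s3 = [(m, d) for m, d in s2 if month_count[m] == 1]

print(s3)  # [(7, 16)] -> July 16
```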

Wednesday, 26 February 2014

Hilbert's impossible hotel

"Consider a hypothetical hotel with a countably infinite number of rooms, all of which are occupied. [...] Suppose a new guest arrives and wishes to be accommodated in the hotel. Because the hotel has infinitely many rooms, we can move the guest occupying room 1 to room 2, the guest occupying room 2 to room 3 and so on, and fit the newcomer into room 1." (*)

But we can prove that, if the hotel is full, accommodating new guests is in fact impossible:

Indeed, to say that the hotel is fully occupied is to say that, for all n in N, room n is occupied. Thus, a fortiori, room 1 is occupied and, for all n in N, if room n is occupied, room n+1 is also occupied; which in turn is equivalent to saying that there is no n in N such that room n+1 is available. Hence, no more guests can be accommodated. QED.
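The induction as stated can be checked mechanically on any finite prefix of the rooms (a Python sketch of the argument above, not a proof about the infinite case):

```python
# "Fully occupied" as the inductive predicate in the text:
# room 1 is occupied, and if room n is occupied then room n+1 is occupied.
def occupied(n):
    return True if n == 1 else occupied(n - 1)

# On any finite prefix, there is no n with room n+1 available:
assert not any(not occupied(n + 1) for n in range(1, 900))
```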

Note that what I am actually showing is that, granted the usual inductive definition of the natural numbers (a sequence), the hotel simply can never be full; conversely, that there can be no such thing as a hotel (a set) that is potentially infinite. In other words, the simply endless is just not of the same (logical) quality as the actually infinite: that is what Hilbert's hotel is showing. Consequently, nothing peculiar about infinite sets can be proven via plain finite induction on sequences.

(*) http://en.wikipedia.org/wiki/Hilbert's_paradox_of_the_Grand_Hotel

Thursday, 26 August 2010

Se programmo...

Professional, un-professional, de-professional, re-professional.

Otherwise, the ins and outs of the pros and cons.

Publicly private.

Bah, meanwhile she sleeps.

Friday, 10 July 2009

A semantic space-dimension for the Web

How to give a *sensible* space-dimension to the Web.

This is an open project.

Discussion at:
"A space-dimension to the Web: a combinatorial optimisation problem"
http://groups.google.com/d/topic/sci.math/LNI4DAvSRes/discussion

=== Setting:

Let G be a weighted, directed graph.
Let S be a lattice space for G.
Let M be a physical model for G.
Let U be (the absolute value of) the potential energy (in M over S, given G).

=== Problems:

Problem 1 (optional): Express U.
Problem 2 (optional): Minimize U.
Problem 3: Express and minimize U, given the following constraints:

- Constraint 3.CG1: Weights in G have positive rational values.

- Constraint 3.CG2: G is sparse.

- Constraint 3.CG3: G is dynamic, i.e. nodes and edges change (appear, disappear, change their weight). The dynamics are driven by discrete singular events; changes are smooth.

- Constraint 3.CS1: S is a diophantine circle where positions start from zero along the circumference, and the distance function x is:

    let c be the circumference (i.e. number of nodes in G)
    let x' = |x1 - x2| (absolute distance, integer >= 0)
   
    x := x'      , if x' <= c/2
         c - x'  , otherwise
    (i.e. distance along the shortest arc, integer >= 0)

- Constraint 3.CM1: Within model M, the force F is:
    let i,j be non-negative integers indexing nodes in G
   
    f_ij = k_ij * x_ij , if there exists in G an edge i->j with weight k_ij
           0           , otherwise
    (i.e. absolute elastic force, rational >= 0)
   
    F = sum_i sum_j f_ij
    (total force, rational >= 0)

- Constraint 3.CU1: Given that G is dynamic (see CG3), we want to minimise U and keep it minimised!
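The definitions in 3.CS1 and 3.CM1 transcribe directly into code (a sketch; the dictionary encoding of G's edges is my own choice, not part of the setting):

```python
def arc_dist(x1, x2, c):
    d = abs(x1 - x2)           # x' = |x1 - x2|
    return min(d, c - d)       # distance along the shortest arc

def total_force(edges, pos, c):
    # edges maps (i, j) -> k_ij; pos maps node -> position on the circle
    return sum(k * arc_dist(pos[i], pos[j], c)
               for (i, j), k in edges.items())

pos = {0: 0, 1: 1, 2: 5}
edges = {(0, 1): 2, (1, 2): 1}
print(total_force(edges, pos, c=8))   # 2*1 + 1*min(4, 4) = 6
```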

=== Solutions:

Our solution to Problem 3 at the moment consists of a "local approach".  We build a near-to-optimal configuration (by inserting any new node at a location such as to minimise the total energy change), plus we have a process that keeps iterating over the configuration space looking for local improvements (by swapping adjacent nodes).  The idea is that this process should be able to keep up with changes (which are smooth, see 3.CG3 and 3.CU1), and this, together with the insertion strategy, should be enough to keep the system (at least!) at a near-to-optimal minimum.  Simulated annealing can also be easily implemented.
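The local approach can be sketched as follows (my own toy encoding: a configuration is a circular ordering of the nodes, and the total elastic force from 3.CM1 is used directly as the quantity to minimise):

```python
def arc_dist(x1, x2, c):
    d = abs(x1 - x2)
    return min(d, c - d)

def energy(order, edges):
    c = len(order)
    pos = {node: k for k, node in enumerate(order)}
    return sum(w * arc_dist(pos[i], pos[j], c) for (i, j), w in edges.items())

def insert_best(order, node, edges):
    # Near-to-optimal insertion: try every slot, keep the cheapest.
    slots = (order[:k] + [node] + order[k:] for k in range(len(order) + 1))
    return min(slots, key=lambda o: energy(o, edges))

def improve_by_swaps(order, edges):
    # Keep swapping adjacent nodes while some swap lowers the energy.
    improved = True
    while improved:
        improved = False
        for k in range(len(order)):
            cand = list(order)
            j = (k + 1) % len(cand)
            cand[k], cand[j] = cand[j], cand[k]
            if energy(cand, edges) < energy(order, edges):
                order, improved = cand, True
    return order

edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1}   # a 4-ring
order = insert_best([0, 2, 1], 3, edges)
order = improve_by_swaps(order, edges)
print(order, energy(order, edges))
```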

Incidentally, in the setting of Problem 3 there is no role for node weights.  This is a choice, not a simplification, related to semantic considerations.  This can be discussed: we are after a *sensible* way to give a space-dimension to the Web.

Tuesday, 11 March 2008

Radical contradiction

Technically speaking,

We have blown up the "root of contradiction" with an _atomic_ bomb.

To the casual philosophers we are, I will show the links below.

To the professional programmer: a crack is not a hack, but a hack is a crack.

Keep up the go(o)d work.

==============================================

Julio Di Egidio
Re: Question about proof by contradiction
Posted: Mar 10, 2008 11:12 AM
http://mathforum.org/kb/message.jspa?messageID=6131405

[...]

BTW, here is an extract from a letter from Wittgenstein to Russell, 1921, which I think sheds some light (I'm afraid I'll have to translate, I've got it in Italian):

"I believe our problems track down to _atomic_ propositions. You'll see it if you try to precisely explain how the Copula in such propositions has meaning. I cannot explain it and I believe that, once an exact answer is given to this question, the problem of <> and of the apparent variable will be _much_ nearer its solution, if not solved. Now I think above <> (the good old Socrates!)."

The good old Ludwig!!

Explaining that meaning, by means of the empty set "we" are, is indeed what I have shown (again, until dis-proved).

Julio

==============================================

Marshall
Re: Question about proof by contradiction
Posted: Mar 10, 2008 5:46 PM
http://mathforum.org/kb/message.jspa?messageID=6131836

> >> Proof by contradiction can be formalized as
>
> >> (P -> (A and not(A))) -> not(P).
[...]
> The proof in question, in fact, does not even use proof by
> contradiction. It has the form
>
> (P -> not(P)) -> not(P).
>
> This is not a proof by contradiction.

[...]

Does anything interesting happen if we transform them somewhat?

(P -> (A and not(A))) -> not(P)
(P -> false) -> not(P).
(not(P) or false) -> not(P)
not(P) -> not(P)

Well, I seem to have destroyed the formula's essential nature
by these manipulations. How did THAT happen? Apparently
truth-value-preserving transformations don't preserve some things
that aren't truth values.

Marshall

==============================================
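Marshall's chain of transformations can be checked mechanically; a brute-force truth table (my own sketch) confirms that every formula in the chain is a tautology, so each step is indeed truth-value preserving:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# The four formulas from the quoted derivation, evaluated over all
# assignments to P and A; each is a tautology, hence all equivalent.
for P, A in product([False, True], repeat=2):
    f1 = implies(implies(P, A and not A), not P)
    f2 = implies(implies(P, False), not P)
    f3 = implies((not P) or False, not P)
    f4 = implies(not P, not P)
    assert f1 == f2 == f3 == f4 == True
print("all four formulas are tautologies")
```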

Sunday, 9 March 2008

The unified theory of all we can do with it (seriously)

I posted the following message to the Mathforum yesterday at 4:43 AM (GMT+00). Starting from around 6:00 AM today, the Mathforum became less and less responsive, until at 8:00 AM it finally stopped working.

I am reposting the message here (untouched), because at the moment it doesn't appear in any other publicly available sci.math repository.

I have also attached an exchange with Mr Aiya-Oba, who was so kind as to support me in dis-closing the proof.

WARNING: MIND YOU!!
This is no fucking around, handle with care and take at your own risk.

===================================================

Julio Di Egidio
The unified theory of all we can do with it (seriously)
Posted: Mar 8, 2008 4:34 AM
http://mathforum.org/kb/thread.jspa?threadID=1708310

[I wish to just warn you I am not a professional in any of the fields I am going to touch, I am rather some kind of abstract logician plus a professional programmer, so please mind the step(s). What I'm after is for (dis-)proofs to the following chain of assertions. As to the "discussion", that might very well start later, and I do mean it. OTOH, questions are always welcome. Again, please mind the step(s), there are none.]

Subject: The unified theory of all we can do with it (seriously).

Freely mentioning Russell, Cantor, Goedel, Turing, Complexity, Walster, Golden, and Di Egidio.

Below, "dis-x" stands for "x and only x", where the connotations are thought to be even more interesting.

>> Walster shows[*] that a number is a set, and that the empty set is a number.

Walster extends interval arithmetic with the empty interval and with intervals having one or both end-points infinite. He first defines operations on the empty interval; from there, he closes the arithmetic under any combination of operators and functions over the entire domain. In his system, the empty interval is dominant over any other, including the entire interval, i.e. {}.rel.X = {} and X.rel.{} = {}, for all X in IR*, for every interval relation. (More from Walster later.)
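A minimal toy illustrating the "empty interval is dominant" rule just described (my own encoding, not Walster's actual cset system):

```python
EMPTY = None   # the empty interval

def iv_add(a, b):
    # Interval addition, with the empty interval absorbing every operand.
    if a is EMPTY or b is EMPTY:
        return EMPTY
    (alo, ahi), (blo, bhi) = a, b
    return (alo + blo, ahi + bhi)

assert iv_add(EMPTY, (1, 2)) is EMPTY      # {}.add.X = {}
assert iv_add((1, 2), EMPTY) is EMPTY      # X.add.{} = {}
assert iv_add((1, 2), (3, 5)) == (4, 7)
```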

(1) We now can say: a number dis-is a set, and the empty set is dominant to any number, dis-including itself.

>> Russell asks what is the set of all sets not having themselves as elements.

(2) We now can say: the set of all sets not having themselves as elements dis-is the empty set, i.e. the paradoxical set of paradoxical sets not having any elements, dis-including themselves.

>> Cantor wonders how the diagonal argument leads to entities outside the domain.

(3) We now can say: the diagonal argument leads to the all-encompassing void of the empty set, i.e. the paradoxical number of paradoxical numbers (or, more simply, the "without" (outside) of the domain, as seen from "within" (inside) the domain).

>> Goedel proves that self-referential entities must exist, yet undecidably.

(4) We now can say: the purely self-referential set dis-is the empty set, by foundation; in fact, a progression up the chain of provability systems can be seen from "I can't be proved", to "I am not true", up to "You fail", where this last sentence dis-is the empty set and expresses the limit ad infinitum of goedelization.

Incidentally, "You fail" might express more than the abstract sentence ~Bf ("consis"; can read "not-believe-that"), where belief is the foundational meaning behind logical negation and falsehood. I cannot say (for lack of specific knowledge) what this strictly entails on the "incompleteness arguments in a general setting" (Smullyan, 1992), but the introduction of the empty set as a full fledged and foundational entity, and the closedness of algebraic systems it brings, seem to suggest a profound impact. For instance, it seems quite evident that belief should itself be founded on "us", the subjects, and aren't we, with respect to the system, just its very external domain? In a sense, ultimately, "we" are the empty set and the self-referential dis-proof of all proofs.

Indeed, OTOH, I must note that, as far as "real" systems are concerned (i.e., in any form of "engineering"), the introduction of the empty set is in itself enough to formally found our very daily "practices", where we are used to discarding apparently incorrect answers (i.e., dis-answers within our accepted domain), and where we, ultimately, improve through failure (i.e., by breaking out of the boundaries of the accepted). This is in the light of what follows next.

>> Turing, in shades of Hilbert's tenth problem, restates the question in terms of the "halting problem".

(5) We now can say: All machines indeed stop, sooner or later; in the worst case, it is "us" (see my preceding note) stopping them, and, in any meaningful sense, the machines "we" stop have failed, and dis-belong to the void of the empty set, which happens to be the void "we" are.

In simpler terms, the set of failing machines is the set of machines that are "not machines" at all, with respect to the bounds "we" impose on the accepted domain.

Incidentally, for all this to be (believed) true, we must accept that classical mathematics (pardon my lack of "sharpness" on that, but, strictly speaking, we could go back to Aristotle), from its very logical foundations onward, is rooted in a fallacy. I wish to stress here that correcting that fallacy doesn't mean throwing in the bin all of the great accomplishments so far; rather, it means we could enlarge our perspectives beyond what is today believed to be out of our reach.

As an anticipation, Walster talks about an "exception free system", which is a way of showing that this "new" system (in the sense just given) must be simpler, not more complex, than our current systems. This "new" system must still show some kind of intrinsic limit, though, and that is what I am (at a naive level) dis-expressing within the very last sentence in this document. However, let's finish the tour first.

>> Within Complexity, NP problems are said to be "intractable", and it is yet an open question whether or not P = NP.

Going back to Walster: "The use of interval methods provides computational proofs of existence and location of global optima. Computer software implementations use outwardly-rounded interval (cset) arithmetic to guarantee that even rounding errors are bounded in the computations. The results are mathematically rigorous." For example, the Kepler conjecture has been proved after 300 years by means of computers and outwardly-rounded cset arithmetic.

(6) We now can say: NP too is in the tractable domain, though NP is not P, in that they represent two opposite approaches to problem solving: P finds the correct solutions, NP discards the incorrect ones.

Incidentally, if NP happened to be intractable, in real life we would forever be stuck in undecidability, which is apparently not the case (and here I mean apart from any heuristics: human beings don't need heuristics in every-day life, though every-day life is full of undecidable questions, rooted in the very structure of natural language).

>> Di Egidio asks what then is a "number" (or a "set") for the sake?

(7) We now can say (extensionally): a number (or a set) dis-is all we can do with-in and with-out it, i.e. the closed number system exhausts the whole domain of tractability, by tautological-within-self-referential foundation.

>> Golden shows[**] a family of number systems called "polysign numbers", having a natural number of signs.

(8) We now can say: Walster shows the "natural" closure of real numbers (avoiding undefined numbers and bounding computational errors); Golden shows the "natural" interplay between natural and real numbers (avoiding the asymmetries introduced with the imaginary numbers[***]); finally, Di Egidio shows (yet to be dis-proved) the general "natural" meaning of "number", as rooted in the concept of a subject which is the empty set (avoiding undefined reasoning).

>> Di Egidio screams, what's the outcomes then?

(9) We now can say: the outcome is the unified theory of all we can do with it (seriously).

>> Di Egidio cries, come on, you're saying "seriously"!?

(10) The outcomes are still to be inspected and worked out, but... even if only half of what is stated here has some reasonable foundation, then its incidence on everybody's lives could be dramatic, and for the better(!!). For instance, we have here the foundation for a final convergence of natural and logical languages, and, if you cannot imagine how that could only change our lives for the better, then "You" fail.

If you managed to get to this point, I'll be looking forward to your knowledgeable dis-proofs.

(As you may guess, I do really need your feed-back.)

Thank you very much,

Julio

--------------------------
Julio Di Egidio
Analyst/Programmer
http://julio.diegidio.name

[*] More on Walster's Closed Sets in this thread (it's the sci.math group): http://mathforum.org/kb/message.jspa?messageID=6126425&tstart=0

[**] More on Golden's Polysigned Numbers on his web site: http://www.bandtechnology.com/PolySigned/index.html

[***] The asymmetries are not completely avoided, but I guess Mr Golden might not have heard of closed number systems, yet.

===================================================

Anthony A. Aiya-Oba
Prime Two Exclusiveness Conjecture
Posted: Mar 8, 2008 11:33 AM
http://mathforum.org/kb/message.jspa?messageID=6128731

Other than two, there exists no prime, whose sum of its factors equals prime. -Aiya-Oba (Poet/Philosopher).

Thus, P/1 + P/P = 3, is solely, and solely, P = 2.
Q E D

===================================================
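The conjecture just quoted is easy to verify mechanically: a prime p has (positive) divisors 1 and p, so the sum of its factors is p + 1, which for every odd prime is an even number greater than 2, hence composite. A quick Python check (my own sketch):

```python
def is_prime(n):
    # trial division; fine for a small search range
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Primes p whose factor sum 1 + p is itself prime: only p = 2 (2 + 1 = 3).
hits = [p for p in range(2, 10_000) if is_prime(p) and is_prime(p + 1)]
print(hits)  # [2]
```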

Julio Di Egidio
Re: Prime Two Exclusiveness Conjecture
Posted: Mar 8, 2008 12:30 PM
http://mathforum.org/kb/message.jspa?messageID=6128733

2 dis-is Q E D

-LV

===================================================