The computational power of infinite precision measurements
Luis Filipe Barreiros Fonseca
Master Dissertation in
Mathematics and Applications
Júri
Presidente: Prof. Maria Cristina de Sales Viana Serôdio Sernadas
Orientador: Prof. José Félix Gomes da Costa
Vogais: Prof. Isabel Oitavem
June 2020
Acknowledgments
This Master thesis is dedicated to my family, who always supported me in all of my decisions. A special
word to my advisor, professor José Félix Costa, who showed complete availability and dedication towards
my work, and to Instituto Superior Técnico for giving me the opportunity to study what I like the most.
To my colleagues and friends that I met during these 5 years, in particular a special thanks to Maria
Ribeiro and Catarina Costa for helping me whenever I needed. Last but not least, to my friend Dennis
Klein, who made me have a great time in Germany and showed me that no matter how much work one
has to do, it is always possible to find spare time for other activities.
Resumo
Nesta dissertação estudamos o poder computacional de Máquinas de Turing Analógico-Digitais que
usam experiências físicas como oráculos, conseguindo assim fazer medições de números reais e portanto
ultrapassando a barreira de Turing. A experiência física que consideramos é a Smooth Scatter
Experiment, cujo tempo que demora a devolver a resposta depende do tamanho da query word. É
possível limitar o tempo que a máquina de Turing espera pelo oráculo usando uma time schedule.
Apesar de existirem três tipos de protocolos de comunicação entre a parte analógica e a parte
digital, apenas estudamos o protocolo de precisão infinita e apresentamos uma prova alternativa para
o lower bound do poder computacional destas máquinas quando usam um time schedule exponencial.
Além disso, analisamos a influência da natureza do time schedule quando fixamos a posição do vértice
da Scatter Machine em y = 1/2, caracterizando o poder computacional em termos de classes de
complexidade não uniforme e completando a prova actual de quando o time schedule é uma função
computável.
Palavras-chave: Máquinas de Turing; Máquinas Analógico-Digitais; Oráculos físicos; time
schedule; Complexidade não uniforme; Protocolo de precisão infinita; Função total; Função computável;
Função construtível no tempo.
Abstract
In this dissertation we study the power of analogue-digital Turing machines that use physical experiments
as oracles, allowing them to make measurements of real numbers and thus break the Turing barrier.
The physical experiment we consider is the smooth scatter experiment, which takes some time to return
an answer depending on the size of the query word. It is possible to limit the time the machine waits for
the oracle by using a time schedule.
Even though there are three types of communication protocols between the deterministic Turing machine
and the experiment, we only study the infinite precision protocol and, in particular, we present an
alternative proof of the known lower bound for the computational power of such machines when using
an exponential time schedule. Furthermore, we analyze the influence of the nature of the time schedule
when we set the vertex position to 1/2, characterizing these machines in terms of non-uniform complexity
classes and completing the current proof for when the time schedule is a computable function.
Keywords: Turing Machines; Analogue-digital machines; Physical oracles; Time schedule;
Non-uniform Complexity; Infinite precision protocol; Total function; Computable function; Time-constructible
function.
Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Resumo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
1 Introduction and main contributions 1
2 The Scatter Experiment 5
2.1 Time, protocols and advices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Boundary numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Simulating the experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 A study of y = 1/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Computing zk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3 Changing time schedules 21
3.1 Relation between oracles and advices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Alternative caracterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Consequences on the Church-Turing thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 The smooth scatter machine 32
4.1 The Cantor set C3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Upper Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 A new advice function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4 A particular time schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.5 Lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5 Conclusions and further work 53
Bibliography 55
Glossary
AP(F): We denote by AP(F) the class of sets decidable in polynomial time by an analogue-digital
machine using the physical oracle with some time schedule f ∈ F, infinite precision and unknown
vertex at position y = 1/2.
Ω(g): We say that a function f : N → N1 is in Ω(g) iff ∃M ∈ R+ ∃x0 ∈ N1 ∀x ≥ x0 : f(x) > M·g(x).
Θ(g): We say that a function f : N → N1 is in Θ(g) iff ∃M1, M2 ∈ R+ ∃x0 ∈ N1 ∀x ≥ x0 : M1·g(x) > f(x) > M2·g(x).
O(g): We say that a function f : N → N1 is in O(g) iff ∃M ∈ R+ ∃x0 ∈ N1 ∀x ≥ x0 : f(x) < M·g(x).
o(g): We say that a function f : N → N1 is in o(g) iff ∀r ∈ R+ ∃p ∈ N ∀n ≥ p : f(n) < r·g(n).
log: We say that a function f : N → N is in log iff |f| ∈ O(log(n)).
log(a): We say that a function f : N → N1 is in log(a) iff |f| ∈ O(log(a)(n)).
poly: We say that a function f : N → N is in poly iff there exists a polynomial p such
that |f| ∈ O(p(n)).
|z|: It usually denotes how many bits z has; however, sometimes it denotes the
norm of z.
P(S): Sets that can be decided in polynomial time by a deterministic Turing machine
with the help of an oracle S.
prefix: Let z be some word; we say that y is a prefix of z if there exists some word w
(possibly the empty word) such that z = yw.
recursive set: Set with a computable characteristic function.
sparse set: Set such that the number of words with size n is bounded by a polynomial p(n).
tally set: Set such that all the words are of the form 0k for some k ∈ N0.
List of Figures
2.1 Sharp Scatter Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Smooth Scatter Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Existence of lk and rk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Number of zk's when T(k) = k . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Number of zk's when T(k) = k^2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 Number of zk's when T(k) = k^3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.7 Number of zk's when T(k) = k^log(k) . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.8 Number of zk's when T(k) = 2^(k−1) . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.9 Number of zk's when T(k) = 2^k . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.10 Number of zk's when T(k) = 2^(k+1) . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.11 Precision of zk when T(k) = 2^k . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.12 Precision of zk when T(k) = k/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.13 Precision of zk when T(k) = k/3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.14 Precision of zk when T(k) = k^2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.15 Precision of zk when T(k) = k^3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.16 First bits of zk's after the comma . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1 Representations of rk⌈k for T1(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2 Representations of lk⌈k for T1(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3 Representations of rk⌈k for T2(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4 Representations of lk⌈k for T2(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.5 Size of f1(n) when T(k) = k^2 × 2^k . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.6 Size of f2(n) when T(k) = k^2 × 2^k . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.7 Size of f1(n) when T(k) = log(k) × 2^k . . . . . . . . . . . . . . . . . . . . . . . . 43
4.8 Size of f2(n) when T(k) = log(k) × 2^k . . . . . . . . . . . . . . . . . . . . . . . . 43
4.9 Size of f1(k) when T(k) = k^log(k) . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.10 Size of f2(k) when T(k) = k^log(k) . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.11 Size of f1(k) when T(k) = 2^(k/2) . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.12 Size of f2(k) when T(k) = 2^(k/2) . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.13 Size of f1(n) using T1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.14 Size of f2(n) using T1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.15 Size of f1(n) using T2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.16 Size of f2(n) using T2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.17 Size of f1(n) using T3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.18 Size of f2(n) using T3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.19 Size of f1(n) using T4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.20 Size of f2(n) using T4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.21 lk⌈k for 8 ≤ k ≤ 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.22 rk⌈k for 8 ≤ k ≤ 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 1
Introduction and main contributions
The idea of non-computable sets is highly related to our notion of computers; after all, they are defined
by the lack of a computer programme that can decide them. But what if we could give the computer
some help and tell it whether some words belong to some set? By using a non-computable set,
it becomes straightforward to show that such machines could decide more than a usual computer.
We shall call this set of words an oracle and define an oracle Turing machine to be a standard Turing
machine coupled with an oracle. These machines have an additional tape, the query tape, and an
additional special state, the query state. When the machine enters the query state, it reads the word
written in the query tape and performs a transition to either the state YES or the state NO, depending
on whether the query word is an element of the oracle set or not. This consultation of a standard oracle
is done in a single time step.
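As a minimal illustration of this query mechanism, consider the following sketch (ours, not part of the thesis; the set S and the helper make_oracle are invented for the example, and an ordinary Python set can of course only model a computable oracle):

```python
def make_oracle(oracle_set):
    """Return a query function for a set of words.

    The oracle is a black box: given the word on the query tape, it answers
    membership in a single step, moving the machine to YES or NO.
    """
    def query(word):
        return "YES" if word in oracle_set else "NO"
    return query

# Stand-in oracle set; the definition allows any set of words, even a
# non-computable one, but a Python set can only hold a computable one.
S = {"0", "01", "0110"}
query = make_oracle(S)

print(query("01"))    # a single query step answers membership
print(query("111"))
```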
These oracles work as black boxes and, in the first years after their idealization by Alan Turing,
researchers questioned whether there could be some computing model, based on a current physical theory,
that could break the Turing barrier. In 1999 Hava Siegelmann introduced in [1] the Analogue Recurrent
Neural Network (ARNN), which can be considered a first model of the brain and consists of a neural
network with real-valued weights. Siegelmann proved that if at least one non-rational weight is
used, then this model can in fact decide non-computable sets1. This is of particular interest as every
language over a finite alphabet can be codified into a real weight, and we could use some means of
measurement to fetch that same real weight, bit by bit.
Since then the academic community has debated whether this model could in fact be implemented,
as the activation function used for the neurons has a discontinuous derivative. While
some, such as Siegelmann, Younger and Redd, claim to have engineered an implementation of this model,
others, such as Martin Davis in [2, 3], are very critical of any physical implementation of super-Turing systems,
arguing that the reason why models such as the ARNN are able to decide super-Turing sets is that
they are provided with non-computable parameters in the first place (in the case of the ARNN, these
parameters are the real weights). Furthermore, Davis claims that even if a computer could produce a
non-computable sequence of natural numbers, its non-computability could not be verified, which is a
1If only rational weights are used, then the ARNN is equivalent to a usual Turing machine.
problem related to the existence (or lack thereof) of non-computable numbers in nature.
Beggs, Costa and Tucker offer a different computational paradigm in which Turing machines communicate
with some physical device that aims to measure some unknown quantity (see [4–6]). In this model,
the Turing machine uses as an oracle a physical experiment that is based on some theory of Physics,
so we may call it a physical oracle, and, similarly to oracle Turing machines, it is possible to execute
queries over it. However, the fact that these physical experiments are ruled by the laws of Physics implies
that they may take more than any fixed time to return an answer. Take, for example, a balance:
if the two weights are very different, then the heaviest will go down almost immediately, but if the two weights
are nearly equal, the heaviest will take much more time to go down the same distance. Thus, we must
have some way of telling the machine how much time to wait for the physical experiment before timing
it out. This amount of time is given by a time-constructible function that we will
call the time schedule, which must strictly increase with the size of the query word. If the experiment
does not halt before timing out, the machine cannot go to the YES state or the NO state, so we must
add to the Turing machine a third state: the “timeout” state.
This computational model has therefore two main components: the Turing machine, which is the digital
part, and the physical experiment, which is the analogue part. This is why these machines are called
analogue-digital Turing machines, or ADTM ’s. Many experiments have been considered for the analogue
part, such as the broken balance experiment studied in [7] and the Wheatstone bridge discussed
in [8], among others. All of these experiments could be used to achieve the same results, as they are all
equivalent to the ARNN model; however, in this thesis we consider the smooth scatter experiment. We
call an ADTM equipped with this physical oracle a smooth scatter machine (SmSM ) (see [9]).
One topic of interest is the upper and lower bounds for the computational power of a Turing machine
aided by a physical measurement as an oracle when working with time schedules of different natures, a
problem studied in [7]. Three types of communication protocols between the analogue and the digital
part have already been deeply studied ([4, 8, 10]), but in this thesis we mainly focus on the infinite precision
protocol, asking ourselves what the influence of the time schedule on the power of these machines is if
we fix the vertex position. In particular, we study the impact of time schedules by considering as time
schedules increasing total functions (IN ), computable increasing total functions (CI) and increasing
time-constructible functions (TC). The computational bounds were already completely characterized
for the first and third cases (see [11]); however, the result for the upper bound when using computable
increasing functions heavily relies on further assumptions about the nature of the oracle, namely on whether a given
tally set is recursive or not. It remained an open problem to know whether the same result
holds without knowing the recursiveness of such a set (see [12, 13]); yet, in Chapter 3 we will show the result
remains true since, when it comes to deciding recursive sets in polynomial time, recursive tally oracles have
the same power as non-recursive tally oracles (Proposition 8).
Concerning time-constructible time schedules, the upper bound of the computational power of such
machines working with an exponential time schedule, infinite precision and a random vertex position
was shown in [12] to be the class P/log?, however, the current proof for error-prone protocols requires
an explicit use of the expression of the time schedule and it is an open problem to know whether we can
achieve the same upper bound for these protocols without using the said expression. In Chapter 4 we
will show an alternative proof for the case of infinite precision, in which we use no information given by the
formula of the time schedule (Theorem 4.5.2), with the aim of providing new insight into how to attack
the problem for unbounded and finite precision.
Chapter 2
The Scatter Experiment
To potentially boost the power of Turing machines, Alan Turing proposed that these machines could
be aided by an oracle consisting of some physical experiment, therefore capable of giving non-computable
information to the machine. Examples of such oracles can be found in nature. In the last
decades several experiments were considered; however, the intrinsic time necessary to perform these
experiments had never been taken into account. It is therefore our aim to study the power of oracle Turing
machines aided by physical oracles while always keeping in mind the time necessary to consult the
oracle.
We start this chapter by introducing some concepts, such as the smooth scatter experiment and the boundary
numbers, which we will use to devise a Turing machine that, when given an advice function, can
simulate a physical experiment. The aim of the oracle is to retrieve an approximation of a real number
y up to some number of bits by making a measurement; therefore, we can view the Turing machine as
an experimenter measuring a physical magnitude to decide a set. It is believed that any measurement
machine obeying the laws of Physics will always take at least exponential time in the desired precision
of the quantity we want to measure, a statement known as the BCT Conjecture (see [14]); furthermore, one can
see in [15] that the computational power of Turing machines does not depend on which method of measurement
they use, so we chose a very simple physical experiment: the scatter experiment.
This measurement device consists of a cannon, a wedge and two collecting boxes (see [9, 15]). The
wedge contains a vertex at a (possibly non-computable) position y ∈ ]0, 1[ and, whenever the oracle is
called, the cannon is placed at a position z ∈ [0, 1] according to a protocol and shoots a particle towards
the wedge; if the particle is reflected off the wedge above the vertex, then it is detected by the right
collecting box and the machine performs a transition to the “right” state; otherwise it is detected by the
left collecting box and the machine enters the “left” state. These states can be seen as the YES and NO
states1 of an ADTM. We consider two different versions of this experiment: the sharp and the smooth
versions, which consist of a scatter experiment with a sharp or a smooth wedge, depicted in Figures 2.1
and 2.2 respectively.
In both versions we consider y to be the position of the vertex and z the position of the cannon. Re-
1We assume all transitions of the Turing machine to take the same amount of time which is simply defined as a unit of time.
[Figure: cannon aims at a dyadic z ∈ [0, 1] along its limit of traverse; wedge vertex at y ∈ [0, 1]; left and right collecting boxes; d = d′ = 5 m; sample trajectory with v = 10 m/s.]
Figure 2.1: Sharp Scatter Experiment
[Figure: same setup with a smooth wedge; cannon aims at a dyadic z ∈ [0, 1]; vertex V at height y; incidence angle φ on the wedge profile in the (w, x) axes; d = d′ = 5 m; sample trajectory with v = 10 m/s; left and right collecting boxes.]
Figure 2.2: Smooth Scatter Experiment
garding the sharp version, the sharp wedge makes a 45° angle with the cannon line and both collecting
boxes, so the time it takes for a box to collect a particle is constant unless z = y.2 The probability of this
case is zero because the cannon is set to a position according to one of the three protocols discussed in
Section 2.1, and so either z is dyadic or it is chosen uniformly on some interval with positive measure.
We can then set up the parameters v, d and d′ of the machine in such a way that a call to the oracle
takes exactly one unit of time. The idea is to have the vertex position y represent the concatenation of the
words of a possibly non-computable set, so if a machine runs in polynomial time, we can retrieve at
most a polynomial number of words from the oracle; thus the sharp version of the scatter experiment is
equivalent to a usual sparse oracle.
In nature it is not possible to construct a scatter machine with a perfectly sharp wedge, so we give
special attention to the smooth version of the experiment, where we assume some axioms:
• the wedge is given by a function g ∈ Cn with n ≥ 1;
• ∃y ∈]0, 1[ such that g is strictly increasing in [0, y] and strictly decreasing in [y, 1];
• g|[0,1] is symmetric on y;
• ∃m ≤ n such that g(m)(y) 6= 0;
• ∀z ∈ [0, 1] \ {y}, ∃t ∈ N such that a particle fired from position z will be detected by a box in less
than t units of time;
• no energy is lost in the reflection of the particle.
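These axioms can be checked numerically for a concrete wedge. The following sketch is our own; the parabolic profile g(x) = 1/2 − (x − y)² and the vertex y = 1/2 are sample choices, not taken from the text. Here g is C^∞ and g″(y) = −2 ≠ 0, so the axioms hold with m = 2:

```python
# Hypothetical smooth wedge used only to illustrate the axioms:
# g(x) = 1/2 - (x - y)^2, vertex at y = 1/2; g is C-infinity and g''(y) = -2.
y = 0.5

def g(x):
    return 0.5 - (x - y) ** 2

n = 1000
xs = [i / n for i in range(n + 1)]
left = [x for x in xs if x <= y]    # sample points in [0, y]
right = [x for x in xs if x >= y]   # sample points in [y, 1]

# strictly increasing on [0, y] and strictly decreasing on [y, 1]
assert all(g(a) < g(b) for a, b in zip(left, left[1:]))
assert all(g(a) > g(b) for a, b in zip(right, right[1:]))

# symmetric about the vertex: g(y - d) = g(y + d)
assert all(abs(g(y - d) - g(y + d)) < 1e-12 for d in (0.1, 0.25, 0.5))
print("axioms hold for this sample wedge")
```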
As in the sharp version, there exists a vertex at position y ∈ ]0, 1[ and the cannon is
placed at a position z ∈ [0, 1], where the probability of having z = y is zero for analogous reasons. The
vertex position again codifies the concatenation of the words in the usual oracle, and so, in polynomial
time, it is only possible to fetch at most a polynomial number of words of the oracle. This implies that,
just like the sharp scatter experiment, this oracle represents a sparse oracle. However, the big interest
of the smooth scatter machine, as we will see, is that we can retrieve at most a logarithmic number
of words if we require the deterministic Turing machine to run in polynomial time, therefore reducing the
computational power of a SmSM when compared to a ShSM.
As stated, the experimental time for the smooth scatter machine cannot be bounded by any constant
(see also Proposition 1), therefore we need some way of telling the machine how much time it will wait
until it resumes its computation. This can be done via a function which we will call a time schedule.
Definition 1. A time schedule T is an increasing time-constructible function T : N→ N.
Given some word q in the query tape, when the deterministic Turing machine enters the query state
it queries the oracle with q and waits at most T(|q|) units of time for a response. If no answer
is given by the oracle, then the Turing machine resumes its computation in the “timeout” state. Since
the time schedule is a time-constructible function, the Turing machine can count internally how many
units of time have passed since it entered the query state.
2The collecting time would still be constant if the wedge made any other angle with the collecting boxes and the cannon.
2.1 Time, protocols and advices
We can now determine the behaviour of the time function texp : I → R that represents the time the
machine takes to consult the oracle when queried with some word z:
• I = [0, 1];
• ∃! y ∈ I : texp(y) = ∞;
• ∀x ∈ I : texp(x) > 0;
• texp|[0,y[ is strictly increasing;
• texp|]y,1] is strictly decreasing;
• texp is continuous in [0, 1] \ {y}.
These properties can be seen in Figure 2.3. Since a Turing machine only works with natural numbers,
we make the convention that when the machine queries the oracle, if the answer is not “timeout”, then
the machine receives the answer in ⌈texp(z)⌉ units of time, where z is the position of the cannon. We also
assume that setting up the parameters is done instantaneously, so when the oracle is called the cannon
is already placed at some position.3
We now show that, just like the other physical experiments, the scatter experiment takes exponential
time in the desired level of precision.
Proposition 1. (Beggs et al., Proposition 12 in [9]) If we define texp(z, y) as the physical time the experiment takes
with unknown y when setting the cannon at position z, and g(x) ∈ Cn describes the shape of the wedge,
then

A/|z − y|^(n−1) ≤ texp(z, y) ≤ B/|z − y|^(n−1)    (2.1)

for some real numbers A, B > 0 when |y − z| is sufficiently small.
Proof. We will only prove the proposition when the particle is shot from a position z > y; the other case
follows by symmetry. Suppose the shape of the wedge is given by a function g(x) with maximum
value at x = 0. This value lies on the w axis (see Figure 2.2), so y is the distance between the vertex
position and the cannon line. Let φ be the angle of incidence of a particle to the normal of the curve at
the point of impact. If the cannon fires at x coordinate, where x = z − y, we have that tan(φ) = −g′(x).
After being deflected, the particle travels at an angle of 2φ, so the vertical component vx of the velocity
is proportional to sin(2φ) = 2 tan(φ)/(1 + tan²(φ)), and the time taken by the experiment is proportional to

1/vx = (1 + tan²(φ))/tan(φ) = −1/g′(x) − g′(x)    (2.2)

Suppose now that g(x) is n ≥ 2 times continuously differentiable near x = 0. Since g has a maximum
there, we conclude that g′(0) = 0. Take n such that g(n)(0) ≠ 0 and g(k)(0) = 0 for 1 < k < n. We can
approximate the value of g′(x) using its Taylor series:

g′(x) = (g(n)(a)/(n − 1)!) x^(n−1)    (2.3)

for some a between 0 and x. Thus, in some neighbourhood of x = 0, we have constants E, F > 0 such
that

E|x|^(n−1) ≤ −g′(x) ≤ F|x|^(n−1)    (2.4)

By combining these inequalities with equation 2.2 we obtain

texp = −1/g′(x) − g′(x) ≤ 1/(E|x|^(n−1)) + F|x|^(n−1) ≤ 2/(E|x|^(n−1))    (2.5)

for small enough |x|, so we may take B = 2/E. Analogously, texp ≥ −1/g′(x) ≥ 1/(F|x|^(n−1)), so there
exists a constant A = 1/F such that texp ≥ A/|x|^(n−1). Observing that x = z − y, we get the desired
result.
3From this point on, whenever it is implicit that a number 0.z is in the interval [0, 1], we can replace it by z.
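As a sanity check (a worked example of ours, not in the original text), take the parabolic wedge g(x) = c − x², for which n = 2:

```latex
% Worked instance of Proposition 1 for g(x) = c - x^2 (so n = 2):
\begin{align*}
  g'(x) &= -2x, \qquad \tan\varphi = -g'(x) = 2x,\\
  t_{exp} &\propto \frac{1}{v_x}
      = -\frac{1}{g'(x)} - g'(x)
      = \frac{1}{2x} + 2x \sim \frac{1}{2x} \quad (x \to 0^+),
\end{align*}
% so with x = z - y we recover A/|z - y| <= t_exp <= B/|z - y|,
% i.e. inequality (2.1) with exponent n - 1 = 1.
```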
We notice that if we want the machine to run in polynomial time, we cannot allow the clock to tick more
than a polynomial number of times, which is expressed by limiting the query words to logarithmic size.
The results regarding complexity are independent of the constants A, B and n, so from now on, for
simplicity and without loss of generality, we will always assume that A = B = 1 and n = 2, completely
defining texp as

texp(z) = 1/|z − y|.    (2.6)
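Under this convention, a single oracle call can be sketched as follows (our own illustration; the vertex y = 0.3 and the schedule value are arbitrary sample parameters):

```python
import math

def t_exp(z, y):
    """Physical time of the experiment with A = B = 1 and n = 2 (Eq. 2.6)."""
    return math.inf if z == y else 1.0 / abs(z - y)

def oracle_answer(z, y, schedule_value):
    """Answer and time cost under the convention that the machine receives
    the answer after ceil(t_exp(z)) steps, or times out after schedule_value."""
    t = t_exp(z, y)
    if t > schedule_value:
        return "timeout", schedule_value
    side = "right" if z > y else "left"
    return side, math.ceil(t)

# Sample vertex position, chosen here only for illustration:
y = 0.3
print(oracle_answer(0.5, y, schedule_value=100))     # fast: cannon far from the vertex
print(oracle_answer(0.3001, y, schedule_value=100))  # slow: 1/|z - y| ~ 10000, times out
```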
In each version of the machine, given a word q in the query tape, if the machine performs a transition
to the query state it computes z = 0.q and sets the cannon to some position in ]0, 1[ according to one of
three protocols.
Protocol 1. Error-free infinite precision protocol
The cannon position is set to the real number z (using infinite precision).
Protocol 2. Error-prone arbitrary precision protocol
The cannon position is set to a real number in the interval ]z − 2^−|q|, z + 2^−|q|[ using a uniform distribution.
Protocol 3. Error-prone fixed precision protocol
The cannon position is set to a real number in the interval ]z − ξ, z + ξ[ with uniform distribution, where
ξ is a fixed positive real number.
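The three protocols can be sketched as follows (an illustration of ours; the fixed precision ξ = 0.05 is an arbitrary sample value, and the empty-word case follows the convention of Remark 1 below):

```python
import random

def set_cannon(q, protocol, xi=0.05):
    """Cannon position for query word q under each communication protocol."""
    if q == "":
        return None  # by convention (Remark 1) the oracle just answers "timeout"
    z = int(q, 2) / 2 ** len(q)          # z = 0.q read as a dyadic rational
    if protocol == "infinite":
        return z                          # error-free: exactly z
    if protocol == "arbitrary":
        eps = 2 ** -len(q)
        return random.uniform(z - eps, z + eps)  # uniform in ]z - 2^-|q|, z + 2^-|q|[
    if protocol == "fixed":
        return random.uniform(z - xi, z + xi)    # uniform in ]z - xi, z + xi[
    raise ValueError(protocol)

print(set_cannon("101", "infinite"))  # 0.101 in binary, i.e. 0.625
```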
Remark 1. In any of these protocols it is important to analyse the case where the queried word is the
empty word ε. The deterministic Turing machine gives no information to the oracle, so it is logical that
the oracle should not give any information back to the Turing machine. For this reason, we make the
convention that the oracle returns “timeout” immediately after reading the query tape.
These protocols are divided into error-free and error-prone protocols, each with a different condition
for deciding a set.
Definition 2. Let A ⊆ {0, 1}∗. We say that an error-free ADTM M, clocked with runtime τ : N → N,
decides A if, for every w ∈ {0, 1}∗, M accepts w in at most τ(|w|) steps if w ∈ A and M rejects w in at
most τ(|w|) steps if w ∉ A.
Definition 3. Let A ⊆ {0, 1}∗. We say that an error-prone ADTM M, with arbitrary or fixed precision,
clocked with runtime τ : N → N, decides A if there exists γ, 1/2 < γ ≤ 1, such that, for every w ∈ {0, 1}∗,
M accepts w in at most τ(|w|) steps with probability at least γ if w ∈ A and M rejects w in at most τ(|w|)
steps with probability at least γ if w ∉ A.
In short, an ADTM is made of three components: a deterministic Turing machine with a query tape,
the physical experiment of measurement used as the oracle, and the interface. This interface consists
of the chosen protocol of communication between the analogue and the digital part and also of a time
schedule T(k), an increasing time-constructible function that clocks the oracle measurement so that
the Turing machine knows when to resume its computation.
From this point on, we use the notation z⌈n, where z ∈ {0, 1}ω, to indicate the prefix of z of size n. If
|z| < n, then z⌈n denotes the prefix of size n of the word z0ω. Lastly, if z = 0.z′ where z′ ∈ {0, 1}ω, then
we define z⌈n = z′⌈n, since in this thesis we will only be interested in the digits after the comma. For
example,

110101⌈4 = 1101    1011⌈6 = 101100    0.10011⌈4 = 1001

If |z| > n, as in the first example, we say we truncate z, and if |z| < n, as in the second example, we
say we pad z with 0’s.
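The truncate-and-pad notation can be mirrored by a small helper (a sketch of ours, not from the thesis), checked against the three worked examples from the text:

```python
def prefix(z, n):
    """The word z|n: prefix of size n of z, padding with 0's if |z| < n.

    A leading "0." is dropped first, since only digits after the comma matter.
    """
    if z.startswith("0."):
        z = z[2:]
    return (z + "0" * n)[:n]   # pad to length n, then truncate

assert prefix("110101", 4) == "1101"      # truncate
assert prefix("1011", 6) == "101100"      # pad with 0's
assert prefix("0.10011", 4) == "1001"     # digits after the comma only
print("ok")
```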
Although the bounds for the three types of precision were already studied, in this thesis our main
focus will be machines operating with the infinite precision protocol. Readers interested in the
other two communication protocols are referred to papers such as [13, 16, 17]. The fact that the infinite
precision protocol sets the cannon position exactly where we want allows us to retrieve the vertex
position by performing a binary search, i.e. dividing the interval of possible positions of y by 2 in each
step. We call this search method the linear search method, made explicit in Algorithm 1.
Algorithm 1
1: procedure LINEAR SEARCH METHOD
2:     l ← input                          ▷ desired precision
3:     x0 ← 0
4:     x1 ← 1
5:     z ← 0
6:     while x1 − x0 > 2^−l do
7:         z ← (x0 + x1)/2
8:         s ← state after calling the oracle with z⌈l   ▷ it may be needed to pad z with 0’s
9:         if s == “qr” then
10:            x1 ← z
11:        if s == “ql” then
12:            x0 ← z
13:        else                           ▷ timeout
14:            x0 ← z
15:            x1 ← z
    return x0
Remark 2. Assuming the algorithm never returns “timeout”, it allows us to measure the vertex position
bit by bit. If there is a timeout, then it must be that texp(z⌈l) > T(l), and from Equation 2.1 we have
|z⌈l − y| < B/T(l). Regarding complexity, the distance x1 − x0 is divided by 2 in each step, so the algorithm
can do at most l steps. Each of those steps consists of assignments, which take constant time,
and a simulation of the SmSM that takes at most T(l) units of time, since T is an increasing function by
definition of time schedule. Therefore this algorithm runs in O(l·T(l)) units of time.
At this point, the reader might be wondering about the feasibility of such a device with this protocol;
after all, there are at least two big problems when implementing a SmSM using the infinite precision pro-
tocol: the first problem is that it does not seem possible to place a vertex at a non-computable position in
finite time, and the second problem is that, when we want a large enough precision, it will not be possible
to place the cannon exactly at the desired position simply because of the obstacle of atomic structure.
For these reasons we can only conceive this measurement method as a Gedankenexperiment, a term
used to refer to a conceptual experiment.
Definition 4. An advice function f : N → Σ∗ is a total function which assigns a word to each (input
size) n ∈ N. A prefix advice function is an advice function f such that, for any n,m ∈ N, if n < m, then
f(n) is a prefix of f(m).
Definition 5. Let C be a class of sets and F a class of advice functions. We denote by C/F the class
of sets A for which there exist a set B ∈ C and an advice function f ∈ F such that, for every w ∈ Σ∗,
w ∈ A if and only if 〈w, f(|w|)〉 ∈ B.
Definition 6. Let C be a class of sets and F a class of advice functions. We denote by C/F? the class
of sets A for which there exist a set B ∈ C and a prefix advice function f ∈ F such that, for every
w ∈ Σ∗, w ∈ A if and only if 〈w, f(|w|)〉 ∈ B.
These classes that use advice functions are called non-uniform complexity classes and will help
us define the computational power of a SmSM clocked in polynomial time (see Theorems 4.2.1 and
4.2.2, where we use P/log?). When the vertex position can be computed in polynomial time (for example
y = 1/2, which is used in Chapter 3), an advice function that can also be computed in polynomial
time is of no interest to our work, since a machine that witnesses A ∈ P/log? can be modified to
compute f first and only then decide A. This machine is also clocked in polynomial time; therefore, for
this special type of advice function and vertex position, P/log? and P/log2? collapse to P. In this
thesis we consider mostly non-computable advice functions, since the information we want to include in
the advice is usually non-computable.
2.2 Boundary numbers
We define as boundary numbers the two numbers closest to the vertex position y for which a particle
shot from those positions will collide with one of the boxes before timing out. In other words, if l and r are
the two boundary numbers, then the experiment will time out if and only if z ∈ ]l, r[. We are assuming that
the wedge is symmetric with respect to y, so it is expected that l = y − t and r = y + t for some value t that
depends on the time schedule. The assumptions on the time schedule made in Section 2.1 and Figure
2.3 imply the existence of such numbers: for any t1 ∈ {1} ∪ ]texp(0),+∞[ and t2 ∈ {1} ∪ ]texp(1),+∞[
there exist z1 ∈ ]0, y[ and z2 ∈ ]y, 1[ such that texp(z1) = t1 and texp(z2) = t2. From a practical point of
view, a finite approximation of these numbers with size k may be computed simply by querying the oracle
with all the 2^k words of size k and seeing which ones return some answer before timing out.
Definition 7. Let y be the vertex position of a SmSM working with time schedule T . For a fixed query
size k, we define the boundary numbers as the numbers 0 < lk, rk < 1 such that lk < y < rk and

texp(lk) = texp(rk) = T(k).    (2.7)
We call {lk}k∈N and {rk}k∈N the left and the right boundary numbers, respectively. The existence of
such numbers is depicted in Figure 2.3:
Figure 2.3: Existence of lk and rk.
Remark 3. Using Equations 2.6 and 2.7, we can define the boundary numbers as the numbers lk < y <
rk such that

y − lk = 1/T(k) = rk − y.    (2.8)
It is possible to give lk and rk a non-computable character simply by taking y to be non-computable;
however, we can also have non-computable boundary numbers with a computable y if we do not restrict
the definition of time schedule to time constructible functions. This idea is very important for Chapter 3,
where we only have the restriction that time schedules are total increasing functions, thus opening the
possibility of a non-computable time schedule4 and hence also of non-computable boundary
numbers.
4Take for example the Busy Beaver function, which is total and strictly increasing but not computable.
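The closed form of Equation 2.8 can be computed exactly; the sketch below (our naming) uses Python's Fraction type and assumes the time schedule returns integer values:

```python
from fractions import Fraction

def boundary_numbers(k, schedule, y=Fraction(1, 2)):
    """Remark 3 / Eq. 2.8: y - l_k = 1/T(k) = r_k - y."""
    t = Fraction(1, schedule(k))
    return y - t, y + t
```

For instance, with T(k) = 2 × 2^k we get rk = 1/2 + 2^−(k+1), whose first k bits coincide with those of 1/2; this situation is discussed again in Remark 5.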
2.3 Simulating the experiment
In order to compare the power of usual Turing machines with machines using a physical experiment as
an oracle, we need the Turing machine to be able to simulate the experiment on its own. Thus, we must tell
the machine what the boundary numbers are for the word written in the query tape, so that it can compare
the numbers and reach some answer; this can be done via a prefix advice function. In this Section we
show how a Turing machine can simulate the oracle when given some information beforehand.
Remark 4. For any experiment where the cannon is set to a position z, we can have three possible
cases:

1. lk < z < rk
   As mentioned, in this case we have T(k) = texp(rk) = texp(lk) < texp(z) and the experiment will
   time out.

2. z ≤ lk
   Since z ≤ lk < y, we have that texp(z) ≤ texp(lk) = T(k). The experiment will not time out and the
   particle will be detected by the left collecting box.

3. z ≥ rk
   Very similar to the previous case: the particle will be detected by the right collecting box.
Although it now seems easy to simulate the oracle when having access to the boundary numbers, it
is not possible for a Turing machine to have access to the infinite representation of a non-computable
boundary number; hence the machine must be able to simulate the experiment given only prefixes of the
boundary numbers. We now give a more detailed description of how a machine can simulate the oracle
given z and the prefixes of the boundary numbers.
Proposition 2. Take a SmSM with left boundary number lk for some fixed k. If we have access to lkk
then, for any query z with |z| = k we can determine if the SmSM will perform a transition to state ql with
query z, in linear time on k.5
Proof. Take a query z with |z| = k and lkk. Since |z| = |lkk| = k, the comparison of these words can be
done in time O(k). Suppose now that z ≤ lkk. Since lkk ≤ lk, we have that z ≤ lk. From case 2 of
Remark 4 we conclude that the SmSM will resume in state ql. If on the other hand z > lkk, then z must differ
from lk in at least one of the first k bits, which implies that z > lk. From Remark 4 we conclude that the
machine will not resume its computation in ql.
The analogous result for rk, shown below, is not proved in such a direct way because, if z = rkk, we
cannot decide whether z = rk or z < rk, and therefore we cannot distinguish between case 3 and the
other cases of Remark 4. However, if we know additional information about rk, simulating the oracle
becomes feasible.

5We will be interested in simulating the experiment given a list of prefixes of boundary numbers instead of a single boundary
number; therefore, we first need to fetch the boundary numbers and only then determine the outcome of the experiment. Performing these two operations will not take linear time on k.
Definition 8. Given a fixed k ∈ N and a SmSM with rk a right boundary number of order k, we define
σk as:

σk =  1, if rkk = rk
      0, otherwise    (2.9)

Intuitively, we have that σk = 1 if and only if rk can be expressed as a dyadic rational of size k,
possibly padded with zeros.
In order to simulate the experiment, the deterministic Turing machine needs to have access to these
values so it can distinguish between z = rk and z < rk. It is therefore necessary to include them in
the advice function. Similarly to the boundary numbers, these values will also give a non-computable
character to our advice functions since if rk is non-computable then σk is also non-computable.
Proposition 3. Take a SmSM with right boundary number rk for some fixed k. If we have access to
rkk and σk then, for any query z with |z| = k, we can determine if the SmSM will perform a transition to
state qr with query z, in linear time over k.
Proof. Take query z with |z| = k, rkk and σk. Since |z| = |rkk| = k, the comparison of these words
can be done in time O(k). The cases where z < rkk and z > rkk are done similarly to what was done
in the proof of Proposition 2. Suppose now that z = rkk. We have two cases, either σk = 1 or σk = 0.
In the first case, by Definition 8, we have that rk = rkk, therefore z = rk and by Remark 4 the machine
performs a transition to state qr. If on the other hand σk = 0, by Definition 8 rk has a “1” in some position
k′ > k and so rk > rkk, implying that rk > z. By Remark 4, the machine will perform a transition to
either ql or the “timeout” state.
Proposition 4. Take a SmSM with boundary numbers lk and rk, for some fixed k ∈ N. If we have
access to lkk, rkk and σk then, for any query z with |z| = k, we can determine which state the SmSM
will perform a transition to, after executing the SmSM with query z, in linear time over k.
Proof. It follows directly from Remark 4 and Propositions 2 and 3.
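Combining Propositions 2, 3 and 4, one oracle call can be simulated by plain string comparisons, since for bit strings of equal length lexicographic order coincides with the order of the dyadic rationals they denote. The sketch below and its names are ours:

```python
def simulate_call(z, lk_prefix, rk_prefix, sigma_k):
    """Decide the state of the SmSM on query z from (l_k)_k, (r_k)_k and sigma_k."""
    assert len(z) == len(lk_prefix) == len(rk_prefix)
    if z <= lk_prefix:                                  # Prop. 2: z <= l_k, left box
        return "ql"
    if z > rk_prefix or (z == rk_prefix and sigma_k == 1):
        return "qr"                                     # Prop. 3: z >= r_k, right box
    return "timeout"                                    # l_k < z < r_k
```

A full simulation then just fetches (lk)k, (rk)k and σk from the advice before calling `simulate_call`.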
2.4 A study of y = 1/2
The smooth scatter machine can work with any vertex position y ∈ ]0, 1[; however, in Chapter 3 we study
the influence of time schedules on the computational power of the SmSM when we fix y = 1/2 as the vertex
position. In this Section, we state a very important auxiliary Proposition for Chapter 3 (see Proposition
5) that will allow us to decide a tally set that is used as an oracle (see Theorems 3.2.1 and 3.2.3) based
on the existence of a sequence, followed by a study of the relation between the time schedule and the
existence of query words for which the experiment times out. The computation of the elements of this
sequence will be discussed in Section 2.5; in particular, Remark 8 illustrates its existence and the
convergence to 1/2.
Proposition 5. There exists a sequence {zk}k∈N such that |zk| = k and, for large enough k, the smooth
version operating with infinite precision will take experimental time 2k < texp(zk, 1/2) < 2k + 1.
Proof. By replacing y = 1/2 in Equation 2.6 we get

texp(zk) = 1/(zk − 1/2).    (2.10)

In order to compare zk with 1/2 we compare them bit by bit and count in how many bits after the
comma they coincide, but since we are considering the finite representation of dyadic rationals6, we
cannot approximate zk from below, or else they cannot have any digit in common after the comma.
In order to have 2k < texp(zk) < 2k + 1, it is now clear that it suffices to have

2k < 1/(zk − 1/2) < 2k + 1  ⇔  1/(2k) + 1/2 > zk > 1/(2k + 1) + 1/2.

The next step is to notice that the difference between the bounds is given by

(1/(2k) + 1/2) − (1/(2k + 1) + 1/2) = 1/(4k² + 2k)    (2.11)

and that for large enough k we have 4k² + 2k < 2 × 2^k, which implies that 1/(4k² + 2k) > 1/(2 × 2^k). By choosing

zk = (1/(2k) + 1/2)k − 2^−k    (2.12)

we get the desired outcome.
Propositions 6 and 7 study the existence of possible query words for which the experiment times out,
according to the chosen time schedule.
Proposition 6. If y = 1/2 and there is no z ∈ ]lk, rk[ such that |z| = k and lkk < zk < rkk, then the
time schedule grows at least at an exponential rate.

Proof. Due to the symmetry of the wedge, we assume without loss of generality that z ≥ 1/2. By
Equations 2.10 and 2.7 we conclude that

1/(rk − 1/2) = T(k).    (2.13)

If we take z = 1/2 (which is always contained in ]lk, rk[), then rkk = 1/2. In other words, rk coincides
with 1/2 in the first k bits, which can be expressed as rk − 1/2 < 2^−(k+1). Inserting this inequality into Equation
2.13, we obtain T(k) > 2^(k+1).
Proposition 7. If the time schedule is at most subexponential, then for large enough k ∈ N there exists
z ∈ ]lk, rk[ \ {1/2} such that |z| = k.

Proof. We can rewrite Equation 2.13 as

rk − 1/2 = 1/T(k).    (2.14)
6For example, y = 0.1 instead of y = 0.0111...
Since T(k) is at most subexponential by hypothesis, there exists k′ such that for k > k′ we have
T(k) < 2^k. By replacing this expression in Equation 2.14, we get rk − 1/2 > 2^−k. Therefore, a number z
that fulfills the conditions of the proposition is

z = rkk − 2^−k.    (2.15)
Remark 5. In order to better understand Propositions 6 and 7, we present graphs that relate k with the
number of z ∈ ]lk, rk[ such that |z| = k.7 As mentioned, these can be seen as the z's such that, if written in
the query tape, the machine will perform a transition to the "timeout" state.
Figure 2.4: Number of zk's when T(k) = k.
Figure 2.5: Number of zk's when T(k) = k².
Figure 2.6: Number of zk's when T(k) = k³.
Figure 2.7: Number of zk's when T(k) = k^log(k).
Figure 2.8: Number of zk's when T(k) = 2^(k−1).
Figure 2.9: Number of zk's when T(k) = 2^k.
7We chose to present the results for k ≥ 3 simply to avoid having boundary numbers outside of the interval [0, 1].
Figure 2.10: Number of zk's when T(k) = 2^(k+1).
As expected, the number of zk's decreases as T(k) grows faster, which is explained by the fact that, for
the same k, the boundary numbers get closer to 1/2. When T(k) is subexponential, we can observe
not only that there are zk's such that zk ∈ ]lk, rk[, but also that the amount of such zk's seems to grow exponentially.
On the other hand, when T(k) is exponential, the number of zk's is constant, and for any time schedule
T′(k) such that there exists k0 ∈ N with T′(k) ≥ 2 × 2^k for all k > k0, we have rkk = 0.10^(k−1)
and lkk = 0.01^(k−1). The left boundary numbers increase towards 1/2 but can never be equal, because
the first digit after the comma is always a 0; however, the prefixes of the right boundary numbers are
constant and equal to 1/2. This means that we cannot approximate y = 1/2 from above, as any zk ∈
[1/2, 1] will yield the same answer from the oracle, so an experiment in these conditions is equivalent to
a threshold comparison experiment.
Remark 6. Although we are studying the concrete case where the vertex is placed at position y = 1/2,
it is important to notice that the results of Propositions 6 and 7 can be generalized for any vertex position
y ∈]0, 1[.
2.5 Computing zk
The sequence zk obtained in Proposition 5 will prove to be very important to show some of the main
results of this thesis, which are presented in Chapter 3; namely, we will need this sequence to decide
some tally oracle S with recourse to the physical experiment (see Theorems 3.2.1 and 3.2.3). However,
if we cannot compute zk in polynomial time for every k ∈ N, then this sequence will be of no use, as there
is no Turing machine with running time bounded by a polynomial that can compute it. In
this section, we will prove that we can indeed compute any zk in polynomial time. We start by listing
some operations that can be done in polynomial time:
• The sum, subtraction, multiplication and division of two real numbers with size n can all be
done in O(n²);
• Comparing two dyadic numbers is done in linear time on the size of the number with fewer bits;
• The function Floor is computed in O(n);
• Given a dyadic number, obtaining the list of its bits;
• Given a list of bits l = l1, l2, ..., ln, obtaining the number l1l2...ln.
Remark 7. The above operations allow us to perform other operations in polynomial time; in particular, the
size of a binary word is the length of the list that contains its bits. To add bits at the beginning or end of
a number, we can transform it into a list, add the element and turn it back into a number.
We now present the algorithms used to compute zk.
Algorithm 2
1: procedure EXTRACTDIGITS
2:   x, k ← input                   ▷ x ∈ Q and k ∈ N
3:   if x < 0 or x ≥ 1 then return "x not valid"
4:   w ← x × 2^k                    ▷ shift the comma k spaces to the right
5:   w ← ⌊w⌋                        ▷ we are no longer interested in bits after the comma
6:   l ← list of the bits of w
7:   while length[l] < k do
8:     l ← 0#l                      ▷ pad the list with 0's in case x starts with at least two 0's
9:   return l
Algorithm 2 is given a number x and some natural number k. First it decides if 0 ≤ x < 1, returning an
error message if this is not the case. Then it takes x and multiplies it by 2^k, shifting the comma exactly k
places. This is not enough to get the first k bits because, if x has 0's immediately after the comma,
these 0's will not be part of the new number. To solve this problem, we turn the number into a list and
pad the list with 0's at the beginning until it has size k. The output is the list l that represents the first k
bits of x, not counting the zero before the comma. Every operation performed by the algorithm is
one of the operations described above as polynomial; therefore, this algorithm also runs in polynomial
time.
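A direct Python rendering of Algorithm 2 with exact rationals (the function name is ours):

```python
from fractions import Fraction

def extract_digits(x, k):
    """Algorithm 2: first k bits of x in [0, 1) after the comma, as a list."""
    if x < 0 or x >= 1:
        raise ValueError("x not valid")
    w = int(x * 2 ** k)                     # shift the comma k places and floor
    bits = [int(b) for b in bin(w)[2:]] if w > 0 else []
    return [0] * (k - len(bits)) + bits     # pad with leading 0's
```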
Algorithm 3
1: procedure COMPUTINGZ
2:   k ← input
3:   a ← 1/(2k) + 1/2
4:   l ← 1# Extractdigits[a, k]      ▷ we need the 1 to preserve the 0's
5:   z ← concatenation of elements of l
6:   z ← (z × 2^−k) − 1 − 2^−k
7:   return z
The value of a is a rational number, hence it has a periodic binary expansion, so it is possible to
compute all the bits just from the information of the sequence of bits that repeats. The input k
bounds the size of such a sequence, therefore it takes polynomial time to compute a. The algorithm then
uses Algorithm 2 to get the list of the first k bits of a. It is important to notice that when k = 1 we have
a = 1 and line 4 will return an error message, but this does not matter, as we only need the algorithm
to work for large enough k. Algorithm 3 then obtains zk by concatenating those bits with a 1 at the
beginning, multiplying by 2^−k and subtracting 1 to turn it into 0.zk, and finally subtracting 2^−k to obtain
the expression in the proof of Proposition 5. Concatenation, multiplication and subtraction are done in
polynomial time and k is arbitrary; hence, any zk can be computed in polynomial time.
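Putting the two algorithms together, the sketch below (our naming, exact arithmetic with Fraction) computes zk as a rational; building 0.b1...bk directly and then subtracting 2^−k is equivalent to prepending the 1, multiplying by 2^−k and subtracting 1 and 2^−k as in Algorithm 3. The bound of Proposition 5 can then be checked numerically.

```python
from fractions import Fraction

def extract_digits(x, k):
    """Algorithm 2: first k bits of x in [0, 1) after the comma."""
    w = int(x * 2 ** k)
    bits = [int(b) for b in bin(w)[2:]] if w > 0 else []
    return [0] * (k - len(bits)) + bits

def compute_z(k):
    """Algorithm 3: z_k = (1/(2k) + 1/2)_k - 2^-k, as a rational in ]0, 1[."""
    a = Fraction(1, 2 * k) + Fraction(1, 2)
    bits = extract_digits(a, k)
    truncated = Fraction(int("".join(map(str, bits)), 2), 2 ** k)  # 0.b1...bk
    return truncated - Fraction(1, 2 ** k)
```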
Remark 8. It is not intuitive that there is a sequence that produces linearly increasing values of texp(zk),
because texp increases exponentially in the number of bits of precision of the queries. The only expla-
nation is that zk converges very slowly to 1/2. We can see in Figure 2.11 that, when using T(k) = 2k,
z2000 only coincides with 1/2 in the first 11 bits:
Figure 2.11: Precision of zk when T(k) = 2k.
This type of approximation of the vertex position can be used as an alternative to the linear search
algorithm (see Algorithm 1); however, it is a much slower approximation, since it is logarithmic, unlike the
linear search algorithm, which divides the interval of possible positions of y by 2 in each step, giving it a linear
approximation. Comparing both methods, it is important to note that, in order to obtain a precision of
at least 2^−l, the linear search algorithm needs O(l) steps, each one requiring O(2^l) units of time to
perform, while the method of computing the sequence {zk}k∈N requires O(2^l) steps, each one requiring
O(l²) units of time to perform.
As expected, when we use a (still linear) time schedule such as T(k) = k/2 or even T(k) = k/3, the
boundary numbers are much further from 1/2 than in the previous case, because there is less time to
perform the experiment, which results in a much worse approximation of zk, as illustrated by Figures
2.12 and 2.13 respectively:
Figure 2.12: Precision of zk when T(k) = k/2.
Figure 2.13: Precision of zk when T(k) = k/3.
The next natural question one would ask is how this rate of approximation would change if we used
some polynomial time schedule instead of a linear one. Figures 2.14 and 2.15 illustrate the
approximation of zk to 1/2 when the time schedules are T(k) = k² and T(k) = k³ respectively.
Figure 2.14: Precision of zk when T(k) = k².
Figure 2.15: Precision of zk when T(k) = k³.
Unlike the previous examples, the time schedules now grow much faster than T(k) = 2k, and so the
boundary numbers are much closer to 1/2, which results in a better approximation of zk. Further-
more, by analysing these last two figures, one could conjecture that when T(k) = k^d, zk coincides with
1/2 in approximately d times more bits than the zk's obtained with a linear time schedule. Indeed, we
can see that this is true for the right boundary numbers: let T1(k) = k and T2(k) = k^d. If r1k and r2k are
their respective right boundary numbers, then by Equations 2.10 and 2.7 we have

1/(r2k − 1/2) = T2(k) = k^d = (T1(k))^d = (1/(r1k − 1/2))^d  ⇔  r2k − 1/2 = (r1k − 1/2)^d    (2.16)

So, if r1k coincides with 1/2 in N bits, then r1k − 1/2 < 2^−N and by Equation 2.16 we get r2k − 1/2 <
2^−dN, i.e., r2k coincides with 1/2 in at least dN bits.
An analogous argument shows that if (l1k)N = 0.01^(N−1) then (l2k)dN = 0.01^(dN−1).
Even though it is clearly faster to use larger time schedules, any polynomial time schedule still needs
an exponential increase in k in order for zk to gain a new bit of 1/2.
For completeness, we provide some of the first few elements of a sequence {zk}k∈N satisfying the
properties of Proposition 5. We show the bits after the comma of zk for 5 ≤ k ≤ 21; so, for example, we
have z5 = 0.10010 and z6 = 0.100100.
Figure 2.16: First bits of zk ’s after the comma
Chapter 3
Changing time schedules
Although we have defined time schedules as time-constructible functions so that the ADTM can inter-
nally count the time, this is somewhat restrictive. In this section we will see how the power of scatter
machines changes when we fix the vertex position y to be 1/2 and allow the experiment to use a possibly
non-computable function as a time schedule, characterizing this power with non-uniform complexity
classes.
3.1 Relation between oracles and advices
In order to relate oracles with non-uniform complexity classes, we need an equivalence between
the advices used and the types of oracles that can grant the machine the same kind of power. We thus
present Theorems 3.1.1 and 3.1.2, which can be seen in [18]: the proof we present for Theorem 3.1.1
was adapted from the one presented in [18], and the proof of Theorem 3.1.2 was left as an exercise
to the reader. We then introduce the class of sets characterizing the power of ADTM's working with
infinite precision and vertex at position y = 1/2.
Theorem 3.1.1. (Balcázar et al. 88 in [18]) Polynomial advice Turing machines operating in polynomial
time are equivalent to oracle Turing machines bounded in polynomial time using a sparse oracle. In
other words,

P/poly = ⋃_{S sparse} P(S).
Proof. Suppose A ∈ P/poly and let f be the advice function and B ∈ P the set that witness it.
Define a set S as follows:

S = {〈0^n, x〉 ∈ {0, 1}∗ : x is a prefix of f(n)}.

We use 3 bits to codify every symbol, so every word in S must have a size that is a multiple of 3. For any
given m ∈ N, a word in S with size m, if it exists, must be of the form 〈0^n, x〉. There are at most (m − 3)/3
possibilities for n (obtained when x is the empty word) and, for each of them, there are at most (m − 3)/3
different prefixes of f(n) (obtained when n = 0). The total number of words of size m in S is therefore
bounded by O(m²), concluding that S is sparse. To show that A ∈ P(S), consider Algorithm 4.
Algorithm 4
1: procedure DECIDING A
2:   x ← input
3:   n ← |x|
4:   z ← ε
5:   loop:
6:     if 〈0^n, z0〉 ∈ S then
7:       z ← z0
8:     if 〈0^n, z1〉 ∈ S then
9:       z ← z1
10:    else
11:      exit loop
12:  end loop
13:  if 〈x, z〉 ∈ B then
14:    Accept
15:  else
16:    Reject
The first thing to notice is that this algorithm runs in polynomial time: for each n, the guards of the "if"
statements are built in time O(n + |f(n)|), and S is an oracle, so these guards are verified in constant time.
Furthermore, B ∈ P, so the last "if" also takes polynomial time, and the loop will take at most O(|f(n)|)
iterations, since no word with size bigger than 3(n + |f(n)| + 1) can belong to S. This machine computes
f(n) digit by digit, beginning with z = ε and adding either 0 or 1 depending on which digit appears next
in f(n). Since f(n) is finite, when z = f(n) the machine exits the loop, as it cannot possibly add digits
and still have a prefix of f(n). When this happens, the machine accepts x if and only if 〈x, f(n)〉 ∈ B.
Since A ∈ P/poly by hypothesis, it follows that this deterministic machine decides A.
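The prefix-recovery loop of Algorithm 4 can be sketched as follows, with the sparse oracle modelled as a membership predicate; the names and the toy advice in the test are ours:

```python
def recover_advice(n, in_S):
    """Algorithm 4's loop: rebuild f(n) bit by bit using oracle queries <0^n, z>."""
    z = ""
    while True:
        if in_S(n, z + "0"):
            z += "0"
        elif in_S(n, z + "1"):
            z += "1"
        else:
            return z          # z = f(n): no extension is a prefix of f(n)
```

The machine then decides x by checking whether 〈x, z〉 ∈ B.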
Conversely, assume that A ∈ P(S) for some sparse set S, and let p(n) be the polynomial that bounds
the running time of the machine M deciding A with oracle S. We use an advice function such that f(n) is
the encoding1 of the concatenation of the words in S with size up to p(n), ordered lexicographically.
This encoding has polynomial size due to the fact that, for each size, all the words have the same
length and the number of words of that size is bounded by some polynomial. To decide A, we consider a
deterministic machine M′ that receives the word 〈x, f(|x|)〉 as input, separates x from f(|x|) using the
symbol #, and simulates the computation of M with input x with the help of oracle S. Whenever M reaches
the query state, M′ uses the set encoded in f(|x|) to answer the queries. Therefore A ∈ P/poly.
Theorem 3.1.2. (Balcázar et al. 88 in [18]) Polynomial advice Turing machines operating in polynomial
time are equivalent to oracle Turing machines bounded in polynomial time using a tally oracle. In other
words,

P/poly = ⋃_{S tally} P(S).
Proof. Let A ∈ P/poly, witnessed by f ∈ poly and B ∈ P. Let p(n) = n^k be a polynomial such that
|f(n)| ≤ p(n), and let M be the deterministic Turing machine that decides B in time n^k′. Consider the tally set
T = T′ ∪ T″ where

T′ = {0^n : n = 〈i, j, 3〉 and f(i) has at least j bits}
T″ = {0^n : n = 〈i, j, 4〉 and f(i) has at least j bits, with the jth bit a 0}

1We assume the encoding of a word with size n has a size polynomial on n.
To show that A ∈ P(T), we devise a deterministic Turing machine that runs according to Algorithm 5.
We build z bit by bit in such a way that, at the end of the "for each" loop, we have z = f(|x|) and the Turing
Algorithm 5
1: procedure REDUCTION TO A TALLY SET
2:   x ← input
3:   z ← ε
4:   for each j between 0 and p(|x|) do
5:     n ← 〈|x|, j, 3〉
6:     s1 ← answer of T when queried with 0^n
7:     if s1 = "yes" then
8:       m ← 〈|x|, j, 4〉
9:       s2 ← answer of T when queried with 0^m
10:      if s2 = "yes" then
11:        z ← z0
12:      else
13:        z ← z1
14:  end for
15:  if 〈x, z〉 ∈ B then
16:    Accept
17:  else
18:    Reject
machine accepts x if and only if 〈x, f(|x|)〉 ∈ B, which is evaluated in time O(n^k′) by hypothesis. Each call
to the oracle is done in constant time, and j is bounded by p(|x|), so m and n have size polynomial in
|x|; hence we also have |z| ∈ O(p(|x|)). We thus conclude that |〈x, z〉| ∈ O(p(|x|)); therefore, A can be
decided in polynomial time in the size of the input using oracle T.
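To make the sets T′ and T″ concrete, the sketch below fixes a nested Cantor pairing as the tupling function 〈i, j, t〉 — an assumption on our part, since any standard pairing works — and reconstructs f(i) exactly as the loop of Algorithm 5 does:

```python
def cantor(a, b):
    # standard Cantor pairing, injective on N x N
    return (a + b) * (a + b + 1) // 2 + b

def triple(i, j, t):
    return cantor(cantor(i, j), t)

def make_tally(f):
    """Encode the advice f as T' union T'' (the word 0^n is represented by n)."""
    t = set()
    for i, w in f.items():
        for j in range(1, len(w) + 1):
            t.add(triple(i, j, 3))           # f(i) has at least j bits
            if w[j - 1] == "0":
                t.add(triple(i, j, 4))       # ... and the jth bit is a 0
    return t

def recover_advice(i, tally):
    """Algorithm 5's loop: rebuild f(i) with two tally queries per bit."""
    z, j = "", 1
    while triple(i, j, 3) in tally:
        z += "0" if triple(i, j, 4) in tally else "1"
        j += 1
    return z
```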
Conversely, let A ∈ P(T) for some tally set T, witnessed by the deterministic Turing machine M. This
oracle is also a sparse oracle, since it has at most one word of each size, so a direct application of
Theorem 3.1.1 allows us to conclude that A ∈ P/poly.
Definition 9. Let f be an increasing total function. We denote by AP(f) the class of sets decidable in
polynomial time by an analogue-digital scatter machine using the physical oracle with time schedule f,
infinite precision and unknown y = 1/2. If F is a class of increasing functions, then we define
AP(F) = ⋃_{f∈F} AP(f).
By choosing the vertex position to be y = 1/2, if we assume all the query words of size m have
probability 1/2^m of appearing on the query tape, the scatter experiment can be seen as an approx-
imation of a fair coin toss. Let z be the query word: if 0.z < 1/2 then the outcome is "heads" and if
0.z > 1/2 then the outcome is "tails". If on the other hand we have 0.z = 1/2, the outcome could be
seen as "the coin landed on its side". However, this event always has probability 1/2^m, as z = 10^(m−1)
is the only option for the query word. This probability clearly goes to 0 as m increases, so we
can expect this event to occur with negligible probability for query words with large enough size, just as
in real life we do not expect a coin to land on its side. One could argue that, even though 1/2^m goes
to 0, it is still a positive probability and therefore it is always possible that this event occurs; but despite
this fact, if we denote by P(Am) the probability of z = 10^(m−1), then we have
∑_{m=1}^∞ P(Am) = ∑_{m=1}^∞ 1/2^m = 1 < ∞    (3.1)
A direct application of the first Borel–Cantelli lemma (see [19]) shows that, even if we call the oracle
infinitely many times with at most one word of each size, it is certain that we will have 0.z = y only a
finite number of times. The same result holds even if we allow the deterministic Turing machine to query
the oracle at most a finite number of times for each m: if the number of possible queries for any size is
upper bounded by a constant C and P(Am) denotes the probability of the query word being of the form
10^s in the mth call to the oracle, then

∑_{m=1}^∞ P(Am) ≤ C ∑_{m=1}^∞ 1/2^m = C < ∞.    (3.2)
The main difference between this experiment and the fair coin toss is that, unlike the latter, the smooth
scatter machine takes a probabilistic time to return an answer, since this intrinsic time grows as z ap-
proaches y (see Equation 2.1). The fact that it is possible to use an advice function that is not com-
putable in polynomial time grants a boost in the speed of these deterministic Turing machines, and the
non-computable advice functions allow us to leave the domain of classic computation, thus explaining
how we can decide a superset of P in polynomial time (see Theorems 3.2.1 and 3.2.4).
3.2 Alternative characterizations
Recalling Chapter 1, we notice that TC ⊆ CI ⊆ IN, so it is also true that AP(TC) ⊆ AP(CI) ⊆
AP(IN). In this section we describe these three classes in terms of non-uniform complexity classes.
In particular, the current proof of the lower bound of AP(CI) shown in [11] can only be done with
the restriction that the tally oracle used is recursive, leaving it an open problem whether or not the
same characterization can be achieved for any tally oracle. In Proposition 8 we show that any recursive
set decidable by a Turing machine running in polynomial time using a tally oracle T can also be decided
by another Turing machine using a recursive tally oracle T′, therefore solving the open problem.
It is important to notice that, although y = 1/2 is computable, the non-uniform classes P/poly, P/log?
and P/log2? do not collapse to P, because we can have non-computable time schedules, so the
boundary numbers still might not be computable (recall Remark 3), implying that the corresponding ad-
vice functions can also be non-computable.
Theorem 3.2.1. AP(IN) = P/poly.
Proof. Assume first that A ∈ P/poly for some set A. Then, by definition, A is decidable by a polynomial
advice Turing machine operating in polynomial time. By Theorem 3.1.2, we may then assume
that A is decidable in polynomial time by a deterministic Turing machine M with some tally oracle S.
Consider now an analogue-digital machine operating with infinite precision, unknown y = 1/2 and time
schedule

T(k) =  2k + 1, if 0^k ∈ S
        2k,     if 0^k ∉ S

Clearly T(k) is total and increasing. Furthermore, by Proposition 5 from Chapter 2 there exists a se-
quence zk such that |zk| = k and 2k < texp(zk, 1/2) < 2k + 1 for large enough k, which means that,
when queried with zk, the oracle either answers or times out within 2k + 1 units of time. We can decide whether
0^k ∈ S by consulting the oracle with zk: if "yes" or "no" is returned, then the time the oracle took to give
the answer was less than the time schedule, in other words, texp(zk, 1/2) < T(k), so
T(k) = 2k + 1 and 0^k ∈ S. Otherwise we have texp(zk, 1/2) > T(k), so T(k) = 2k and 0^k ∉ S.
For small values of k we may assume the machine performs a transition to the "no" state. This machine
decides A in polynomial time, since M runs in polynomial time by hypothesis and the time it takes to run
the experiment is also bounded by a polynomial (texp ≤ 2k + 1).
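The first direction of the proof can be illustrated numerically. We model only what the digital machine observes, namely whether the oracle answers before the schedule expires; by Proposition 5, texp(zk, 1/2) may be taken as any value strictly between 2k and 2k + 1. The function names are ours:

```python
def schedule_for(S):
    """The (possibly non-computable) time schedule built from the tally set S."""
    return lambda k: 2 * k + 1 if k in S else 2 * k

def decide_membership(k, T, t_exp=lambda k: 2 * k + 0.5):
    """0^k in S  iff  the oracle answers within T(k), i.e. t_exp(z_k) < T(k)."""
    return t_exp(k) < T(k)
```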
Conversely, let A ∈ AP (IN). Then there exists a deterministic analogue-digital scatter machine M
working with a time schedule T (k) ∈ IN that decides A in polynomial time with vertex position 1/2 and
infinite precision. Since for any input with size n the time is bounded by a polynomial p(n) and the size of
the word in the query tape is at most the number of transitions, its size is also bounded by p(n). Consider
the advice function f such that

f(n) = l1↾1#r1↾1#σ1#l2↾2#r2↾2#σ2#...#lp(n)↾p(n)#rp(n)↾p(n)#σp(n)

where lk↾k and rk↾k denote the first k bits of the boundary numbers lk and rk defined in Expression 2.7
and the σk are the bits given by Expression 2.9. This function is clearly in poly since f(n) has size O(p(n)²) and it can
be used to extract the respective boundary numbers for every query word with size at most p(n) which
allows the machine to simulate a call to the oracle in polynomial time in the size of the query word z
(recall Proposition 4). The deterministic machine that, given an input, emulates M except when it calls
the oracle, simulating the experiment as described in Section 2.3 instead, can decide A in polynomial
time, since M runs in polynomial time and comparisons between two numbers are polynomial in the
size of the smaller one. It follows that A ∈ P/poly.
As stated at the beginning of this section, the same type of proof for AP(CI) is not yet possible,
since if a set A is decidable by a Turing machine using an arbitrary tally set S as oracle, we cannot
assume at once that S is recursive. To solve the open problem left in [11] and complete the proof of the
lower bound of AP(CI) (see Theorem 3.2.3), we will show that, given some recursive set A decidable
by a deterministic Turing machine bounded in polynomial time using a non-recursive tally set as oracle,
it is possible to obtain a recursive tally set for which there exists another deterministic Turing machine
working in polynomial time with that recursive oracle that decides the same set A. This is enough,
since we can then simply replace an arbitrary tally oracle T with some recursive tally set T′. To prove this
fact, first we need to order finite tally sets. More specifically, we want to list them so that, given some
finite tally set, we can perform a test on all of its subsets. Let S = {0^τ1, 0^τ2, ...} be a tally set with τm < τn
for m < n. We can order the finite subsets of S using the following enumeration e:
• e(∅) = 0;
• e({ε}) = 1;
• e({0^k1, 0^k2, ..., 0^kn}) = ∑_{i=1}^{n} 2^{ki}.
It is easy to show that this enumeration is actually a bijection between N and the finite tally sets, since we
can simply represent such a set as a list of the form k1, ..., kn. Therefore, if we want to perform some
test on all subsets of some tally set S up to a fixed size n, we compute e⁻¹(0), e⁻¹(1), e⁻¹(2), ..., e⁻¹(2^n),
disregarding tally sets that are not subsets of S. The enumeration is also monotonic in the sense that
if A ⊂ B then e(A) < e(B). In particular, if S is itself finite, then ordering its subsets can be done in
finite time, as we only need to compute at most e(S) subsets. The overall complexity of ordering the
subsets of S is not relevant, as we only intend to use this ordering in the proof of Proposition 8 to show
the recursiveness of a tally set.
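The enumeration e can be sketched directly (a minimal illustration; we represent the tally word 0^k by the integer k, so that ε = 0^0 contributes 2^0 = 1):

```python
def e(tally_set):
    """e({0^k1, ..., 0^kn}) = sum of 2^ki; e(empty set) = 0, e({eps}) = 1."""
    return sum(2 ** k for k in tally_set)

def e_inv(m):
    """Inverse enumeration: the positions of the 1-bits of m are the exponents."""
    return {k for k in range(m.bit_length()) if (m >> k) & 1}

assert e(set()) == 0 and e({0}) == 1          # e(empty) = 0, e({eps}) = 1
assert all(e_inv(e(s)) == s for s in ({1, 2}, {0, 3, 5}))
assert e({1, 2}) < e({1, 2, 4})               # monotonic: A strict subset of B => e(A) < e(B)
```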
We are now in a position to show the result that solves our open problem: given a scatter machine
running in polynomial time and using an advice function f ∈ poly that decides a recursive set A, to find
a polynomial time deterministic Turing machine using a recursive tally set T′ as oracle that can also
decide A.
Proposition 8. If A ∈ P/poly ∩REC, then A ∈ P(T ′) for some recursive tally set T ′.
Proof. Let A ∈ P/poly ∩ REC and M be the Turing machine that witnesses the fact that A ∈ REC. Fur-
thermore, since A ∈ P/poly and, by Proposition 3.1.2, P/poly = ⋃_{T∈tally} P(T), we conclude
that A ∈ P(T) for some tally set T. We will now construct a recursive tally set T′ such that A ∈ P(T′).
Let M′ be the machine, bounded in polynomial time by p(n) = n^k, that decides A with oracle T ⊆ 0^⋆.
We assume that only 0's are written in the query tape and notice that the biggest word that might be
queried to the oracle is 0^(|w|^k), where w is the input, since the running time is bounded by n^k and the
machine can write at most one “zero” per transition.
Consider the function

f(n) = 1 + ∑_{r=1}^{n−1} (r^k + 1). (3.3)
We construct a recursive tally set T′ in the following way: for each i ∈ N we choose the least li such
that M′ working with oracle Sli agrees with M on every input of size i.² Such an index li must exist
because, in the worst case scenario, we can stop testing tally sets when Sli contains the
(finite number of) words in T that are needed for answering queries when the input has size i. We then
define
2The sets need not agree on inputs of different sizes, since M′′ is built to consult a different Sl for each input size.
T′i = {0^(f(i)) y : y ∈ Sli} and T′ = ⋃_{i=0}^{∞} T′i.
We want to show that T′ is a recursive tally set and that A ∈ P(T′). T′ is clearly a tally set because, for every
i ∈ N, Sli is a tally set and so T′i is also a tally set. To show that A ∈ P(T′), we devise a
machine M′′ according to Algorithm 6.
Algorithm 6 Behaviour of M′′
1: procedure BEHAVIOUR OF M′′
2:   x ← input
3:   n ← |x|
4:   s′ ← start state of M′ on input x    ▷ state of M′
5:   s′′ ← s′    ▷ state of M′′
6:   loop:
7:     if s′ = halting state then return output of M′
8:     if s′ ≠ query state then
9:       s′ ← next state of M′    ▷ continue the computation
10:      s′′ ← s′    ▷ updating the state of M′′
11:    else
12:      y ← word in query tape of M′
13:      p ← f(n)    ▷ number of 0's to pad
14:      w ← 0^p · y    ▷ padding the word
15:      s′′ ← result of asking w to T′    ▷ simulating the oracle on M′′
16:      s′ ← s′′    ▷ continue the computation as M′
17:  end loop
M′′ simulates the computation of M′; however, whenever M′ enters the query state, M′′ pads the
query word with f(|x|) 0's and performs its own call to T′, proceeding as if M′ had received the same
answer. The only detail that could prevent M′′ from running in polynomial time would be if the padding of
the query words could not be done in a polynomial number of steps. This is not a problem since f(n) is
a polynomial of degree k, which is fixed by M′; therefore M′′ decides A in polynomial time with the help
of oracle T′.
The function f(n) used as a padding function grows fast enough to separate the words queried to
T′ for each input size: for any input w, the biggest word that M′′ can write in the query tape has size
f(|w|) + |w|^k, which is smaller than f(|w| + 1), the size of the smallest word that can be queried
to the oracle with an input of size |w| + 1. In other words, if t′i,m and t′i,M are the smallest and the
biggest words M′′ can write in the query tape for an input of size i, then we have

|t′1,m| < |t′1,M| < |t′2,m| < |t′2,M| < ...
In particular, for every i ≠ j we have T′i ∩ T′j = ∅. It is also important to notice that ⋃_{i=1}^{n} T′i is a strict
subset of ⋃_{i=1}^{n+1} T′i, since we can only add words that are larger than the ones we already had. We need
the padding function to have this characteristic because it can happen that the same tally word
is asked by M′ on inputs of different sizes. This would raise a problem when building T′, because we would
have to append a word that could not be in T′, or vice-versa.
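The separation property argued above can be checked numerically (a sketch; k plays the role of the fixed degree of the polynomial bound of M′): the longest padded query for inputs of size n, of length f(n) + n^k, is shorter than the shortest padded query for inputs of size n + 1.

```python
def f(n, k):
    """Padding function of Expression (3.3): f(n) = 1 + sum_{r=1}^{n-1} (r^k + 1)."""
    return 1 + sum(r ** k + 1 for r in range(1, n))

# Separation between input sizes: f(n) + n^k < f(n + 1), which holds
# because f(n + 1) = f(n) + n^k + 1.
for k in (1, 2, 3):
    assert all(f(n, k) + n ** k < f(n + 1, k) for n in range(1, 50))
```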
It remains to show that T′ is recursive. Given a tally word y′, we can rewrite it as y′ = 0^(f(i)) y, where i
and y are unique since we have T′i ∩ T′j = ∅ for i ≠ j. We search for the respective Sli using Algorithm
7 and, by definition of T′, we have that y′ ∈ T′ if and only if y ∈ Sli.
Algorithm 7 Search of Sli
1: procedure SEARCH OF Sli
2:   i ← input
3:   A ← ∅
4:   m ← 2^i    ▷ 0^i is the (2^i)th word in lexicographic order
5:   while m ≤ 2^(i+1) − 1 do    ▷ 1^i is the (2^(i+1) − 1)th word in lexicographic order
6:     w′ ← mth word in lexicographic order
7:     if M accepts w′ then
8:       A ← A ∪ {w′}
9:     m ← m + 1
10:  l ← 0
11:  bool ← false
12:  while bool == false do
13:    j ← 2^i
14:    A′ ← ∅
15:    l ← l + 1
16:    while j ≤ 2^(i+1) − 1 do
17:      w′′ ← jth word in lexicographic order
18:      if M′ with oracle Sl accepts w′′ then
19:        A′ ← A′ ∪ {w′′}
20:      j ← j + 1
21:    if A == A′ then
22:      bool ← true
23:  output Sl
Theorem 3.2.2. AP (CI) ⊆ P/poly ∩REC.
Proof. Let A ∈ AP(CI). Since AP(CI) ⊆ AP(IN) and AP(IN) = P/poly, we conclude that A ∈
P/poly. We now show that A ∈ REC. Consider the machine M and the computable time schedule T
that witness A ∈ AP(CI). Similarly to the previous proposition, we devise a deterministic Turing machine
M′ that, for any k ∈ N, computes zk↾k (the first k bits of zk), where zk is the boundary number such that
texp(zk, 1/2) = T(k). To decide A, M′ just has to simulate M and, whenever it reaches a query state,
compute zk↾k, using the fact that both T(k) and texp are computable, and compare it with the value in
the query word. It follows that A ∈ REC.
Theorem 3.2.3. P/poly ∩REC ⊆ AP (CI).
Proof. Let A ∈ P/poly ∩ REC. By Theorem 3.1.2 we can assume that A is decidable in polynomial time
by a Turing machine using some tally set S as oracle. Furthermore, by Proposition 8 we can assume
that S is recursive. Consider again the time schedule

T(k) = 2k + 1, if 0^k ∈ S
       2k, if 0^k ∉ S

A reasoning similar to the one used in the proof of Theorem 3.2.1 shows that A can be decided in poly-
nomial time by a deterministic analogue-digital Turing machine using the physical oracle with unknown
y = 1/2, infinite precision and time schedule T. The only detail that differs is that S is now recursive,
so T is the total computable function that the definition of AP(CI) requires.
Theorem 3.2.4. AP (CI) = P/poly ∩REC.
Proof. It follows directly from Theorem 3.2.2 and Theorem 3.2.3.
Theorem 3.2.5. AP (TC) = P.
Proof. Suppose A ∈ P; then A is decidable in polynomial time by a deterministic oracle Turing machine
with the empty oracle. The same machine with any oracle witnesses A ∈ AP(TC), since no calls
are made.
If A ∈ AP(TC), then A is decidable by an analogue-digital machine M in polynomial time using the
physical oracle with vertex at position y = 1/2, working with infinite precision and with an increasing
time constructible function T as time schedule. We devise a deterministic machine M′ that, given an input
of size n, behaves like M and simulates any oracle query of size k in polynomial time as described in
Section 2.3. We notice that, given a query word of size k, it is possible to compute T(k) in polynomial
time, since T is time constructible by assumption and there exists a polynomial p(n) bounding the running
time of M, so we have T(k) < p(n). Then, M′ computes zk↾k (the first k bits of zk), where zk is the
boundary number such that texp(zk, 1/2) = T(k). Finally, it compares this value with the query word z of
size k. The computation of zk↾k and the comparison z = zk↾k are done in polynomial time, so M′ also
decides A in polynomial time. It follows that A ∈ P.
3.3 Consequences on the Church-Turing thesis
In the 20th Century, Alan Turing and Alonzo Church conjectured that “Every effectively calculable function
(effectively decidable predicate) is general recursive”. In other words, every computable function (in
an informal sense) can be described as a general recursive function, which in turn coincides with the
class of functions that Turing machines are able to compute. This conjecture, also known as the Church-Turing
thesis, has since had several formulations: for example, it can be seen in [20] that Odifreddi wrote
In this case, Church’s Thesis amounts to saying that the universe is, or at least can be
simulated by, a computer. This is reminiscent of similar attempts to compare Nature to the
most sophisticated available machine, like the mechanical clock in the 17th Century, and the
heat engine in the 19th Century, and it might soon appear as simplistic.
Some people think that it is possible to compute something beyond the power of Turing machines:
for example, one can argue that even something as simple as differential equations with computable
coefficients can give rise to non-computable solutions, as shown by Pour-El and Richards with
examples such as the three-dimensional wave equation (see [21]).
One example of a simple to understand but very important problem is the halting problem, i.e., given
a Turing machine, to know whether or not the machine halts on input 0. This set can be represented
as H = {0^n : n is the encoding of a Turing machine that halts on input 0} and it can be shown that it
belongs to P/poly by considering the advice function f = 〈0^n1#0^n2#...〉 where the ni are such that 0^ni ∈
H. If we modify this set to H′ = {0^(2^n) : n is the encoding of a Turing machine that halts on input 0},
then it can also be shown that H′ ∈ P/log.
Roger Penrose is of the opinion that the wave equation cannot be considered, since the boundary
conditions are not smooth enough to describe a possible real physical system. In [22],
Penrose discusses the importance and usefulness of having non-smooth initial conditions in this
type of problem:
Now, where do we stand with regard to computability in classical theory? It is reasonable
to guess that, with general relativity, the situation is not significantly different from that of
special relativity — over and above the differences in causality and determinism that we
have just been presenting. Where the future behaviour of the physical system is determined
from initial data, then this future behaviour would seem (by similar reasoning to that we
presented in the case of Newtonian theory) also to be computably determined by that data
(apart from unhelpful type of non-computability encountered by Pour-El and Richards for the
wave equation, as considered above — and which does not occur for smoothly varying data).
Indeed, it is hard to see that in any of the physical theories that we have been discussing so
far there can be any significant “non-computable” elements. It is certainly to be expected that
“chaotic” behaviour can occur in many of these theories, where very slight changes in initial
data can give rise to enormous differences in resulting behaviour. But, as we mentioned
before, it is hard to see how this type of non-computability — i.e. “unpredictability” — could
be of any “use” in a device which tries to “harness” possible non-computable elements in
physical laws.
It is now common wisdom in the Computer Science community that it is not possible
to compute something that a Turing machine cannot compute; this belief has become known as the
Turing barrier.
In this thesis we studied how to use a real world machine to break the Turing barrier by measur-
ing non-computable quantities, an idea popularized by Penrose in [22] and [23], who conjectured that “The
Universe has non-computable information which may be used as an oracle to build a hypercomputer”,
also known as the Oracle Conjecture. However, experiments that are equivalent to the Sharp Scatter
Experiment are not physically feasible due to the properties of matter, so, for the same reasons as the
differential equations, it cannot be considered a counter-example to the Church-Turing thesis. We thus
turn to the Smooth Scatter Experiment, which can perform the same measurements as the sharp version
but does not take constant time to return an answer, therefore limiting the power of time bounded Turing
machines working with the Smooth Scatter Experiment as oracle.
By introducing non-uniform complexity, it was possible to formulate a conjecture analogous
to the Church-Turing thesis but applied to physical systems, due to Hava Siegelmann, stating that “no
analogue device computing in polynomial time has more power than P/poly” (see [24]).
However, this conjecture does not take into account the several problems of building an ARNN with real
values, so a weaker (but more reasonable) conjecture was proposed: if we require the computation to
be aided by a physical measurement device, we can conjecture that “physical systems combined with
algorithms cannot compute more in polynomial time than P/log⋆” (see [9]). The drop in power from
P/poly to P/log⋆ comes from the fact that any physically feasible measurement device cannot take
less than exponential time in the desired precision, hence we consider only advice functions in log.
Chapter 4
The smooth scatter machine
In this chapter we return to time-constructible time schedules and present the upper and lower
bounds for the computational power of smooth scatter machines that work with infinite precision. Most of
the results were already obtained in [17]; however, we will introduce a new type of advice function that can
be used to achieve the same results, in the hope that such a modification might enable a similar proof
of the upper bound in the case of unbounded precision, for which it is an open problem to know whether
this is possible (see [12]).
4.1 The Cantor set C3
We define the Cantor set of base 3 (C3) as the set of ω-sequences x of the form

x = ∑_{k=1}^{+∞} xk 2^(−3k) (4.1)

for xk ∈ {1, 2, 4}. This can be seen as the set of numbers 0.z where z is obtained by concatenating
infinitely many triples of the form 100, 010 or 001. The first thing to notice about this set is that it cannot
contain dyadic rationals, since these numbers can only have finitely many 1's in their binary representation,
while every number in C3 must have a 1 in every group of three bits. The fact that numbers in this set
have a strongly restricted binary representation gives this set a very nice property that will prove to be
very useful for computing the lower bounds of a SmSM using infinite precision.
Proposition 9. (Adapted from [7]) For every x ∈ C3 and for every dyadic rational z ∈ [0, 1] with size
|z| = m, if |x − z| ≤ 2^−(i+5) then x and z coincide in the first i bits, and |x − z| > 2^−(m+10).
Proof. Take any x ∈ C3 and z ∈ [0, 1] such that z coincides with x in the first i − 1 bits but differs in the ith
bit. We show that |x − z| > 2^−(i+5). We must consider two cases:
• z < x
In this case the ith bits of z and x are 0 and 1 respectively. The smallest value of |x − z| occurs
when the binary expansion of z continues with 1's and the binary expansion of x continues with 0's.
Because x ∈ C3, there can be no more than four consecutive 0's in its binary representation, hence
in the worst case the binary representations of z and x from the ith bit onwards are respectively
011111... and 100001.... Therefore |x − z| > 2^−(i+5).
• z > x
In this case the ith bits of z and x are 1 and 0 respectively. The smallest value of |x − z| occurs when
the binary expansion of z continues with 0's and the binary expansion of x continues with 1's. This
time, there can be no more than two consecutive 1's in the binary representation of x, hence in the worst
case the binary representations of z and x from the ith bit onwards are respectively 1000... and
0110.... Therefore |x − z| > 2^−(i+3) > 2^−(i+5).
We proved that if |x − z| ≤ 2^−(i+5) then x and z coincide in the first i bits. It remains to prove that if
|z| = m, then |x − z| > 2^−(m+10).
If z ≠ x↾m, where x↾m denotes the first m bits of x, then z and x differ in one of the first m bits, so by the
first part |x − z| > 2^−(m+5) > 2^−(m+10). Suppose now that z = x↾m. Since x can have at most four
consecutive bits with value 0 and z only has 0's after the mth bit (recall that |z| = m), in the
worst case the representations of z and x coincide in the first m + 4 bits and differ in the (m + 5)th bit. By replacing
i with m + 5 in the first part of this proof we conclude that |x − z| > 2^−((m+5)+5) = 2^−(m+10).
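As an illustration (not part of the proof), the lower bound of Proposition 9 can be checked numerically on a finite approximation of an element of C3:

```python
from fractions import Fraction

def c3_approx(pattern, blocks=40):
    """Finite binary approximation of an x in C3 built from 3-bit blocks,
    each block one of 100, 010, 001."""
    bits = "".join(pattern[i % len(pattern)] for i in range(blocks))
    return Fraction(int(bits, 2), 2 ** len(bits)), bits

x, bits = c3_approx(["010", "100", "001"])
# For the dyadic z of size m obtained by truncating x, |x - z| > 2^-(m+10):
# a 1 must occur within the next five bits, so the tail is at least 2^-(m+5).
for m in range(1, 30):
    z = Fraction(int(bits[:m], 2), 2 ** m)
    assert abs(x - z) > Fraction(1, 2 ** (m + 10))
```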
Given a word w ∈ {0, 1}⋆, it is possible to code it into a word in the Cantor set using the following
coding: we denote by c(w) the coding of w, which consists of replacing every 1 by 010 and every 0 by
100. We note that for every w ∈ {0, 1}⋆ we have c(w) ∈ C3 and, furthermore, given c(w) it is easy to
recover w: one just needs to read c(w) in triples and decide whether each triple represents a 1 or a 0.
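The coding c and its decoding can be written down directly (a minimal sketch):

```python
ENC = {"1": "010", "0": "100"}
DEC = {v: k for k, v in ENC.items()}

def c(w):
    """Code a binary word into C3 triples: 1 -> 010, 0 -> 100."""
    return "".join(ENC[b] for b in w)

def c_inv(code):
    """Recover w by reading the code in triples."""
    return "".join(DEC[code[i:i + 3]] for i in range(0, len(code), 3))

assert c("101") == "010100010"
assert c_inv(c("100110")) == "100110"
```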
We can go a step further and encode prefix advice functions: given a prefix advice function f :
N → {0, 1}⋆, we denote its encoding by a real number y(f) = lim y(f)(n), where y(f)(n) is defined
recursively as follows¹: y(f)(0) = 0.c(f(0)) and

y(f)(n + 1) = y(f)(n)c(s), if n + 1 is not a power of 2
              y(f)(n)c(s)001, if n + 1 is a power of 2    (4.2)

where s is the word such that f(n + 1) = f(n)s.
It is important to introduce the separator 001 when n + 1 is a power of 2, since this coding will be used in
Section 4.5, where the coding of an advice function is used as the vertex position (Proposition 22 and
Theorem 4.5.1). To decode f(|w|), where 2^(m−1) < |w| ≤ 2^m, we need to read y(f) in triples until we
find the (m + 1)th separator. To reconstruct f(2^m) we simply eliminate the separators and replace every
triple by its corresponding bit. The number of bits needed for this reconstruction is linear in m, since
|c(f(2^m))| = am + b for some constants a and b.
1This limit is defined not as a function but as a sequence: for all n, y(f)(n) is a prefix of y(f).
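The coding of Expression 4.2 and the decoding procedure just described can be sketched as follows (the prefix advice function f used here is a hypothetical example; note that the separator 001 is distinguishable from the triples 010 and 100 produced by c):

```python
def c(w):
    """The Cantor coding: 1 -> 010, 0 -> 100."""
    return "".join({"1": "010", "0": "100"}[b] for b in w)

def y_code(f, N):
    """Prefix of the code of f up to argument N, per Expression 4.2."""
    code = c(f(0))
    for n in range(N):
        s = f(n + 1)[len(f(n)):]   # f is a prefix function: f(n+1) = f(n)s
        code += c(s)
        if (n + 1) & n == 0:       # n + 1 is a power of 2: insert separator
            code += "001"
    return code

def decode(code, m):
    """Recover f(2^m): read triples up to the (m+1)th separator 001,
    drop the separators, and map 010 -> 1, 100 -> 0."""
    bits, seps, i = [], 0, 0
    while seps <= m:
        t = code[i:i + 3]
        if t == "001":
            seps += 1
        else:
            bits.append("1" if t == "010" else "0")
        i += 3
    return "".join(bits)

f = lambda n: "1" * n              # hypothetical prefix advice function
code = y_code(f, 8)
assert decode(code, 2) == f(4)     # reads up to the 3rd separator
```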
4.2 Upper Bounds
We now present upper bounds for the computational power of the SmSM clocked in polynomial time.
The intrinsic time of the experiment is exponential in the size of the query word (see Proposition 1), but
this machine must run in a polynomial number of units of time. This means that the query words cannot
have more than logarithmic size, otherwise the machine would require at least superpolynomial time
to consult the oracle. For an arbitrary time schedule, the upper bound is proven to be P/log²⋆ (see
[17]); however, several articles such as [7] show that this bound can be improved to P/log⋆ if the time
schedule is exponential and the SmSM is working with infinite precision. It remains an open problem
to know whether a similar result can be obtained, without the explicit use of the formula of the time
schedule, for a SmSM working with unbounded precision. Section 4.3 proposes a new type of advice
function which can show the same results already obtained for infinite precision and which, we hope,
provides some new ideas on how to tackle the open problem for unbounded precision. Although
this function does not (yet) allow any new results, it has a big advantage over the functions used so far:
the existing proofs show that it is possible to codify the boundary numbers in size linear in the size of
the query word; however, if we do not have the explicit formula of the time schedule, it is not possible
to specify the advice function to the machine. In other words, there exists an advice function, but we
cannot write it. Our advice function, on the other hand, can be used with any time schedule, and it can
be written using no information whatsoever about the formula of the time schedule. Furthermore, we show
that if the time schedule used is exponential, then the advice function has logarithmic size, just as the
others previously used. This particularity allows us to write the advice function even if we only have a
list of the prefixes of the boundary numbers.
Theorem 4.2.1. If A ⊆ {0, 1}⋆ is decidable by an infinite precision SmSM with time schedule T(n) ∈
O(2^n) when clocked in polynomial time, then A ∈ P/log²⋆.
Proof. Let M be the infinite precision SmSM that decides A in polynomial time O(n^k) for some k, and
T its time schedule, with T(n) ∈ O(2^(k′n)) for some fixed k′. Let s = a⌊log n⌋ + b be the bound for
the query sizes of M and take the following advice function:

f(n) = l1↾1#r1↾1#σ1#l2↾2#r2↾2#σ2#...#ls↾s#rs↾s#σs# (4.3)

where lk and rk are the left and right boundary numbers respectively, as defined in Definition 7, lk↾k
denotes the first k bits of lk, and the σk are the bits given in Definition 8.
It is enough to have the first s boundary numbers in the advice function, since M cannot write more
than s bits in the query tape; therefore, applying Proposition 4, we conclude that if a machine has
access to f, then for any possible query word with size up to s it can simulate the experiment in time
linear in s. Furthermore, it is easy to see that |f(n)| ∈ O(s²) = O(log(n)²).
Take n′ such that n′ > n and a Turing machine M′ that receives 〈w, f(n′)〉, with |w| = n. Since
n′ > n, M′ has access to the prefixes of the first s′ = a⌊log n′⌋ + b boundary numbers and, in particular,
to the prefixes of the first s boundary numbers, so it can simulate M and, whenever in the
query state with query word z (with |z| ≤ s), retrieve l|z|↾|z|, r|z|↾|z| and σ|z| from f(n′), simulating the
scatter experiment. Emulating M is done in polynomial time because M runs in polynomial time by
hypothesis, obtaining the boundary numbers from the advice function can be done in O(log(n)²) time²,
and simulating the experiment can be done in time O(s) = O(log(n)), so M′ also runs in polynomial
time. Thus, A ∈ P/log²⋆.
If we use a time schedule such that T(k) ∈ Ω(2^k), it is possible to codify the boundary numbers in an
advice function f with |f(n)| ∈ O(n), because the boundary numbers are close enough for us to write all
the information we need in a linear sized advice function. This idea is formalized in Proposition 10.
Proposition 10. Given the boundary numbers for a SmSM with time-constructible schedule T(k) ∈
Ω(2^k), it is possible to define a prefix advice function f such that f(n) codifies the prefixes of the bound-
ary numbers with size up to n and |f(n)| ∈ O(n).
Proof. Take a SmSM with vertex at position y and schedule T(k) ∈ Ω(2^k). By definition of Ω(2^k), there
exist k0 ∈ N and α ∈ R such that T(k) ≥ α2^k for all k ≥ k0. Let c ∈ N be such that α > 2^−c and fix
k ≥ k0. By definition of rk, we have that

|rk − y| = 1/T(k) ≤ 1/(α2^k) < 2^(c−k). (4.4)

Now take rk↾k = 0.vk wk where |vk| = k − c and |wk| = c. It is easy to see that y ≤ 0.vk ≤ rk and thus
we have:

|0.vk − y| ≤ |rk − y| < 2^(c−k). (4.5)
A string vk can be of one of the three following forms, for some m:

• vk = ...10^m
From Equation 4.5 we conclude that either y↾(k−c) = ...10^m or y↾(k−c) = ...01^m. Therefore, since we
have |0.vk+1 − y| < 2^(c−(k+1)) (again from Equation 4.5), vk+1 can be of one of the following forms:
...10^(m+1), ...10^m 1, ...01^(m+1), ...01^m 0;

• vk = ...01^m
With the same reasoning as in the case above, we conclude that either y↾(k−c) = ...01^m or y↾(k−c) =
...01^(m−1)0, and thus vk+1 can be of one of the following forms:
...01^m 0, ...01^(m+1), ...01^(m−1)00, ...01^(m−1)01;

• vk = 0^(k−c)
This case is even simpler, because y↾(k−c) = 0^(k−c) and so we are left with just two possible cases for
vk+1:
0^(k−c)0 and 0^(k−c)1.

2This time represents the time that the machine takes to fetch the boundary numbers from the advice function, not the time that it takes to simulate the experiment.
Thus, for k ≥ k0, if we know vk, there are at most four possible cases for vk+1, so just two bits of
information are needed to write it. We will denote these bits by bk0 and bk1. Furthermore, if we have
vk+1 we just need c additional bits (which correspond to wk+1) to write rk+1. Hence, for all k ≥ k0, we
just need rk and (c + 2) bits to write rk+1.
The same conclusion can be drawn for the lower boundary numbers lk, using an analogous proce-
dure, splitting these numbers as lk↾k = 0.xk · uk, with |xk| = k − c and |uk| = c, where c is such that
|y − rk| < 2^(c−k) and |y − lk| < 2^(c−k). We will denote the two bits of information that we need to compute
xk+1 from xk by bk2 and bk3. Thus the required advice function can be given by:
f(n) = l1↾1#r1↾1#σ1#l2↾2#r2↾2#σ2#...#ln↾n#rn↾n#σn, if n < k0
       f(k0 − 1)#xk0#uk0#vk0#wk0#σk0, if n = k0
       f(n − 1)#bn2bn3#un#bn0bn1#wn#σn, if n > k0.    (4.6)
Since wn and un have constant size c for all n > k0, |f(n)| is asymptotically linear.
Remark 9. As expected, the amount c of needed information is inversely proportional to the parameter α. In other
words, the faster the time schedule grows, the closer the boundary numbers are to each other and the
less information we need to write them.
Using Function 4.6, a proof similar to that of Theorem 4.2.1 shows that it is possible to reduce
the upper bound of infinite precision, polynomially clocked SmSM's for time schedules in Θ(2^k).
Theorem 4.2.2. If A is decidable by a SmSM with infinite precision and exponential protocol T(k) ∈
Θ(2^k), clocked in polynomial time, then A ∈ P/log⋆.
For the next theorem it is important to notice that, given lk↾k and rk↾k for some k, if we cannot
bound the difference between two consecutive boundary numbers as in Proposition 10, then to write lk+1↾(k+1)
or rk+1↾(k+1) we can only use as information the bits of lk↾k and rk↾k that we know will be the same in
lk+1↾(k+1) and rk+1↾(k+1) respectively, since, where the bits stop coinciding, any continuation is a possible
boundary number due to the unknown y in Equation 3. Hence, if lk↾k only coincides with lk+1↾(k+1) in
the first k − c bits, then it is necessary to write the last c + 1 bits of lk+1↾(k+1) in the advice function, and
analogously for rk+1↾(k+1).
Theorem 4.2.3. There exists a set decided by a SmSM working with infinite precision and time schedule
T(k) ∉ Ω(2^k), bounded in polynomial time, that cannot be decided by a deterministic Turing machine
bounded in polynomial time receiving 〈w, f(|w|)〉 as input, where f is a prefix advice function such that
|f(n)| ∈ O(n).
Proof. Let M be a SmSM running in polynomial time with a non-computable vertex position y ∈ ]0, 1[,
working with infinite precision and time schedule T(k) ∉ Ω(2^k). Let A be a set decided by M that, in
order to be decided, requires queries to the oracle with arbitrarily large query words. Let now M′ be
a machine running in polynomial time that witnesses A ∈ P/poly by emulating M, except that when it
enters the query state it simulates the experiment with recourse to a prefix advice function f ∈ poly. It
is necessary to provide f in the input, since y is non-computable by hypothesis, so the boundary numbers
are also non-computable by Remark 3, and therefore M′ is not able to compute the boundary numbers
in polynomial time on its own. Our aim is to show that |f(n)| ∉ O(n).
We only give the proof for the right boundary numbers rk. First we prove an auxiliary result: for all n ∈ N0
there exists k′ ∈ N such that for every k ≥ k′ we have rk − rk+1 > 2^−(k−n). Suppose this is not the case
and fix n such that there is no k′ satisfying our hypothesis. Then, for infinitely many k we must have

rk − rk+1 < 2^−(k−n) ⇔ 1/T(k) − 1/T(k+1) < 2^−k 2^n
⇔ (T(k+1) − T(k))/(T(k)T(k+1)) < 2^−k 2^n
⇔ T(k+1) − T(k) < T(k)T(k+1) 2^−k 2^n    (4.7)
Now, by hypothesis we have T(k) ∉ Ω(2^k), and 2^n is constant because n is fixed, so the last inequality
can be written as

g(k) < h(k)c2^−k    (4.8)

where g(k) and h(k) are subexponential functions. We impose that T(k) is a strictly increasing time-
constructible function, hence for any k ∈ N we have T(k + 1) − T(k) ≥ 1, so Inequality 4.8 cannot hold
for arbitrarily large k, since 2^−k decreases exponentially fast, concluding the proof of our auxiliary result.
For each n ∈ N0 we define k′n as the k′ that witnesses our auxiliary result for the given n, thus
creating the sequence (k′n)n∈N0. From the auxiliary result, rk′n and rk′n+1 must differ in the (k′n − n)th bit
or before; hence, if we do not know the time schedule, to write rk′n+1↾(k′n+1) from rk′n↾k′n we need at least
n + 2 bits in the advice function. Therefore, any advice function f that can enable the simulations
of the experiments done by the oracle of M would need pairs of bits after f(k′0), then triples of bits after
f(k′1), and so on. It is now easy to see that there is no asymptotically linear sized function in the
size of the query words capable of containing all the necessary information to write the prefixes of the
boundary numbers. Hence, the set A cannot be decided by a polynomial time bounded Turing machine
using an advice function f such that |f(n)| ∈ O(n).
In particular, if we only allow the experiment to run for a number of steps that is subexponential in
the size of the query words, Theorem 4.2.3 states that in some cases the best we can do is to use an
advice function such that |f(n)| is superlinear. An advice function such that |f(n)| ∈ O(n²) is always
achievable, since we can simply use

f(n) = l1↾1#r1↾1#σ1#l2↾2#r2↾2#σ2#...#ln↾n#rn↾n#σn.    (4.9)
When we take the general expression for the experimental time given by Inequality 2.1, it becomes quite clear that it only makes sense to study the cases where n > 1, which gives an exponent of value at least 1; thus we study the properties of SmSMs for n = 2. It is however important to notice what happens when we consider the exponent to be some value 1/λ, as this has a significant impact on which time schedules allow for a logarithmic-sized advice function. Indeed, if we assume that A = B = 1, then we have
T(k) = 1/|r_k − y|^{1/λ} ⇔ r_k − y = 1/T(k)^λ. (4.10)
We have seen in Proposition 10 and Theorem 4.2.3 that we can codify the boundary numbers in an advice function whose size is linear in the size of the query words if we have T(k)^λ ∈ Ω(2^k). In other words, there exists a logarithmic-sized advice function if T(k) ∈ Ω(2^{k/λ}). The particular case λ = 2 was already studied in depth and its results and proofs can be seen in [7], where it was assumed that the boundary numbers are the values such that |l_k − y| = |r_k − y| = 1/T(k)².
Proposition 10 allows us to code all the necessary information in logarithmic size and thus we can improve the upper bound for the SmSM clocked in polynomial time and using a time schedule T(k) ∈ Ω(2^k). However, this proposition has one big setback: we can only write such an advice function if we previously know the explicit formula of T(k), since to write w_k we need to know c, which requires knowing an α such that T(k) ≥ α2^k for large enough k. We now define a new type of advice function f that can be used with every time schedule and, furthermore, if T(k) ∈ Ω(2^k), then |f(n)| ∈ O(n). This shows that we do not even need to know that T(k) ∈ Ω(2^k) to write an advice function of linear size in the size of its input.
4.3 A new advice function
Let T(k) be a time-constructible time schedule and r_k and l_k its right and left boundary numbers respectively for some vertex position y. For each k ∈ N, we rewrite l_k↾_k = 0.x_k · u_k and r_k↾_k = 0.v_k · w_k where
• x_1 = v_1 = ε;³
• for any k > 1, x_k is the longest common prefix of l_k↾_k and l_{k−1}↾_{k−1};
• for any k > 1, v_k is the longest common prefix of r_k↾_k and r_{k−1}↾_{k−1}.
The prefix advice function is given by
f(n) = u_1#w_1#σ_1#u_2#w_2#σ_2#...#u_n#w_n#σ_n#. (4.11)
The idea behind this function is roughly the same as that of Proposition 10: if two numbers are very close to each other, then we should not need many new bits of information to get one from the other.
Remark 10. It is very important to notice that Proposition 10 does not put an upper bound on the number of digits needed between two consecutive #'s, because we could have, for example, r_k↾_k = 0.1010^{k−3} and r_{k+1}↾_{k+1} = 0.1001^{k−2}. While in Proposition 10 we would only need to add c + 2 bits to the function, in this new function we need to add k − 1 bits, with k arbitrarily large. We will prove, however, that if T ∈ Ω(2^k), even though we cannot bound the number of bits to add, we can still show that the advice function has linear size in its input.
³In this case we have l_1↾_1 = 0.u_1 and r_1↾_1 = 0.w_1.
We now have a function that at first sight uses few bits to code the boundary numbers; the next step is to show that, given such an advice function, we can actually retrieve l_k↾_k, r_k↾_k and σ_k with an algorithm running in polynomial time in the size of the input.
Proposition 11. Let T(k) be a time-constructible time schedule and f(n) be the advice function defined above in 4.11. Then it is possible to retrieve r_n↾_n, l_n↾_n and σ_n in time O(|f(n)|).
Proof. Suppose the query word is z with |z| = n. To obtain l_n↾_n, r_n↾_n and σ_n, consider Algorithm 8.
Algorithm 8
1: procedure RETRIEVEBOUNDARIES
2:   f(n) ← input
3:   l ← list of symbols of f(n)
4:   p ← Length[l]                ▷ pointer
5:   i_l ← n, i_r ← n             ▷ the positions of the digits we are looking for
6:   c ← n + 1, k ← n + 1         ▷ a coding and the current k
7:   r ← 1                        ▷ used to differentiate between l_k, r_k and σ_k
8:   σ ← l[p − 1]                 ▷ σ_n
9:   l_n ← ε, r_n ← ε             ▷ prefixes of l_n↾_n and r_n↾_n
10:  while p > 0 do
11:    if l[p] = # then
12:      switch r do
13:        case 1
14:          r ← 3
15:          k ← k − 1
16:        case 2
17:          r ← 1
18:        case 3
19:          r ← 2
20:          c ← k
21:    else
22:      switch r do
23:        case 1
24:          if c = i_l then
25:            l_n ← l[p]l_n
26:            i_l ← i_l − 1
27:          c ← c − 1
28:        case 2
29:          if c = i_r then
30:            r_n ← l[p]r_n
31:            i_r ← i_r − 1
32:          c ← c − 1
33:    p ← p − 1
34:  r_n ← 0.r_n
35:  l_n ← 0.l_n
36:  return [l_n, r_n, σ]
This algorithm runs in time O(|f(n)|): the "while" loop does |f(n)| steps, and each step performs only equality tests, which can all be done in at most log(n + 1) units of time, since this is the maximum size of the variables c and k. A "while" step also assigns a value to a variable and appends one bit to a number, so the time each step takes is upper bounded by some constant.
We now show that the output consists of the boundary numbers and the bit σ_n introduced in Definition 8, needed to simulate the scatter experiment. The variable c represents a coding of the bits in f(n) done in the following way: the m-th # receives the natural number ⌈m/3⌉, and any bit that is not a σ_k immediately to the left of a # receives the same natural number. If a symbol different from # already has a natural number m′ assigned, the bit directly to its left receives the natural number m′ − 1. This coding represents the position of a bit in r_k↾_k or l_k↾_k. However, several bits may be assigned the same natural number; in this case, we simply choose to use the rightmost such digit, since it represents the last possible change in that position. At the end of this procedure, we have the binary digits of r_n↾_n and l_n↾_n, so we still need to turn them into numbers between 0 and 1. The variable σ is clearly σ_n due to Expression 4.11.
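As a sanity check on Proposition 11, the following Python sketch (ours, not the thesis code; it assumes boundary numbers of the simplified form y ∓ 1/T(k) and a placeholder σ_k) builds the prefix advice function of Expression 4.11 and decodes it, recovering the same prefixes that Algorithm 8 retrieves:

```python
from fractions import Fraction

def prefix_bits(q, k):
    # first k binary places of a rational q in [0, 1)
    return format(int(q * 2**k), f"0{k}b")

def lcp_len(a, b):
    # length of the longest common prefix of two bit strings
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def encode(l_pre, r_pre, sigmas):
    # f(n) = u_1 # w_1 # sigma_1 # ... # u_n # w_n # sigma_n #   (Expression 4.11)
    parts, pl, pr = [], "", ""
    for lk, rk, s in zip(l_pre, r_pre, sigmas):
        parts += [lk[lcp_len(lk, pl):], rk[lcp_len(rk, pr):], s]
        pl, pr = lk, rk
    return "#".join(parts) + "#"

def decode(f):
    # recover l_n|n, r_n|n and sigma_n from f(n)
    chunks = f.split("#")
    l = r = sigma = ""
    for i in range(0, len(chunks) - 1, 3):
        u, w, sigma = chunks[i], chunks[i + 1], chunks[i + 2]
        k = i // 3 + 1
        l = l[:k - len(u)] + u   # l_k|k = x_k u_k, and x_k was already decoded
        r = r[:k - len(w)] + w
    return l, r, sigma

y, n = Fraction(1, 3), 12
lp = [prefix_bits(y - Fraction(1, 2**(k + 2)), k) for k in range(1, n + 1)]
rp = [prefix_bits(y + Fraction(1, 2**(k + 2)), k) for k in range(1, n + 1)]
f = encode(lp, rp, ["0"] * n)   # "0" stands in for sigma_k
assert decode(f) == (lp[-1], rp[-1], "0")
```

Algorithm 8 scans f(n) right to left; the decoder above goes left to right, which recovers the same prefixes since each block only overwrites the suffix that changed.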
To better understand the proof of Theorem 4.4.1, we divide it into two parts: in Propositions 12 and 13 we assume the SmSM has a vertex at position y = 1/2 and, afterwards, we generalize to y ∈ ]0, 1[. When the vertex is placed at y = 1/2, the boundary numbers and σ_k are computable, since both T(k) and y are computable and we can simply use Equation 2.7 to compute their prefixes. This implies that the upper and lower bounds on the computational power of these SmSMs collapse to P⁴; however, we wish to adapt the proofs to non-computable vertex positions, so we choose to continue expressing the upper and lower bounds in terms of non-uniform complexity classes.
Proposition 12. Consider a SmSM working with a time-constructible time schedule T(k) ∈ Ω(2^k) and vertex at position y = 1/2 and let f_1(n) be the prefix advice function
f_1(n) = u_1#u_2#...#u_n (4.12)
where (u_i)_{1≤i≤n} are given in Expression 4.11. Then |f_1(n)| ∈ O(n).
Proof. For every m ∈ Z, we define
T_m(k) := 2^k × 2^{−m} (4.13)
and l^m_k to be the left boundary numbers for T_m(k). We start by showing that if T(k) = T_m(k) for some m, then |f_1(n)| ∈ O(n). If m ≤ −1, then T_m(k) ≥ 2^k × 2 and for any k > 3 we have by Equation 2.7 that 1/2 > l_k ≥ 1/2 − 2^{−(k+1)} = 0.01^k; hence, it must be the case that l^m_k↾_k = 0.01^{k−1} and therefore u_{k+1} = 1 for k > 3. If on the other hand m > −1, by Equation 2.7 we have for k ≥ m + 2 that l_k = 1/2 − 2^{−(k−m)} = 0.01^{k−m−1}; hence, l^m_k↾_k = 0.01^{k−m−1}0^m and, similarly to the previous case, u_{k+1} = 10^m for k ≥ m + 2.
Suppose now that T(k) ∈ Ω(2^k). By definition of Ω(2^k), there exist α_T ∈ R^+ and k_0 ∈ N such that for every k > k_0 we have T(k) ≥ α_T 2^k. By choosing m to be the smallest integer such that α_T > α_{T_m}
⁴The power of an SmSM could collapse to P/log⋆ ∩ REC; however, the fact that y = 1/2 is computable in polynomial time allows us to say that the power collapses to P.
(notice that α_{T_m} = 2^{−m}), we ensure that T(k) grows faster than T_m(k) and therefore for each k we have
1/2 > l_k↾_k ≥ l^m_k↾_k. (4.14)
In the special case where T(k) is such that 2^k ∈ o(T(k)) we have
lim_{k→∞} T(k)/2^k = +∞ (4.15)
and so we define α_T = ∞, as this will make our proofs clearer. If m > −1, it was already seen above that l^m_k↾_k = 0.01^{k−m−1}0^m and by Inequality 4.14 we must have l_k↾_{k−m} = 0.01^{k−m−1}, which allows us to conclude that for k ≥ m + 2 we have |u_{k+1}| ≤ m + 1. For the case when m ≤ −1 or α_T = ∞, we can simply notice that α_T > α_{T_0} and a similar argument shows that for large enough k we have u_k = 1. Hence, for large enough k, |u_k| is always bounded by some constant and therefore |f_1(n)| ∈ O(n).
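The bit patterns used in this proof can be checked mechanically. The Python sketch below (our illustration; it takes l_k = 1/2 − 1/T_m(k), clipped at 0 for the few initial k where T_m(k) is too small) computes the sizes |u_k| for T_0(k) = 2^k and T_2(k) = 2^{k−2}:

```python
from fractions import Fraction

def prefix_bits(q, k):
    # first k binary places of a rational q in [0, 1)
    return format(int(q * 2**k), f"0{k}b")

def u_sizes(y, T, kmax):
    # |u_k| for k = 2..kmax, where u_k is the fresh suffix of l_k|k (Expression 4.11)
    prev = prefix_bits(max(y - Fraction(1, T(1)), Fraction(0)), 1)
    sizes = []
    for k in range(2, kmax + 1):
        cur = prefix_bits(max(y - Fraction(1, T(k)), Fraction(0)), k)
        i = 0
        while i < len(prev) and cur[i] == prev[i]:
            i += 1
        sizes.append(len(cur) - i)  # bits not shared with the previous prefix
        prev = cur
    return sizes

half = Fraction(1, 2)
s0 = u_sizes(half, lambda k: 2**k, 40)              # m = 0
s2 = u_sizes(half, lambda k: 2**max(k - 2, 1), 40)  # m = 2
# m = 0: every |u_k| is 1;  m = 2: |u_k| settles at m + 1 = 3
```

The run matches the proof: for T_0 each step only appends one bit, while for T_2 each step eventually rewrites exactly m + 1 = 3 trailing bits.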
Proposition 13. Consider a SmSM working with time schedule T(k) ∈ Ω(2^k) and vertex at position y = 1/2. Let f_2(n) be the prefix advice function
f_2(n) = w_1#w_2#...#w_n (4.16)
where (w_i)_{1≤i≤n} are given in Expression 4.11. Then |f_2(n)| ∈ O(n).
Proof. This proof is very similar to the proof of Proposition 12. Let r^m_k be the right boundary numbers for T_m(k) defined in Equation 4.13 and suppose that T(k) = T_m(k) for some m.
If m ≤ −1, then T(k) ≥ 2^{k+1} and by Equation 2.7 we have 1/2 < r_k < 1/2 + 2^{−(k+1)} = 0.10^{k−1}1. It is now clear that for any k > 3 we have r^m_k↾_k = 0.10^{k−1} and therefore w_{k+1} = 0 for k > 3. If m > −1, for k ≥ m + 2 we have again by Equation 2.7 that r_k = 1/2 + 2^{−(k−m)} = 0.10^{k−m−1}1; hence, r^m_k↾_k = 0.10^{k−m−1}10^m and, similarly to the previous case, w_{k+1} = 010^m for k ≥ m + 2. Suppose now that T(k) ∈ Ω(2^k) is an arbitrary time-constructible time schedule. Again we choose m to be the smallest integer such that α_T > α_{T_m}. Thus T(k) grows faster than T_m(k) and therefore for each k we have
1/2 ≤ r_k↾_k ≤ r^m_k↾_k. (4.17)
If m > −1, it was already seen above that for large enough k we have r^m_k↾_k = 0.10^{k−m−1}10^m and by Inequality 4.17 we must have r_k↾_{k−m} = 0.10^{k−m−1}, which allows us to conclude that for k ≥ m + 2 we have |w_{k+1}| ≤ m + 1. For the case when m ≤ −1 or α_T = ∞, we can recall that α_T > α_{T_0} and so for large enough k we have w_k = 0. Analogously to the previous case, for large enough k, |w_k| is bounded by some constant and therefore |f_2(n)| ∈ O(n).
Proposition 14. Given the boundary numbers for a SmSM without explicit time schedule T(k) ∈ Ω(2^k) and vertex at position y = 1/2, it is possible to define a prefix advice function f such that f(n) codifies the prefixes of the boundary numbers of size up to n and |f(n)| ∈ O(n).
Proof. We consider the advice function introduced in Expression 4.11 and apply directly Propositions 12 and 13, together with the fact that for every k ∈ N, |σ_k| = 1.
To better understand Propositions 12 and 13, and to illustrate that the boundary numbers are in fact as claimed in the proofs, we take two different time schedules and present the first bits of the boundary numbers after the binary point. The chosen expressions for the time schedules are
T_1(k) = k, if k < 10; ⌈π(1 − k^{−1/2}) 2^{−7} × 2^k⌉, otherwise (4.18)
and
T_2(k) = k, if k < 10; 2^{−6} × 2^k, otherwise; (4.19)
however, for computational purposes we present the results for the following time schedules, which yield the same results:
T_1(k) = π(1 − k^{−1/2}) 2^{−7} × 2^k, and T_2(k) = 2^{−6} × 2^k. (4.20)
The reason we choose these cases is that T_1(k) should induce boundary numbers that at first sight seem to have no pattern, which makes the idea easier to grasp, and T_2(k) is such that α_{T_2} < α_{T_1}. We only consider k ∈ {8, 9, ..., 15} because, for these expressions of time schedules, introducing k < 8 in Equation 2.7 returns boundary numbers outside of ]0, 1[.
Figure 4.1: Representations of r_k↾_k for T_1(k). Figure 4.2: Representations of l_k↾_k for T_1(k).
Figure 4.3: Representations of r_k↾_k for T_2(k). Figure 4.4: Representations of l_k↾_k for T_2(k).
Observing the figures, we conclude that no matter how badly the boundary numbers of T_1(k) may behave, its right boundary numbers must coincide with the right boundary numbers of T_2(k) in the first k − 7 bits, and the left boundary numbers must agree with the left boundary numbers of T_2(k) in the first k − 6 bits. This follows from the fact that T_1 grows faster than T_2 and both types of boundary numbers are bounded by 1/2. To visualize the size of f_1(n) and f_2(n), we present a graph that relates the size of such functions with n when the time schedule is given by T(k) = k² × 2^k. Note that this function is such that 2^k ∈ o(T(k)).
Figure 4.5: Size of f_1(n) when T(k) = k² × 2^k.
Figure 4.6: Size of f_2(n) when T(k) = k² × 2^k.
In this case we have α_T = ∞ and so it is to be expected that with every k we only need one more bit of information to fetch r_k↾_k and l_k↾_k. Observing Figures 4.5 and 4.6, we conclude that this is true: the slope in both graphs is equal to 2, where one symbol corresponds to # and the other is the bit that represents u_k or w_k. The same argument is valid for other time schedules that grow slightly faster than 2^k, such as T(k) = log(k) × 2^k:
Figure 4.7: Size of f_1(n) when T(k) = log(k) × 2^k.
Figure 4.8: Size of f_2(n) when T(k) = log(k) × 2^k.
When the time schedule is subexponential, Theorem 4.2.3 tells us that the size of the advice functions should be superlinear in k. Figures 4.9 to 4.12 confirm our hypothesis:
Figure 4.9: Size of f_1(k) when T(k) = k^{log(k)}.
Figure 4.10: Size of f_2(k) when T(k) = k^{log(k)}.
Figure 4.11: Size of f_1(k) when T(k) = 2^{k/2}.
Figure 4.12: Size of f_2(k) when T(k) = 2^{k/2}.
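The superlinear growth can be reproduced with exact arithmetic. The sketch below (ours; it uses the time-constructible stand-in T(k) = 2^{⌊k/2⌋} for 2^{k/2}, clips l_k at 0 for the first few k, and takes y = 1/2) sums the sizes |u_k| at two cut-off points:

```python
from fractions import Fraction

def prefix_bits(q, k):
    # first k binary places of a rational q in [0, 1)
    return format(int(q * 2**k), f"0{k}b")

def total_u_size(y, T, n):
    # sum of |u_k| for k = 2..n (Expression 4.11), with l_k = max(y - 1/T(k), 0)
    prev, total = prefix_bits(max(y - Fraction(1, T(1)), Fraction(0)), 1), 0
    for k in range(2, n + 1):
        cur = prefix_bits(max(y - Fraction(1, T(k)), Fraction(0)), k)
        i = 0
        while i < len(prev) and cur[i] == prev[i]:
            i += 1
        total += len(cur) - i
        prev = cur
    return total

half = Fraction(1, 2)
slow = lambda k: 2**max(k // 2, 1)   # stand-in for the subexponential 2^(k/2)
fast = lambda k: 2**k                # exponential schedule, for contrast
t40, t80 = total_u_size(half, slow, 40), total_u_size(half, slow, 80)
# doubling n far more than doubles the advice size for the slow schedule,
# while for T(k) = 2^k the total is exactly n - 1
```

Here every second step rewrites roughly k/2 bits of the prefix, so the total grows quadratically, matching the shape of Figures 4.11 and 4.12.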
Remark 11. Although the numbers of bits in u_k and w_k clearly depend on the time schedule, the sum of the sizes of u_k and w_k is upper bounded by some function g(x) = ax + b, where a and b are constants that appear naturally in the advice function; in other words, the expression of T(k) does not contain any information necessary to write f(n). This is the big difference between this advice function and the advice function given in Expression 4.6.
We can now write a weak version of Proposition 10 showing that, if y = 1/2, then it is possible to give an explicit advice function of linear size in its input, even without any information given by the formula of the time schedule:
Theorem 4.3.1. If A is decidable by a SmSM clocked in polynomial time with infinite precision, vertex at position y = 1/2 and time schedule T(k) ∈ Ω(2^k), then A ∈ P/log⋆.
Proof. The proof of this theorem is very similar to the proof of Theorem 4.2.1, except that we use the advice function f of Section 4.3 to code the boundary numbers and the σ_k's. Take M to be the infinite precision SmSM that decides A in polynomial time and M′ to be the machine that emulates M except that it simulates the experiment whenever M is in the query state. Given input n, M′ obtains the boundary numbers using Algorithm 8, which runs in polynomial time in |f(|z|)| = O(log(n)), where z is the query word, and by Proposition 3 the Turing machine can simulate the experiment in time O(log(n)), so we conclude that M′ runs in polynomial time. By Proposition 14 we have |f(n)| ∈ O(log(n)) and thus A ∈ P/log⋆.⁵
To show that the same results hold for any y ∈ ]0, 1[ is a lot trickier than it looks at first sight. One may think that a proof similar to the case y = 1/2 might solve the problem; however, in general it is not easy to find a formula for the boundary numbers, since they heavily depend on the vertex position, which can have almost every binary representation possible, apart from infinite representations with only a finite number of 0's. From this point on, we consider for every y ∈ ]0, 1[ its infinite representation
y = 0.0^{β_1}1^{γ_1}0^{β_2}1^{γ_2}0^{β_3}1^{γ_3}... (4.21)
To avoid any ambiguity, if y is dyadic then there exists n ∈ N such that
β_{n′} = ∞, if n′ = n; β_{n′} > 0, if 1 < n′ < n; β_{n′} = 0, if n′ > n; γ_{n′} > 0, if n′ < n; γ_{n′} = 0, if n′ ≥ n. (4.22)
Otherwise, β_n > 0 for every n ≥ 2 and γ_n > 0 for every n ≥ 1.⁶
⁵Idem.
For a real number x, we define x_i to be the i-th most significant bit of x. If x ∈ ]0, 1[ then we only start counting after the binary point, and if i > |x| then x_i = 0. For example, for x = 1010 we have x_3 = 1; for x = 110.011, x_4 = 0; and for x = 0.1111, x_4 = 1 and x_5 = 0.
Finally, we define c^l_k : [0, 1[ → N and c^r_k : [0, 1[ → N as two families of functions such that c^l_k(z) is the largest natural number k′ smaller than or equal to k such that z_{k′} = 0, and c^r_k(z) is the largest natural number k′ smaller than or equal to k such that z_{k′} = 1. We will use l_k↾_k as input for c^l_k and r_k↾_k as input for c^r_k.
Remark 12. The first important thing to notice is that, for small k, these functions might not be defined; however, we know that the vertex position y ∈ ]0, 1[ must have at least one 0 (say in position a_0) and at least one 1 (say in position a_1). It is not possible by definition to have l_k > y or r_k < y, therefore c^l_k(l_k↾_k) must be defined at least for every k > a_0 and c^r_k(r_k↾_k) must be defined at least for every k > a_1. The second important remark is that we will always give separate proofs for the left and the right boundary numbers, so for simplicity we replace c^l_k(l_k↾_k) by c_k in the proofs regarding the left boundary numbers and c^r_k(r_k↾_k) by c_k in the proofs regarding the right boundary numbers.
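In code, c^l_k and c^r_k are simply "position of the last 0 (respectively, last 1)" among the first k bits; the sketch below (ours) returns 0 when the function is undefined:

```python
def c_l(bits):
    # c^l_k applied to a k-bit prefix: 1-indexed position of its last 0,
    # or 0 if no 0 occurs (the function is then undefined)
    return bits.rfind("0") + 1

def c_r(bits):
    # c^r_k: 1-indexed position of the last 1 among the first k bits,
    # or 0 if no 1 occurs
    return bits.rfind("1") + 1
```

For instance, c_l("01011") is 3 and c_r("01011") is 5, while c_l("111") is 0 since no 0 occurs.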
Again we divide the main result (Theorem 4.4.1) into smaller propositions. Proposition 15 states that the general case T(k) ∈ Ω(2^k) can be reduced to showing that the advice function has linear size in its input when T(k) = 2^k. Propositions 19 and 20 then assume that T(k) = 2^k and show that the advice functions codifying the left and the right boundary numbers, respectively, indeed have linear size.
Proposition 15. Let f be the advice function introduced in Expression 4.11, obtained with arbitrary vertex at position y ∈ ]0, 1[ and time schedule T(k) = 2^k. If |f(n)| ∈ O(n), then for any time schedule T′(k) ∈ Ω(2^k) we still have |f(n)| ∈ O(n).
Proof. This proof follows the same idea as the proofs of Propositions 12 and 13. Suppose f has linear size in its input when T(k) = 2^k and consider the time schedules (T_m)_{m∈N_0} defined in Expression 4.13. Let r_k and l_k be the boundary numbers for T(k) and r^m_k and l^m_k be the boundary numbers for T_m(k). For each m and large enough k we have r_k = r^m_{k+m} and l_k = l^m_{k+m}. Recalling Expression 4.11, this means that x_k = x^m_{k+m} and by definition we have |u^m_{k+m}| ≤ |u_k| + m. Analogously we get |w^m_{k+m}| ≤ |w_k| + m. Hence, if g_m is the advice function when the time schedule is T_m, then asymptotically we have |g_m(n)| ∈ O(n).
⁶If y ∈ [1/2, 1[ then β_1 = 0.
Let T′(k) ∈ Ω(2^k) be any time schedule and α_{T′} defined as in Proposition 12. Again we take m to be the smallest natural number such that α_{T′} > α_{T_m}, so that for large enough k we have y > l′_k↾_k ≥ l^m_k↾_k. Assuming g_m has linear size in its input, if the advice function f for the time schedule T′ had superlinear size, then the sequence (|u_k| − |u^m_k|)_{k∈N} could not be upper bounded and so there would be infinitely many k such that c^m_k > c′_k, where these numbers are defined as in Proposition 17. Let now k ∈ N be large enough to witness both c^m_k − c′_k > m + 1 and y > l′_k↾_k ≥ l^m_k↾_k. We consider the following four numbers: y > l^0_k > l′_k > l^m_k, where all of the inequalities hold by definition, and define c^0_k, c′_k and c^m_k to be the values of c_k for l^0_k, l′_k and l^m_k respectively. By Proposition 17 we have that l^0_k↾_{c^0_k−1} = y↾_{c^0_k−1} and, since |u^m_{k+m}| ≤ |u_k| + m, we must also have l^m_k↾_{c^0_k−1−m} = y↾_{c^0_k−1−m}. We will reach a contradiction to this fact.
The first thing to notice is that c^m_k ≥ c^0_k − m and, because we set c^m_{k−1} − c′_{k−1} > m + 1, we have that c′_k < c^0_k − 1 − m. Analogously, the inequalities l^0_k > l′_k > l^m_k allow us to conclude that l′_k↾_{c′_k−1} = y↾_{c′_k−1}. Suppose that y_{c′_k} = 0; then, since l^0_k↾_{c^0_k−1} = y↾_{c^0_k−1}, it must be the case that (l^0_k)_{c′_k} = 0; however, l^0_k has a 0 in position c^0_k and by definition of c′_k we would have l′_k > l^0_k, which is a contradiction. Similarly, if y_{c^m_k} = 0 then l^m_k > l^0_k, and so we have y_{c′_k} = y_{c^m_k} = 1. Since l^m_k < l′_k, it must happen that (l^m_k)_{c′_k} = 0, which is a contradiction, since we know that y_{c′_k} = 1 and l^m_k↾_{c^0_k−1−m} = y↾_{c^0_k−1−m}. Hence, f cannot be superlinear, which means that |f(n)| ∈ O(n).
Figures 4.13 through 4.20 depict the size of the advice functions f_1 and f_2 described in Propositions 12 and 13 respectively, using vertex at position y = 1/e and time schedules
T_1(k) = (1 − 1/k) 2^k, T_2(k) = k³ / (2(k−1)(k²−2)) × 2^k, T_3(k) = log(k)(k+1) / (k·π·log(k+2)) × 2^k, and T_4(k) = log(k)² / (4·log(k+1)·log(k−1)) × 2^k.
Figure 4.13: Size of f_1(n) using T_1. Figure 4.14: Size of f_2(n) using T_1.
Figure 4.15: Size of f_1(n) using T_2. Figure 4.16: Size of f_2(n) using T_2.
Figure 4.17: Size of f_1(n) using T_3. Figure 4.18: Size of f_2(n) using T_3.
Figure 4.19: Size of f_1(n) using T_4. Figure 4.20: Size of f_2(n) using T_4.
Although we have not yet proved that the size of f(n) is linear, Figures 4.13 to 4.20 certainly suggest that fact. More importantly, we can see that if we increase the value of m by 1, then we should need, on average, one more bit to compute f_1(n) and f_2(n), as shown in the proof of Proposition 15. Lastly, for large enough k we have T_3(k) > T_4(k), and indeed the size of f(n) using time schedule T_3 is upper bounded by the size of f(n) when using T_4 as time schedule.
The problem is now reduced to showing that if we use the time schedule T(k) = 2^k, then for any y ∈ ]0, 1[ the new advice function has linear size in its input.
4.4 A particular time schedule
In Section 4.3 we presented a new type of advice function capable of codifying the information needed to simulate the physical experiment and stated that this function has linear size in the query word (see Propositions 19 and 20). To prove this result, we need to prove that the statement holds in particular when the time schedule is given by T(k) = 2^k; however, the proof of this particular case rests on several auxiliary results. In this section we present these results, which give some insight into the behaviour of the left and right boundary numbers and how well they behave even if we consider a non-computable vertex position y, as long as we use time schedule T(k) = 2^k. We close the chapter by showing that the size of our new advice function is indeed linear in the input.
We prove Propositions 16, 17 and 18 by repeated use of a contradiction argument, taking advantage of the fact that infinite representations with a finite number of 0's are not allowed.
Proposition 16. Let T(k) = 2^k and y ∈ ]0, 1[. Then, for any k > 2, we have (l_k)_k = (r_k)_k ≠ y_k.
Proof. Suppose first that (l_k)_k = y_k holds and recall that for any k ∈ N we have y − l_k = 2^{−k}. Consider the following cases:
• l_k↾_k = y↾_k:
We consider the largest possible value of y − l_k and show that it is smaller than 2^{−k}. This value is achieved when l_k = l_k↾_k and y = y↾_k · 111.... Since the infinite representation of y cannot have a finite number of 0's, there exist infinitely many numbers a ∈ N such that y_{k+a} = 0. Let a′ be the least of these numbers. We obtain y + 2^{−(k+a′)} < l_k + 2^{−k}, that is, y − l_k < 2^{−k} − 2^{−(k+a′)} < 2^{−k}, which is a contradiction since we must have y − l_k = 2^{−k}.⁷
• l_k↾_k ≠ y↾_k:
In this case we consider the smallest possible value of y − l_k and show it is larger than 2^{−k}. This is achieved when we have simultaneously y = y↾_k, l_k↾_{k−2} = y↾_{k−2}, (l_k)_{k−1} = 0, y_{k−1} = 1 and l_k = l_k↾_k · 111.... Again there exists a least a′ such that (l_k)_{k+a′} = 0 and, similarly to the previous case, we have y > l_k + 2^{−(k+a′)} + 2^{−k}, that is, y − l_k > 2^{−k} + 2^{−(k+a′)} > 2^{−k}. Again we get a contradiction, since y − l_k = 2^{−k}.
Suppose now that (r_k)_k = y_k holds and recall that r_k − y = 2^{−k}. Again we consider two cases and reach a contradiction similar to the previous two:
• r_k↾_k = y↾_k:
We consider the largest possible value of r_k − y, which is achieved when y = y↾_k and r_k = r_k↾_k · 111.... We thus conclude that r_k < y + 2^{−k}, that is, r_k − y < 2^{−k}, contradicting the fact that r_k − y = 2^{−k}.
• r_k↾_k ≠ y↾_k:
We now consider the smallest possible value of r_k − y, achieved when we have simultaneously r_k = r_k↾_k, r_k↾_{k−2} = y↾_{k−2}, (r_k)_{k−1} = 1, y_{k−1} = 0 and y = y↾_k · 111.... We have r_k > y + 2^{−k}, that is, r_k − y > 2^{−k}, which cannot happen because r_k − y = 2^{−k}.
Since there are only two possible values for the bits (0 and 1) and we have proved that (l_k)_k ≠ y_k and (r_k)_k ≠ y_k, it must happen that (l_k)_k = (r_k)_k.
This proposition has a very powerful consequence: if we want to read the value of a (possibly non-computable) unknown vertex position y ∈ ]0, 1[, it is enough to set the time schedule to T(k) = 2^k and, for each k > 2, read l_k, look at the value of (l_k)_k and simply compute y_k = 1 − (l_k)_k.
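This bit-reading recipe is easy to test with exact rational arithmetic. The Python sketch below (ours; it assumes boundary numbers of the form l_k = y − 2^{−k} and r_k = y + 2^{−k}, as used in the proof) recovers the bits of the non-dyadic vertex y = 1/3:

```python
from fractions import Fraction

def kth_bit(q, k):
    # k-th binary place of a rational q in (0, 1)
    return int(q * 2**k) % 2

y = Fraction(1, 3)                 # y = 0.010101..., non-dyadic
for k in range(3, 60):
    lk = y - Fraction(1, 2**k)     # left boundary number for T(k) = 2^k
    rk = y + Fraction(1, 2**k)     # right boundary number
    # Proposition 16: the k-th bits of l_k and r_k agree and differ from y's
    assert kth_bit(lk, k) == kth_bit(rk, k) == 1 - kth_bit(y, k)
```

So y_k = 1 − (l_k)_k recovers y bit by bit; a computable y is used here only so the claim can be tested, the recipe itself does not depend on computability of y.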
Proposition 17. Let T(k) = 2^k and y ∈ ]0, 1[. Then, for k > a_0 we have l_k↾_{c_k−1} = y↾_{c_k−1}.
Proof. In order to get a contradiction, suppose that l_k↾_{c_k−1} ≠ y↾_{c_k−1}. Consider the smallest possible value of y − l_k, which happens when c_k = k, l_k↾_{k−2} = y↾_{k−2}, (l_k)_{k−1} = 0 ≠ y_{k−1}, y = y↾_k and l_k = l_k↾_k · 111.... By taking a′ to be the smallest natural number such that (l_k)_{k+a′} = 0, we conclude that y − l_k > 2^{−k}, which is a contradiction since we must have y − l_k = 2^{−k}. Hence, we conclude that l_k↾_{c_k−1} = y↾_{c_k−1}.
⁷In order to have an equality we would need to sum over all a's such that y_{k+a} = 0.
Proposition 18. Let T(k) = 2^k and y ∈ ]0, 1[. Then, for k > a_1 we have r_k↾_{c_k−1} = y↾_{c_k−1}.
Proof. Similarly to Proposition 17, suppose that r_k↾_{c_k−1} ≠ y↾_{c_k−1}. We now consider the smallest possible value of r_k − y, which happens when c_k = k, r_k↾_{k−2} = y↾_{k−2}, (r_k)_{k−1} = 1 ≠ y_{k−1}, y_k = 1, r_k = r_k↾_k and y = y↾_k · 111.... Taking a′ once more to be the smallest natural number such that y_{k+a′} = 0, we conclude that r_k > y + 2^{−(k+a′)} + 2^{−k}, that is, r_k − y > 2^{−(k+a′)} + 2^{−k} > 2^{−k}, which is again a contradiction since we must have r_k − y = 2^{−k}. Thus, it must hold that r_k↾_{c_k−1} = y↾_{c_k−1}.
Remark 13. Intuitively, Propositions 17 and 18 state that l_k coincides with y at least until the bit before the last 0 that appears in the first k bits, and r_k coincides with y at least until the bit before the last 1. Since the sequences (l_k) and (r_k) are increasing and decreasing respectively, we must have c_k ≤ c_{k+1} for any k.
We start by noticing that |u_k| and |w_k| in Expression 4.11 are both bounded by k − c_{k−1} + 1 (recall Propositions 17 and 18). However, this does not solve our problem, since there exist real numbers for which the sequence k − c_{k−1} is not bounded.⁸ Even though the number of bits needed cannot be bounded, we will show the linearity of |f(n)| for T(k) = 2^k by showing that
∑_{k=a_0}^{n} |u_k| ≤ 2n − 2(a_0 − 1) and ∑_{k=a_1}^{n} |w_k| ≤ 2n − 2(a_1 − 1), (4.23)
where a_0 and a_1 are the constants mentioned in Remark 12.
Before we state Propositions 19 and 20, we first depict the sequences (l_k↾_k)_{8≤k≤15} and (r_k↾_k)_{8≤k≤15} when y = 1/e = 0.010111100010110...
Figure 4.21: l_k↾_k for 8 ≤ k ≤ 15. Figure 4.22: r_k↾_k for 8 ≤ k ≤ 15.
These two figures are of great help in understanding the several steps of the following two propositions, and it is advisable to keep them in mind when reading their proofs.
Proposition 19. Consider a SmSM working with time schedule T(k) = 2^k and vertex at position y ∈ ]0, 1[. Let f_1(n) be the prefix advice function
f_1(n) = u_1#u_2#...#u_n (4.24)
where (u_i)_{1≤i≤n} are given in Expression 4.11. Then |f_1(n)| ∈ O(n).
⁸Take for example y = 0.1 01^2 01^3 01^4 01^5 0..., which does not bound |u_k|.
Proof. Fix k > 2 and assume first that (l_k)_k = 1. We have two options for (l_{k−1})_{k−1}: if this value is 0, then taking c_{k−1} = k − 1 in Proposition 17 we conclude that l_{k−1}↾_{k−2} = y↾_{k−2}. Furthermore, by Remark 13 we have |u_k| ≤ 2, and by Proposition 16 we get y_{k−1} = 1 and y_k = 0, so if |u_k| = 2 then we would have l_k↾_k > y, which cannot happen. Hence, |u_k| = 1. If on the other hand (l_{k−1})_{k−1} = 1, then by definition of c_k every bit of l_{k−1}↾_{k−1} from c_{k−1} + 1 to k − 1 is a 1. Since (l_k)_k = 1, if |u_k| > 1 we would need to change (l_k)_{c_{k−1}} to keep the sequence (l_k↾_k) from decreasing, but then, even if we changed every 1 to a 0 except for the last one, we would get y < l_k↾_k. In conclusion, if (l_k)_k = 1 then |u_k| = 1.
Assume now (l_k)_k = 0, implying c_k = k. By definition of c_{k−1}, every bit of l_{k−1}↾_{k−1} from c_{k−1} to k − 1 is a 1. Furthermore, by Proposition 16 and the first part of this proof we know that y_{c_{k−1}+1} = ... = y_{k−1} = 0, and since c_k = k we must have (l_k)_{c_{k−1}+1} = ... = (l_k)_{k−1} = 0. We also need (l_k)_{c_{k−1}} = 1 in order to have l_k↾_k ≥ l_{k−1}↾_{k−1}. Thus, if (l_k)_k = 0 then |u_k| = k − c_{k−1} + 1.
It is now easy to show that the size of f_1 is linear in n. Let y ∈ ]0, 1[ have an infinite binary representation according to Expression 4.21. If y is dyadic, then there exists k such that y_{k′} = 0 for every k′ > k. By Proposition 16 this implies that (l_{k′})_{k′} = 1 and, using the first part of this proof, we conclude that |u_{k′}| = 1. Therefore, |f_1(n)| ∈ O(n). If y is not dyadic, then for each n > 2 we focus on the string s_n = 0^{β_n}1^{γ_n} and see how many bits we need to codify the prefixes of the respective boundary numbers. Let m_n be the number such that y_{m_n} is the first bit of s_n, let m′_n = m_n + β_n and m″_n = m′_n + γ_n − 1. According to Proposition 16, (l_m)_m = ... = (l_{m′_n−1})_{m′_n−1} = 1 and (l_{m′_n})_{m′_n} = ... = (l_{m″_n})_{m″_n} = 0, so, by the first part of this proof, |u_m| = ... = |u_{m′−1}| = 1, |u_{m′}| = m′ − (m − 1) + 1 and |u_{m′+1}| = ... = |u_{m″}| = 2. Hence, the total number of bits needed is given by β_n + (β_n + 2) + 2(γ_n − 1) = 2(β_n + γ_n).
Hence, we have
∑_{k=a_0}^{n} |u_k| ≤ 2n − 2(a_0 − 1)
and, when considering the separators and the first a_0 boundary numbers, we conclude that
|f_1(n)| ≤ a_0(a_0 + 1)/2 + 3n,
which implies that |f_1(n)| ∈ O(n).
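The bound ∑|u_k| ≤ 2n can be probed numerically. The sketch below (ours; again with the simplified l_k = y − 2^{−k}, and with vertices chosen above 1/2 so that l_1 > 0) totals the |u_k| for a few non-dyadic rationals:

```python
from fractions import Fraction

def prefix_bits(q, k):
    # first k binary places of a rational q in [0, 1)
    return format(int(q * 2**k), f"0{k}b")

def u_total(y, n):
    # sum of |u_k|, k = 1..n, for T(k) = 2^k (so l_k = y - 2^-k)
    prev, total = "", 0
    for k in range(1, n + 1):
        cur = prefix_bits(y - Fraction(1, 2**k), k)
        i = 0
        while i < len(prev) and cur[i] == prev[i]:
            i += 1
        total += len(cur) - i
        prev = cur
    return total

n = 50
for y in (Fraction(2, 3), Fraction(5, 7), Fraction(7, 9)):
    assert n <= u_total(y, n) <= 2 * n   # the 2n bound of Proposition 19
```

For y = 2/3 = 0.101010... the sizes |u_k| alternate between 1 and 3 from k = 3 onwards, so the total sits just below the 2n ceiling, matching the block count 2(β_n + γ_n) in the proof.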
Proposition 20. Consider a SmSM working with time schedule T(k) = 2^k and vertex at position y ∈ ]0, 1[. Let f_2(n) be the prefix advice function
f_2(n) = w_1#w_2#...#w_n (4.25)
where (w_i)_{1≤i≤n} are given in Expression 4.11. Then |f_2(n)| ∈ O(n).
Proof. Similarly to the proof of Proposition 19, we show that Inequality 4.23 holds. Fix k > 2 and assume that (r_k)_k = 0. If (r_{k−1})_{k−1} = 1, then taking c_{k−1} = k − 1 in Proposition 18 we conclude that r_{k−1}↾_{k−2} = y↾_{k−2}. Furthermore, by Remark 13 we have |w_k| ≤ 2, and by Proposition 16 we get y_{k−1} = 0 and y_k = 1, so if |w_k| = 2 then we would have r_k↾_k < y, which cannot happen. Hence |w_k| = 1. If on the other hand (r_{k−1})_{k−1} = 0, then by definition of c_k every bit of r_{k−1}↾_{k−1} from c_{k−1} + 1 to k − 1 is a 0. Since (r_k)_k = 0, if |w_k| > 1 we would need to change (r_k)_{c_{k−1}} to keep the sequence (r_k↾_k) from increasing, but then, even if we changed every 0 to a 1 except for the last one, we would get y > r_k↾_k. In conclusion, if (r_k)_k = 0 then |w_k| = 1.
If (r_k)_k = 1 then c_k = k. By definition of c_{k−1}, every bit of r_{k−1}↾_{k−1} from c_{k−1} + 1 to k − 1 is a 0. Furthermore, by Proposition 16 and the first part of this proof we know that y_{c_{k−1}+1} = ... = y_{k−1} = 1, and since c_k = k we must have (r_k)_{c_{k−1}+1} = ... = (r_k)_{k−1} = 1. We also need (r_k)_{c_{k−1}} = 0 in order to have r_k↾_k ≤ r_{k−1}↾_{k−1}. Thus, if (r_k)_k = 1 then |w_k| = k − c_{k−1} + 1.
Finally, similarly to the proof of Proposition 19, if y is dyadic then there exists k such that for every k′ > k we have y_{k′} = 0, meaning that (r_{k′−1})_{k′−1} = 1 and so |w_k| = k − c_{k−1} + 1 = k − (k − 1) + 1 = 2. If y is not dyadic, then for any n > 1 the number of bits needed to encode the prefixes of the right boundary numbers represented by 1^{γ_n}0^{β_{n+1}} is given by γ_n + (γ_n + 2) + 2(β_{n+1} − 1) = 2(γ_n + β_{n+1}).
Similarly to the previous proof, we have
∑_{k=a_1}^{n} |w_k| ≤ 2n − 2(a_1 − 1)
and, when considering the separators and the first a_1 boundary numbers, we conclude that
|f_2(n)| ≤ a_1(a_1 + 1)/2 + 3n,
which implies that |f_2(n)| ∈ O(n).
Figures 4.21 and 4.22 illustrate that for k > 2 we have (l_k)_k = (r_k)_k ≠ y_k, just as Proposition 16 states. Furthermore, we can see that if y_k = 0 then |u_k| = |w_k| = 1, and otherwise |u_k| = |w_k| = k − c_{k−1} + 1, as shown in Propositions 19 and 20.
Proposition 21. For any time-constructible time schedule T(k) ∈ Ω(2^k) and vertex position y ∈ ]0, 1[, it is possible to define a prefix advice function f such that f(n) codifies r_k↾_k and l_k↾_k for 1 ≤ k ≤ n and |f(n)| ∈ O(n).
Proof. We consider the advice function introduced in Expression 4.11 and apply directly Propositions 15, 19 and 20, together with the fact that for every k ∈ N, |σ_k| = 1.
We can now present a more general version of Theorem 4.3.1:
Theorem 4.4.1. If A is decidable by an infinite precision SmSM with vertex at position y ∈ ]0, 1[, clocked in polynomial time and with time schedule T(k) ∈ Ω(2^k), then A ∈ P/log⋆.
4.5 Lower bounds
We start this section by proving an auxiliary result:
Proposition 22. Take a prefix function f ∈ log. There exists a SmSM M, clocked in polynomial time, equipped with a time schedule T(k) ∈ Θ(2^k) and access to a SmSE with vertex at position y(f), as described in Section 4.1, that computes f(n) in time polynomial in n.
Proof. Let m ∈ N be such that 2^{m−1} < n ≤ 2^m. Since f ∈ log, there exist a, b ∈ N such that |f(n)| ≤ a⌈log n⌉ + b = am + b, and according to the coding explained in Section 4.1 we only need 3 bits to code each symbol of f; thus we need at most k = 3(am + b) + 3(m + 1) bits of y(f) to obtain f(n), where 3(m + 1) of those bits are used to encode the separators.
Now we apply the linear search algorithm introduced at the beginning of Section 2.1. Since y(f) ∈ C_3, by Proposition 9, for any dyadic rational z with |z| = k we know that |z − y(f)| > 2^{−(k+10)}. To make sure that there are no timeouts during the execution of the algorithm, we need t_exp(z) < T(|z|), and by Equation 2.1 it is enough to consider T(k) = B·2^{k+10}, which can be simplified to T(k) = 2^{k+10} by assuming B = 1. The linear search algorithm takes time O(n log n), which is polynomial in n, and given the corresponding prefix of y(f) we can obtain f(2^m) in time linear in n.
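The search just described can be sketched in a few lines. The oracle interface below is a hypothetical abstraction of one run of the scatter experiment (answering "left" or "right"), and the function is a sketch of the idea rather than the thesis's Algorithm 1:

```python
def linear_search(oracle, k):
    """Recover the first k binary places of an unknown vertex position.

    oracle(z) abstracts one run of the scatter experiment: it returns
    'left' if the dyadic query z lies strictly to the left of the
    vertex, and 'right' otherwise.  One query is made per bit, so with
    a time schedule T(k) = 2^(k+10) the total time stays in O(k*T(k)).
    """
    z = 0.0
    for i in range(1, k + 1):
        candidate = z + 2.0 ** -i      # tentatively set the i-th bit to 1
        if oracle(candidate) == 'left':
            z = candidate              # keep the bit: candidate < vertex
    return z
```

For a non-dyadic vertex position y, the returned dyadic z satisfies |y − z| < 2^{−k}, i.e. z coincides with y in the first k binary places.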
Theorem 4.5.1. If A ∈ P/log?, then there exists a SmSM M with infinite precision and time schedule T(k) ∈ Ω(2^k) that decides A when clocked in polynomial time.
Proof. If A ∈ P/log?, then there exist D ∈ P and a prefix function f ∈ log such that, for all w ∈ {0, 1}^* and for all n ≥ |w|:

w ∈ A ⇔ ⟨w, f(n)⟩ ∈ D.    (4.26)
Let M_D be the Turing machine, clocked in polynomial time p_D, that witnesses D ∈ P. Consider the SmSM M that, given w, reads some of the places of the vertex position y(f) (see Expression 4.2) using Algorithm 1 and a time schedule T(k) = C·2^k to get f(|w|). Since |w| may not be a power of 2, the SmSM reads instead c(f(n)), where n is the smallest power of 2 such that n ≥ |w|. By hypothesis f ∈ log, therefore M reads
l = 3(a⌈log |w|⌉ + b) + 3(⌈log |w|⌉ + 1)    (4.27)

bits of y(f), where 3(⌈log |w|⌉ + 1) is the number of bits used for the separators. Running Algorithm 1 with parameter l + 5 − 1, the algorithm returns a dyadic rational m such that |y(f) − m| < 2^{−(l+5)}; thus, by Proposition 9, m and y(f) must coincide in the first l bits. By Remark 2 this algorithm runs in time O(l·T(l)), and since l is logarithmic in the size of the input, the linear search algorithm takes polynomial time in |w|. The total running time of M is therefore polynomial in the size of the input word.
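The decoding step performed by M can be illustrated with a toy version of the symbol coding. The concrete 3-bit map below (0 ↦ 100, 1 ↦ 010, separator # ↦ 001) is a hypothetical stand-in; the actual coding c(·) is the one defined in Section 4.1, but the bit budget (3 bits per symbol, 3 bits per separator) is the same:

```python
# Hypothetical 3-bit map for the advice symbols plus a separator;
# only the bit budget matches the coding c(.) of Section 4.1.
ENC = {'0': '100', '1': '010', '#': '001'}
DEC = {code: sym for sym, code in ENC.items()}

def encode(words):
    """Concatenate the advice words, each closed by a '#' separator."""
    return ''.join(ENC[s] for s in '#'.join(words) + '#')

def decode_prefix(bits, n_words):
    """Read 3-bit blocks from a prefix of y(f)'s bits until n_words
    words (each closed by a separator) have been recovered."""
    words, current = [], []
    for i in range(0, len(bits), 3):
        sym = DEC[bits[i:i + 3]]
        if sym == '#':
            words.append(''.join(current))
            current = []
            if len(words) == n_words:
                break
        else:
            current.append(sym)
    return words
```

Each advice word of length ℓ occupies 3ℓ + 3 bits, matching the count 3(am + b) + 3(m + 1) used in the proofs above.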
Theorem 4.5.2. A set A is decidable by a SmSM clocked in polynomial time, with infinite precision and without an explicit exponential time schedule T(k) ∈ Ω(2^k), if and only if A ∈ P/log?.
Proof. It follows from Theorems 4.4.1 and 4.5.1.
Chapter 5
Conclusions and further work
To study the computational power of the ARNN proposed by Hava Siegelmann, we consider a physical measurement experiment called the Sharp Scatter Experiment, which is equivalent to the ARNN but much simpler to understand. However, this experiment is not feasible due to non-smooth boundary conditions, so we turn our attention to a smooth version of the Scatter Experiment. The only difference between the two versions, besides the analyticity of the wedge, is that the sharp version takes constant time to return an answer while the smooth version takes time exponential in the level of required precision. It is therefore of interest to study which sets are decided by these machines with physical oracles when bounded in time: in our case we consider oracle Turing machines running in polynomial time. Similarly to the sharp version, the SmSM also has some feasibility problems, namely that one would need either unbounded precision in finite space (impossible given the properties of matter) or unbounded space and an unlimited physical support structure. For these reasons, we conceive of these machines as concepts rather than as actually feasible machines.
In Chapter 3 we fix the vertex position at 1/2 and study the influence of the nature of the time schedule by considering three types of time schedules: time-constructible functions, computable increasing total functions, and total increasing functions. Since y is computable in polynomial time, using a time-constructible function does not change the computational power of the machine, as supported by Theorem 3.2.5, simply because the boundary numbers are computable in polynomial time and hence the advice function can also be computed in polynomial time. On the other hand, when the definition of time schedule is relaxed to allow all total increasing functions, Theorem 3.2.1 shows that the power becomes characterized by the class P/poly, as conjectured by Hava Siegelmann in [24]. We completed the proof of Theorem 3.2.4 by showing that, given a recursive set A decidable in polynomial time by a deterministic Turing machine with the help of a tally set, it is possible to decide the same set in polynomial time by a (possibly different) deterministic Turing machine with the help of a recursive tally set (see Proposition 8), allowing us to identify P/poly ∩ REC as the computational power obtained when using computable increasing functions as time schedules.
Regarding Chapter 4, we showed in Section 4.5 that introducing a non-computable vertex position is enough for a SmSM working with infinite precision to break the Turing barrier, as the lower bound was proved to be P/log?. Concerning the upper bound, we showed in Section 4.2 that P/log2? is indeed an upper bound. If we use a time schedule T(k) ∉ Ω(2^k), Theorem 4.2.3 states that, when the experimental time is as in Expression 2.6 with n = 2, the class P/log? may not be achievable, as there exists some set that cannot be decided using a logarithmic-sized advice function; however, if we assume the exponent to be 1/λ, then it is possible to use a logarithmic-sized advice function with time schedules in Ω(2^{k/λ}), as stated at the end of Section 4.2. If, on the other hand, we use a time schedule T(k) ∈ Ω(2^k), Proposition 10 shows that it is possible to achieve P/log? as a better upper bound, thus completely characterizing the power of SmSMs working with infinite precision and exponential time schedule as P/log?. This raised the question of whether it is possible to achieve the same upper bound for unbounded precision without using the explicit formula of the time schedule. By introducing in Section 4.3 a new way of coding the boundary numbers that can be applied to any time schedule, we showed that it is possible to fetch the necessary information in time polynomial in the size of the advice function; furthermore, if T(k) ∈ Ω(2^k) then the advice function has linear size in n. This advice function needs only the boundary numbers, and not the formula of the time schedule, to be written, and so our hope is that it can be modified so as to apply to the unbounded precision case, solving the open problem of whether it is possible to have P/log? as the upper bound for both error-prone precision protocols.
The next step of future research would be to simplify the proof of Theorem 4.4.1, as this proof requires several auxiliary propositions, each containing several steps that demand some intuition about the behaviour of the boundary numbers for the time schedule T(k) = 2^k. A simpler proof would help the reader understand and accept the results we obtained. The open problem of whether a similar proof can be used to improve the upper bounds of SmSMs working with either fixed or arbitrary precision from BPP//log2? to BPP//log? also remains unsolved. On one hand, the advice function introduced in Section 4.3 might provide new insight into the problem, which could lead to a new type of proof showing that it is indeed possible to achieve BPP//log? as the upper bound. On the other hand, the arbitrary precision protocol uses a linear search algorithm similar to Algorithm 1, taking advantage of the fact that we can make the error as small as we want. A detailed discussion can be found in [17], where it is proven that, due to the possible error of the Scatter Experiment, in order to have an approximation of the vertex position we need some additional bits of information; in other words, having a query word of size l is not enough to guarantee an approximation up to 2^{−l} of the vertex position, forcing the use of the explicit time schedule to determine those extra bits. This is somewhat discouraging, as we are working with time-bounded machines: if we query a word that is too big, the machine might not have the necessary time to write the approximation and the extra bits. If this is indeed the case, then it would not be possible for BPP//log? to be the lower bound. Since the explicit formula of the time schedule is needed to obtain those extra bits, a possible solution to the open problem could involve a modification of the linear search algorithm so that it returns an approximation up to 2^{−l} of the vertex position having only the boundary numbers up to size l as available information. At first sight this does not seem feasible, as it would require a search faster than binary search.
Bibliography
[1] H. Siegelmann. Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, 1999.
[2] M. Davis. The myth of hypercomputation. In C. Teuscher, editor, Alan Turing: the life and legacy of
a great thinker, pages 195–212. Springer, 2006.
[3] M. Davis. Why there is no such discipline as hypercomputation. Applied Mathematics and Compu-
tation, 178(1):4–7, 2006.
[4] E. Beggs, J. F. Costa, B. Loff, and J. V. Tucker. Computational complexity with experiments as
oracles. Proceedings of the Royal Society, Series A (Mathematical, Physical and Engineering
Sciences), 464(2098):2777–2801, 2008.
[5] E. Beggs, J. F. Costa, B. Loff, and J. V. Tucker. Oracles and advice as measurements. In C. S.
Calude, J. F. Costa, R. Freund, M. Oswald, and G. Rozenberg, editors, Unconventional Computa-
tion (UC 2008), volume 5204 of Lecture Notes in Computer Science, pages 33–50. Springer-Verlag,
2008.
[6] E. Beggs, J. F. Costa, and J. V. Tucker. Physical experiments as oracles. Bulletin of the European
Association for Theoretical Computer Science, 97:137–151, 2009. An invited paper for the “Natural
Computing Column”.
[7] E. Beggs, J. F. Costa, D. Pocas, and J. V. Tucker. Oracles that measure thresholds: The Turing
machine and the broken balance. Journal of Logic and Computation, 23(6):1155–1181, 2013.
[8] E. Beggs, J. F. Costa, and J. V. Tucker. Physical oracles: The Turing machine and the Wheatstone
bridge. Studia Logica, 95(1–2):279–300, 2010. Special issue on Contributions of Logic to the
Foundations of Physics, Editors D. Aerts, S. Smets & J. P. Van Bendegem.
[9] E. Beggs, J. F. Costa, and J. V. Tucker. The impact of models of a physical oracle on computational
power. Mathematical Structures in Computer Science, 22(5):853–879, 2012. Special issue on
Computability of the Physical, Editors Cristian S. Calude and S. Barry Cooper.
[10] E. Beggs, J. F. Costa, B. Loff, and J. V. Tucker. Computational complexity with experiments as
oracles II. Upper bounds. Proceedings of the Royal Society, Series A (Mathematical, Physical and
Engineering Sciences), 465(2105):1453–1465, 2009.
[11] E. Beggs, J. F. Costa, D. Pocas, and J. V. Tucker. An Analogue-Digital Church-Turing Thesis.
International Journal of Foundations of Computer Science, 25(4):373–389, 2014.
[12] E. Beggs, J. F. Costa, D. Pocas, and J. V. Tucker. On the power of threshold measurements
as oracles. In G. Mauri, A. Dennunzio, L. Manzoni, and A. E. Porreca, editors, Unconventional
Computation and Natural Computation (UCNC 2013), volume 7956 of Lecture Notes in Computer
Science, pages 6–18. Springer, 2013.
[13] E. Beggs, J. F. Costa, D. Pocas, and J. V. Tucker. Computations with oracles that measure vanishing
quantities. Mathematical Structures in Computer Science, 23(6):1155 – 1181, 2017.
[14] E. Beggs, J. F. Costa, and J. V. Tucker. Computational Models of Measurement and Hempel’s Ax-
iomatization. In A. Carsetti, editor, Causality, Meaningful Complexity and Knowledge Construction,
volume 46 of Theory and Decision Library A, pages 155–184. Springer, 2010.
[15] E. Beggs and J. V. Tucker. Experimental computation of real numbers by Newtonian machines.
Proceedings of the Royal Society, Series A (Mathematical, Physical and Engineering Sciences),
463(2082):1541–1561, 2007.
[16] E. Beggs, J. F. Costa, and J. V. Tucker. Limits to measurement in experiments governed by algo-
rithms. Mathematical Structures in Computer Science, 20(06):1019–1050, 2010. Special issue on
Quantum Algorithms, Editor Salvador Elías Venegas-Andraca.
[17] T. Ambaram, E. Beggs, J. F. Costa, D. Pocas, and J. V. Tucker. An analogue-digital model of compu-
tation: Turing machines with physical oracles. In A. Adamatzky, editor, Advances in Unconventional
Computing, Volume 1 (Theory), volume 22 of Emergence, Complexity and Computation, pages
73–115. Springer, 2016.
[18] J. L. Balcázar, J. Díaz, and J. Gabarró. Structural Complexity I. Springer-Verlag, 2nd edition, 1995.
[19] R. Durrett. Probability: Theory and Examples. Duxbury Press, 3rd edition, 2005.
[20] E. Beggs, J. F. Costa, and J. V. Tucker. Unifying science through computation: Reflections on
computability and physics. In O. Pombo, J. M. Torres, J. Symons, and S. Rahman, editors, New
Approaches to the Unity of Science, Vol. II: Special Sciences and the Unity of Science, volume 24
of Logic, Epistemology, and the Unity of Science, pages 53–80. Springer, 2012.
[21] M. Pour-El and I. Richards. The wave equation with computable initial data such that its unique solution is not computable. Advances in Mathematics, 39:215–239, 1981.
[22] R. Penrose. The Emperor’s New Mind. Oxford University Press, 1989.
[23] R. Penrose. Shadows of the Mind. Oxford University Press, 1994.
[24] H. T. Siegelmann. Computation beyond the Turing limit. Science, 268(5210):545–548, 1995.
Index
ADTM, 2
ARNN, 1
σk, 14
xi, 45
zn, 10
advice
advice function, 11
prefix advice function, 11
boundary numbers, 11
Cantor set, 32
classes
C/F , 11
C/F?, 11
AP (F), 23
AP (f), 23
non-uniform complexity classes, 11
codings
c(w), 33
y(f), 33
experimental time, 8
linear search algorithm, 10
lower bound, 51
oracle, 1
oracle Turing machine, 1
“timeout” state, 2
“left” state, 5
“no” state, 5
“right” state, 5
“yes” state, 5
query state, 1
query tape, 1
smooth scatter machine, 2, 32
padding, 10
physical oracle, 2
prefix, 10
protocols
error-free infinite precision protocol, 9
error-prone arbitrary precision protocol, 9
error-prone fixed precision protocol, 9
scatter experiment, 5
cannon, 5
left collecting box, 5
right collecting box, 5
sharp scatter experiment, 6
vertex, 5
wedge, 5
smooth scatter experiment, 6
time schedule, 2, 7
CI, 2
IN, 2
TC, 2
truncating, 10
Turing barrier, 1, 30
unknown vertex position, 8
upper bound, 34