
Talk:Stochastic process/Archive 1

From Wikipedia, the free encyclopedia

I'm sorry to interrupt your nice discussion with three (3) short pieces (a comment, a story, a book passage). It might seem irrelevant; I think not. 1) The comment: There is a totally different approach to facing or treating THINGS by a mathematician and by an engineer (not to mention a physicist!). The mathematician's task is to prove that there EXISTS a solution to a specific problem. The engineer's task is to FIND a solution to a specific problem. The relevant story goes like this: Question: What is the value of pi (π)? Mathematician's answer: 3.1415926535897932384626433832795... Physicist's answer: 3.14159. Engineer's answer: about 3 (!!!) 2) The story: The wise old man of the village is talking with the local judge when a neighbour approaches and says to the wise old man, ignoring the judge entirely: "My neighbour did this to me, and I was forced to do this and this to him. Wasn't I right?" The wise old man looks firmly at him and, in the most assuring voice, says, "Yes, you were right." The man leaves satisfied. After a while the other neighbour appears, goes straight to the wise old man, again ignoring the judge entirely, and says: "My neighbour did this to me, and so I was forced to do this and this to him. Wasn't I right?" The wise old man also looks firmly at him and, in the most assuring voice, says, "Yes, you were right." The judge, having watched the whole scene, bursts out: "Look, wise old man, they cannot both be right." The wise old man stares patiently at him and, in the most assuring voice, says: "You are right too." 3) The book passage: (Unfortunately I could not find the original; someone might help with this. The passage is from Lewis Carroll's book "Through the Looking-Glass", so the following is what I recall.) "When I say something," said Humpty Dumpty, "I mean neither more nor less than what I mean." "Yes, but what counts is the meaning you are giving to your words," said Alice.
"No," said Humpty Dumpty, "what counts is who is the BOSS." -- October 25th, 2006 -- diam@duth.gr

I definitely agree with the comment of diam@duth.gr, and also with Michael Hardy's comment below. There should be a rough description of the concept of a random process understandable to non-mathematicians. In my opinion it is not so good to start with a formal 'dry' definition which seems to be written just for the sake of correctness; I think it would be better to start with a somewhat 'philosophical' discourse on random processes first, and then afterwards present a rigorous formal definition.

For example I would start as follows: In the mathematics of probability, a stochastic process or random process is a process that can be described by a probability distribution. Roughly speaking it is the counterpart of a deterministic process; instead of dealing with only one possible 'reality' of how the process might evolve over time (as is the case for solutions of an ordinary differential equation, just as an example), in a random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are several paths the process might take, with some paths more probable than others... And now one could mention prominent examples like Brownian motion (and maybe other diffusions), Markov processes, Poisson processes, and the fact that they have important applications in physics, biology, finance, etc. After all that, one could proceed with the formal definition.

I think intuitive descriptions are very useful for everyone - also for mathematicians. This is a general problem of the formal sciences: inspiration and a deeper feeling for the subject are usually far away from its precise 'crystalline' description. It is not so easy to find a good balance between both qualities - intuition and rigour. Nevertheless it is worth trying. --Uli.loewe 13:44, 12 April 2007 (UTC)


The definition given on this page for "stochastic process" as a random function is an elegant mathematical definition if one takes the viewpoint that there is a family of functions on a common domain and range and a probability measure on a sigma algebra of subsets of that family. But might a reader interpret this to mean that there could be a random choice of a function from a set of functions with different domains and ranges? (And would that fall within the intended definition?)

One way to deal with varying domains is by taking direct sums of the function spaces involved. -- [[User:Miguel|Miguel]]

Whatever the final verdict on the above point is, it would help to explain how the often seen definition of a stochastic process as "an indexed family of random variables" agrees with the idea of "random function". This would involve explaining that an indexing set can be finite, countable or uncountable. That fact may surprise someone who hasn't studied abstract mathematics.

This article needs input from other people; please add this definition if you'd like. -- Miguel


There is some inconsistency in the way books treat the term "stochastic process". Some (such as Gardiner) restrict it to a process in time. This appears to be in the same spirit as authors who say, for a vector, "In this book W will be a finite dimensional vector space" in order to set a context that is more specific than the general definition of "vector". There are books which use "random field" to include processes that take place in time. These are arbitrary conventions. It would help the reader to mention that he may encounter these inconsistencies.

I think it is best to give the broadest possible definition to which the basic techniques can be applied, and list special cases or narrower definitions later in the article. If you can add other definitions with references to the literature that will greatly improve the article. -- Miguel
I'll attempt to do that, but first I must study how the Wikipedia does things. Is there a way to use standard html editors like Netscape Composer or Mozilla's composer to create Wikipedia pages? Stephen Tashiro
Just go ahead and edit the page. Under the editing window there is an editing help link you can follow to get instructions on how to format the content and include mathematics. And remember the wikipedia policy: be bold in updating pages. -- Miguel

I can't yet expound upon the QM approach to stochastic processes. Below is what I propose for the traditional approach. If I put it all in at the top of the page, I get a warning about a 33 Kb limit, so I didn't edit the entry yet. Tashiro

Wow, that's pretty impressive. Why don't you add the content to the article a bit at a time? That way you will get around the 33Kb limit, and if you use sectioning (see the editing help about this) appropriately you will never hit the limit. Also, the rest of us will be able to work on your edits as you add them to the article more easily than if you added the whole thing at once. -- Miguel

OK, I pasted the draft into the beginning of the article. I confess that I don't know HTML or the Wikipedia editing conventions very well. My method is to compose the pages with Mozilla, view source, and cut and paste into the Wiki editing window. Some spurious blank lines have been introduced, and people who use the Wiki editor may not like the symbols introduced by Mozilla. Is there a consensus about whether using HTML editors to compose Wiki pages is a good or bad thing? Tashiro

You really have to read the Editing Help. The point of wiki is that no HTML is necessary to create a page; you just write text and add very little wiki-specific markup.
I have moved your discussion to the end of the article because it does not really have the structure of an Encyclopedia article, although there is very good content in it. I will work on merging your discussion into the existing content. The first thing I'll do is section your discussion. -- Miguel

Modified Definition

I don't find the given definition very rigorous - it is more of a description than a definition. Here's the beginnings of a new version. I still need to do some work on this as it tails off a bit towards the end...

A one-dimensional stochastic process consists of a set of random variables together with a bijective map assigning one of these random variables X_t to each member t of an index set T. All of the random variables share the same domain (a probability space Ω) and the same codomain (a measurable space S). Thus each point ω in Ω corresponds to a value for each random variable, and hence to a function t ↦ X_t(ω), known as a realisation of the stochastic process.
Technically a stochastic process is not itself a function; only a particular realisation of it is, and the mapping t ↦ X_t(ω) can be described as such. Despite this, the term random function and notations such as f(t) are convenient abbreviations.
Stochastic processes can then be discrete or continuous: a discrete stochastic process is an indexed collection of random variables X_i, where i runs over a countable index set I, while a continuous stochastic process is an uncountably infinite set of random variables, with an uncountable index set.
A particular stochastic process is determined by specifying the joint probability distributions of the various random variables f(x).
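As a rough illustration of the "random function" picture in the draft above, here is a minimal Python sketch. Everything concrete in it (the straight-line model, the constants, the function names) is invented purely for illustration and is not part of the definition: a primitive event ω is sampled once, and with ω fixed the process becomes an ordinary function of the index t.

```python
import random

def sample_omega(rng):
    # One "primitive event" omega: here just two invented numbers
    # that fix an entire realisation of the process.
    return (rng.uniform(-0.2, 0.2), rng.uniform(-0.2, 0.2))

def realisation(omega):
    # With omega fixed, the random function becomes an ordinary
    # function of the index t: here a straight line, purely for
    # illustration, running between 1013.25 + p0 and 1013.25 + p1.
    p0, p1 = omega
    return lambda t: 1013.25 + p0 + (p1 - p0) * t

rng = random.Random(0)
omega = sample_omega(rng)       # realise the process: pick one omega
f = realisation(omega)          # one realisation: a function of t alone
```

Calling `f(t)` for different t reads different data out of the one realisation; sampling a new ω gives a different function of t.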

Comments and opinions please...! SgtThroat 15:21, 12 Dec 2004 (UTC)

  • In an encyclopedia article, clarity should come first and rigor later. Michael Hardy 19:56, 12 Dec 2004 (UTC)

I quite agree, but in a mathematical article such as this anything that claims to be a definition should be rigorous; compare for example the articles on rings or sigma-algebras, both of which start off with a clear definition. I suppose that what I am really saying is that I find the definition given unclear as well as un-rigorous, perhaps with the first as a consequence of the second. As an alternative suggestion, how about calling the 'definition' a 'description' (although I still feel that it needs work), then placing a proper definition later on..? SgtThroat 13:37, 13 Dec 2004 (UTC)

But should one always begin with a definition? Sometimes clarity might be better served by beginning with an intuitive description and putting the rigorous definition after some discussion that enables the reader to understand the definition. Michael Hardy 22:33, 13 Dec 2004 (UTC)

Once again I was not precise enough with my language - I meant 'further down' when I wrote 'later on', so I think we are in agreement. Thus I propose putting the definition further down. This necessitates renaming the existing 'definition' (which I don't think is appropriate anyway) - any suggestions? Also I'd like comment on the definition above - is it completely correct?SgtThroat 01:01, 14 Dec 2004 (UTC)

I have now changed the "definition" section to "Common stochastic processes" and removed some of what I felt was rather confusing language. I've put in an example, probably in the wrong place, which I hope gives a feel for the index set and the domain of the random variables. I plan to rework the above "modified definition" to remove some stuff that is probably redundant due to overlap with the reworked first section of the article, then include it towards the end of the article. I'd like to hear opinions and comments on both parts. SgtThroat 20:30, 15 Dec 2004 (UTC)

Moved the Brownian motion example into the following section, which I renamed "examples". I then removed the empty examples section that was already there. Forgot to add an edit summary... SgtThroat 20:51, 15 Dec 2004 (UTC)

Concerning the definition of a stochastic process, I agree with Miguel's point of view in the general discussion above. In my opinion the present definition as an L^0-valued function is a little bit too technical; moreover, it does not explain why one should consider a sample path as an equivalence class mod 0. Of course, when dealing with deep measure theory this point of view might be the proper setting, but it does not make much sense in an introductory essay. I understand probability theory not as a structure theory like algebra or differential geometry, for which (but not even there!) there exists a broad common agreement about the 'right' formal framework. Like Miguel, I suggest taking the plain and simple definition of a random process as an (indexed) family of measurable functions defined on a probability space. This definition should be broad enough for most articles in this encyclopedia, and if some articles need more specific definitions they can still introduce them. If the others here agree, I will do the changes. --Uli.loewe 08:41, 13 April 2007 (UTC)

I forgot to mention: to my knowledge L^0 is the space of all measurable functions (mod 0), not necessarily bounded. Bounded functions are usually (but not always, I agree) understood as uniformly bounded, which means that their (essential) supremum is finite. --Uli.loewe 09:03, 13 April 2007 (UTC)

The last section

The last section is too long. How about spinning it off as a separate article? -wshun 03:07, 14 Nov 2003 (UTC)

I'm putting it here for the time being. This will allow us to fix the formatting problems, and also to move content gradually from here to the main text. I don't think a new article is necessary, but maybe the need for it will become apparent in the process. -- Miguel 17:17, 14 Nov 2003 (UTC)

Informal Discussion

Several different definitions are found for "stochastic process" in the mathematical literature. This is not particularly scandalous. Some mathematical definitions describe things which have many specific properties that are crucial in writing mathematical proofs. (For an example of such a definition, see the entry for vector space in the Wikipedia.) Other mathematical definitions do not provide much specific information. The traditional definitions of "stochastic process" fall in this latter category. Even a book whose subject is "stochastic processes" may treat the definition rather casually. Such texts choose to add details later by defining special cases of the general concept. In order to explain the distinctions among the various definitions of "stochastic process" it is best to begin with an informal discussion.

One purpose of defining the term "stochastic process" is to create terminology that is broad enough to describe a random phenomenon that produces an infinite amount of data each time it occurs. Another goal is to have terminology that is narrow enough to describe situations where the data is, in a manner of speaking, all of the same format (e.g. it might all be prices in dollars, or it might all be measurements of air pressure in millibars). We give some examples of physical situations that can be viewed as a stochastic process.

Example 1: Consider air pressures in millibars at the local airport from 6:00AM to 7:00AM. Assuming that time is a continuum, there are an infinite number of times between 6:00AM and 7:00AM. A single occurrence ("realization") of the process is the infinite set of pressures that occurs on a particular day. Each of these pressures is a datum in the same format as the others.

Example 2: Suppose we hand out 8.5" by 11" sheets of white paper to each member of an audience and ask them to draw a picture. Let us take the simplistic view that the underlying process that creates the picture is the same for each member of the audience. Each picture we receive is a realization of this random phenomenon. Each picture contains an infinite amount of data if we take the idea of space as a continuum seriously. At each location (x,y) in the picture there is a certain color value that is part of the realized data. There are an infinite number of such locations (x,y) on a sheet of 8.5" by 11" paper. To describe a color we may need more than a single number. Suppose the color datum at location (x,y) is given by a triplet of numbers (r,g,b) that measure the red, green and blue intensities. Since the data has this form at all locations, we can think of it as being "all of the same format".

In practical applications, a theoretical infinity of data is often approximated by a large but finite amount of data. The concept of "stochastic process" includes those cases where the amount of data produced is finite. Suppose that instead of the continuous readings of example 1, the pressure readings are recorded by a device that records one reading per second. Then each realization (a recording for an hour) produces 3600 readings. We may consider this situation to be a stochastic process. (We might also consider it to be a multivariate random variable with 3600 components.) Likewise the images in example 2 might be recorded with a scanner. This would produce a file which has (r,g,b) data for a large but finite number of pixels.

It is important to understand that the same practical problem can be described in mathematics in different ways.    Mathematics itself does not specify a unique translation of a physical situation into mathematical terms.   We give some examples of such ambiguity.

Example 3: Suppose we measure the height of a randomly selected person. We may think of this as a process that produces a single datum, the person's height, each time we perform it. The most common mathematical treatment of this situation is to view it as a realization of a random variable. It is also possible to view it as a stochastic process that produces one datum on each realization. However, it is not customary to do this.

Example 4: Suppose we perform an experiment where we measure the weight, height and temperature of a randomly selected person. This is usually viewed as a realization of a multivariate random variable which has three components. It is also possible to view this as a stochastic process that produces one datum, the triplet (weight, height, temperature), on each realization. However, it is not customary to do this.

Some authors [Neftci] view a stochastic process as a "random function" in the following manner. A stochastic process is considered to be a function of two variables f(t,w). The first variable t describes which datum we wish to examine from all the data produced by one realization of the process. The second variable w represents which specific realization of the process occurred.

In example 1, we may think of w as a datum that specifies information such as "January 3, 2002 6:00AM to 7:00AM". The variable t would be used to indicate a specific time. For example, one possible value of t is 6:03 AM. From this point of view, the realization of the process consists of picking a specific value of w. Then, with w fixed, the function f(t,w) becomes a function of t alone. So a single realization of the process is a specific function of t.

To view example 2 as a "random function", we consider it to be f(s,w). We let s be a vector (x,y) giving a location on the picture. We let w be a datum that describes a specific picture, such as "picture by Wilbur Semismith, completed January 3, 2003 9:42 AM". We consider f(s,w) to be a vector-valued function whose range is the set of 3-dimensional vectors that give the (r,g,b) data.

The values of most of the variables in the above examples are familiar mathematical quantities, such as numbers or vectors. But the reader may have difficulty conceptualizing the nature of the variable w and stating exactly what possible values it may have. The possible values of w are taken from a probability space. Roughly speaking, the "probability space" refers to three things: 1) a set of things that we may think of as "primitive" events, 2) a collection of subsets of the primitive events, and 3) a function or algorithm that is able to assign a probability to each of these subsets. In the simple examples above, we can only give a partial description of the probability space. In example 1, the primitive events can be described as "all possible 6 to 7 AM time periods at the airport". In example 2, the primitive events can be described as "all possible pictures that people might draw on 8.5" by 11" paper". These descriptions dodge the question of which subsets of primitive events can be assigned a probability and how this might be done. A primitive event in the probability space should determine all the values of the random phenomenon. For instance, in example 1 it would not be correct to say that a pressure reading of 1013.25 millibars at 6:03 AM is a primitive event. Giving the air pressure at a single time would not determine it at the other times. In example 2, a primitive event would not be "all the colors a person might draw at some location on the paper" or "all the images a person might draw in the upper left hand corner of the picture". A primitive event should determine the whole picture.
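For a finite toy case, the three ingredients of a probability space can be spelled out completely. The sketch below (two fair coin tosses; the example and all names are invented here, not taken from the discussion) writes out the set of primitive events, treats every subset as an assignable event, and builds the probability function from equal weights:

```python
from fractions import Fraction
from itertools import product

# 1) The set S of primitive events: all outcomes of two fair coin
#    tosses. Each primitive event determines the WHOLE realization
#    (both tosses), not just one of them.
S = set(product(["heads", "tails"], repeat=2))

# 2) The collection of subsets to which probabilities are assigned:
#    for a finite S we may simply allow every subset of S.
# 3) A function m assigning a probability to each such subset,
#    built from weight 1/4 on each primitive event.
def m(event):
    assert event <= S          # only subsets of S are legal events
    return Fraction(len(event), len(S))

# An event such as "the first toss is heads" is a subset of S.
first_toss_heads = {s for s in S if s[0] == "heads"}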

In practical applications of stochastic processes, there is often a quantitative description of the probability space. For example, one may assume that a specific formula or algorithm generates the occurrences of the process. The algorithm will usually involve taking realizations of random variables and doing certain computations with the results to arrive at the realization of the stochastic process. In such a case, the primitive events are the set of all possible realizations of the random variables employed by the algorithm. The probabilities involved are computed from the joint distribution of these random variables.

Example 5: For the sake of having a simple example, assume that nature generates the air pressures of example 1 according to the following scheme. Pick two air pressures in millibars by selecting two random numbers p0 and p1 from a uniform probability distribution on the interval -0.2 to 0.2. Let the resulting pressure readings be given by a pressure-vs-time graph that is a straight line connecting the points (6:00 AM, 1013.25 + p0) and (7:00 AM, 1013.25 + p1).

We may view the stochastic process of example 5 as a function f(t,w) where w is the vector (p0,p1). A primitive event is the selection of specific values for p0 and p1. The probabilities of various subsets of primitive events can be computed from the joint distribution of (p0,p1), which is the product of two uniform distributions since we have assumed p0 and p1 are independent. For example, we can compute the probability of the subset "p0 > 0.0 and p1 > 0.1".
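That probability can be computed both exactly and by simulation. The sketch below assumes, as in example 5, that the two perturbations are independent and uniform on (-0.2, 0.2); the event is "the left-end perturbation exceeds 0.0 and the right-end perturbation exceeds 0.1" (the variable names and sample sizes are choices made here for illustration):

```python
import random

LOW, HIGH = -0.2, 0.2
WIDTH = HIGH - LOW

# Exact: independence makes the joint probability a product of
# interval lengths: (0.2 / 0.4) * (0.1 / 0.4) = 0.5 * 0.25.
p_exact = ((HIGH - 0.0) / WIDTH) * ((HIGH - 0.1) / WIDTH)

# Monte Carlo estimate of the same event.
rng = random.Random(42)
trials = 200_000
hits = 0
for _ in range(trials):
    p0 = rng.uniform(LOW, HIGH)
    p1 = rng.uniform(LOW, HIGH)
    if p0 > 0.0 and p1 > 0.1:
        hits += 1
p_mc = hits / trials
```

With 200,000 trials the estimate should agree with the exact value 0.125 to within about a thousandth.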


Many authors ([Doob], [Iyanaga and Kawada], [Karlin and Taylor], [Parzen]) define a stochastic process as an "indexed collection of random variables". The idea of "random function" can be reconciled with this viewpoint. If f(t,w) is a "random function", the variable t is viewed as an index and the possible values of t are viewed as an "index set". One may think of t as providing a way to index a specific datum in all the data produced by one realization of the stochastic process.

In example 1, t is a time and the index set is the set of all times between 6:00 AM and 7:00 AM. (One should not assume that an "index set" must be a set of integers. Indexing by a set of integers is often used in computer programming, as when we index an array A by integers in order to refer to A[1], A[2], etc. However, the concept of "index set" in a stochastic process is more general than this: the index set can be any set at all.) The pressure at a specific time, such as 6:03 AM, can be viewed as a single random variable, since the pressure at this time will be different on different days. We think of the stochastic process as an infinite family of random variables X(t) indexed by the time t. A random variable has a domain and a range. A realization of a variable like X(6:03 AM) is a single real number, so we may say its range is the set of numbers that are possible pressure readings. The domain of the random variable X(6:03 AM) is not the set of times, even though the notation X(t) makes it appear this way. The domain of X(6:03 AM) is the probability space for the phenomenon. In this example, an element of the domain is a datum that describes a specific date at the airport, such as "January 3, 2002 6:00AM to 7:00AM".


In example 2, once the random picture is realized, we can refer to a single datum by giving its location as a 2-dimensional vector s = (x,y). The set of all locations on the image is the "index set". (The index set in this example does not consist of integers; a location such as (1.345, 4.019) is considered to be an index.) The datum at each location may be viewed as a random variable X(s). The range of each random variable X(s) is the set of all 3-dimensional vectors of color information (r,g,b). An element of the domain of each X(s) is a datum that describes a specific picture, such as "picture by Wilbur Semismith, completed January 3, 2003 9:42 AM".

Another way of looking at example 2 is to let the index set be the set of all (x,y,k) where (x,y) defines a location on the image and k is 1, 2 or 3 depending on whether we wish to index the red, green or blue datum. Then an individual random variable X(s), with s = (x,y,k), has a range that is the set of real numbers that describe a color intensity (instead of the set of 3-dimensional vectors of such numbers). This somewhat goes against the idea of having each datum in the range of the random variables be "all of the same format", since we might not consider "red color intensity" information to have the same physical meaning as "blue color intensity". However, if we decide to think of all these data as random variables whose range is the set of real numbers, then we may do this.

Example 5 can be interpreted as an "indexed" collection of random variables in the same way as example 1, except that the primitive events in the probability space are given by the set of possible values for p0 and p1. (Assigning specific values to p0 and p1 determines all the pressure readings.) This is the domain of each X(t). The range of X(t) is the set of numbers that give a single pressure reading.

In the "indexed collection of random variables" view of a stochastic process, all the variables have the same domain and range. The fact that they have the same range implements the idea that the phenomenon produces data that is "all of the same format". The fact that they have the same domain indicates that they are all realized when a single primitive event in the probability space is realized. For instance, in example 1, X(6:00 AM) and X(6:30 AM) denote the pressure readings at two different times. A realization of the process consists of picking a particular day at the airport. The realization of X(6:00 AM) and X(6:30 AM) gives the pressure readings at those times on that one date. We do not think of a realization of the process as measuring a value for X(6:00 AM) on one day and then picking a different day to measure the value of X(6:30 AM).

It is not correct to think of each variable in the "indexed collection of random variables" as necessarily being the "same" random variable realized over and over again. Two random variables in the collection must be "the same" only in two respects: they must have the same range and the same domain. However, they need not be independent of each other. In example 5, the measurement X(6:00 AM) is completely determined by the choice of a value for p0, whereas the measurement X(6:30 AM) depends on the choices of both p0 and p1. The measurement X(6:00 AM) can be near the maximum pressure of 1013.25 + 0.2 if p0 is near its maximum of 0.2. But the measurement X(6:30 AM) cannot be near the maximum unless both p0 and p1 are near their maxima (i.e. unless the linear graph is high at both ends). This suggests that X(6:30 AM) is not independent of X(6:00 AM), since both depend on p0. It also suggests that their marginal probability distributions are not the same. Writing the formula for the joint distribution of X(6:00 AM) and X(6:30 AM) is not a simple task, even for a person experienced in probability theory. However, a reader who is familiar with computer programming should be able to write a Monte Carlo simulation of this example and investigate the dependence of the two measurements.
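A Monte Carlo simulation of this kind can be sketched in a few lines of Python. The uniform-on-(-0.2, 0.2) model and the straight-line graph follow example 5; the sample size, seed, and function names are choices made here for illustration. Under this model X(6:00 AM) = 1013.25 + p0 and X(6:30 AM) is the midpoint of the line, 1013.25 + (p0 + p1)/2, so their correlation should come out near 1/sqrt(2):

```python
import math
import random

def realize(rng):
    # One realization of example 5: choose p0 and p1, then read the
    # straight-line pressure graph at 6:00 AM and at 6:30 AM.
    p0 = rng.uniform(-0.2, 0.2)
    p1 = rng.uniform(-0.2, 0.2)
    x_600 = 1013.25 + p0                # left endpoint of the line
    x_630 = 1013.25 + (p0 + p1) / 2.0   # midpoint of the line
    return x_600, x_630

rng = random.Random(1)
n = 100_000
pairs = [realize(rng) for _ in range(n)]
xs = [a for a, _ in pairs]
ys = [b for _, b in pairs]

# Sample Pearson correlation between the two measurements.
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
corr = cov / (sx * sy)   # theory predicts 1 / sqrt(2), roughly 0.707
```

A clearly positive correlation confirms numerically that X(6:00 AM) and X(6:30 AM) are not independent.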
 
Consider an attempt to model the process of example 2 by using a single random variable. Suppose we scan a large number of sample pictures into pixels. Then we create a histogram of how often each (r,g,b) datum occurred in their pixels. To realize a random picture, we randomly select an (r,g,b) value for each pixel from this histogram, according to the frequency with which the various (r,g,b) vectors occur. Most of the images that we would create this way would be a cloudy mess. They would lack the images of people, houses, flowers and dogs that appear in pictures drawn by human beings. Stochastic processes like example 2, whose realizations typically contain a high degree of organization and structure, are poorly approximated by making each X(t) an independent realization of the same random variable.
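The loss of structure can be demonstrated numerically with a one-dimensional stand-in for a picture. In the sketch below (the signal, its parameters, and the helper function are all invented for illustration), a "structured picture" is a slow wave whose neighbouring values resemble each other; drawing each value independently from the empirical histogram of values is, for the purpose of this comparison, the same as shuffling the values, and it destroys that neighbour-to-neighbour resemblance:

```python
import math
import random

def lag1_corr(xs):
    # Sample correlation between neighbouring values in a sequence:
    # high for organized data, near zero for independent draws.
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rng = random.Random(7)
# A "structured picture" in one dimension: a slow wave plus small noise.
signal = [math.sin(i / 20.0) + rng.gauss(0, 0.1) for i in range(2000)]

# Independent draws from the empirical histogram of values amount,
# for this comparison, to shuffling the values.
shuffled = list(signal)
rng.shuffle(shuffled)

structured = lag1_corr(signal)     # high: neighbours resemble each other
independent = lag1_corr(shuffled)  # near zero: structure destroyed
```

The marginal histogram of the two sequences is identical by construction; only the joint structure differs, which is exactly the point of the paragraph above.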



The following situation is discussed in most probability texts. Certain aspects of it are misleading if the reader erroneously assumes they apply to all stochastic processes.

Example 6: Consider tossing a coin and suppose the probability that it lands heads is a known probability p. (The example of tossing a thumbtack is also used by authors who wish to make it clear that p need not be 0.5.) A single toss of a coin is usually viewed as a realization of a random variable. A fixed number of tosses of a coin can be viewed as a multivariate random variable. Suppose we wish to consider a question such as "What is the probability that we must make more than 30 tosses before getting the first occurrence of 'heads'?" Then we must consider the situation where a coin is tossed over and over again an unlimited number of times. The usual way to view this is to consider the repeated tossing of the coin to be a stochastic process. Each toss is a datum that is either "heads" or "tails". A realization of the process is one particular infinite sequence of such data.

We may think of example 6 as a random function f(t,w). The variable t takes on integer values 1, 2, 3, ... depending on which toss in the infinite sequence of tosses we wish to examine. The variable w must be a primitive event in the probability space. Since an infinite sequence of coin tosses is a conceptual experiment rather than an actual one, we don't define a primitive event in the probability space to be an event like "Tosses begun by Lula Mumshelter on January 3, 2002 9:42 AM". The customary way to define the primitive events for coin tossing is to say that they form the set S of all possible infinite sequences of the form {r1,r2,r3,...} where each ri is either "heads" or "tails". Notice that the general concept of a stochastic process has no requirement that a realization of the process contain exactly the same information as a primitive event. However, this is a special feature of example 6: in coin tossing, both w and f(t,w) describe the results of a particular infinite sequence of coin tosses. To meet all the requirements of a probability space we must be able to compute probabilities. The technical details will not be given here. Probability texts give examples of computing the probabilities of various subsets of primitive events. For example, texts show how to answer questions like "What is the probability that there are at least 30 tails before the first head?". This is the probability of the subset of S consisting of all sequences whose first 30 terms are "tails" and whose remaining terms contain at least one "head".
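The question just quoted has a short closed form: the event happens exactly when the first k tosses are all tails, so its probability is (1 - p)^k. The sketch below (the fair-coin value p = 0.5 and the Monte Carlo check at the smaller, more observable k = 5 are choices made here, not part of the text) computes the exact answer for k = 30 and checks the formula by simulation:

```python
import random

def prob_at_least_k_tails_first(p, k):
    # P(more than k tosses are needed before the first head)
    # = P(the first k tosses are all tails) = (1 - p)**k.
    return (1.0 - p) ** k

# Exact answer for a fair coin and the 30 tails asked about above.
exact_30 = prob_at_least_k_tails_first(0.5, 30)   # about 9.3e-10

# Monte Carlo sanity check of the same formula at k = 5, where the
# event is frequent enough to observe directly.
rng = random.Random(3)
trials = 200_000
hits = sum(
    all(rng.random() >= 0.5 for _ in range(5))   # five tails in a row
    for _ in range(trials)
)
mc_5 = hits / trials   # should be close to (0.5)**5 = 0.03125
```

The k = 30 event is far too rare to estimate by direct simulation, which is precisely why the closed form is useful.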


If we think of example 6 as an "indexed collection of random variables", then the index set can be defined as the set of integers {1,2,3,...}. As mentioned above, the index set for a process need not be a set of integers, but in this particular example it is. We can define the random variable X(k) to be the result of the kth toss of the coin. The range of each random variable X(k) is the set of two things {heads, tails}. The domain of each random variable is the set S of infinite sequences of heads or tails. (The domain is not merely the set of two things {heads, tails}. Remember that an event in the domain must determine the entire realization of the random process, that is, all the data for the entire collection of random variables.) As remarked above, this example is unusual in that one realization of the indexed collection of random variables can be identified with a primitive event in the probability space associated with w.

In example 6, a reader may think of the process as "realizing the same random variable over and over again". As we pointed out earlier, this is not a correct view of the general stochastic process. However, in example 6 random variables such as X(1) and X(2) are independent, and they do have the same marginal distributions. (The phrase "marginal distribution" is required if we wish to express the idea that the probability that X(1) is "heads" is the same as the probability that X(2) is "heads". To assign probabilities to the events "heads" and "tails" we need a distribution whose domain is the set of two events {heads, tails}. As remarked above, this set of two events is not the domain of the X(i). The marginal distribution of X(1) over S is used to find the probability that X(1) is heads while the other X(i) take any values whatsoever. So we may say the marginal distribution of X(1) has domain {heads, tails}. The marginal distribution of X(2) has the same domain. Saying that the marginal distributions are the same correctly expresses the idea.)
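Both claims, equal marginals and independence, can be checked empirically. This hedged Python sketch (function name and setup are illustrative) samples many realizations and compares the observed frequency of heads for X(1) and X(2), and the joint frequency against the product of marginals.

```python
import random

def estimate_marginals(trials=100_000, seed=1):
    """Empirically check that X(1) and X(2) have the same marginal
    distribution and are independent, by sampling many realizations."""
    rng = random.Random(seed)
    h1 = h2 = both = 0
    for _ in range(trials):
        x1 = rng.random() < 0.5   # X(1) is heads
        x2 = rng.random() < 0.5   # X(2) is heads
        h1 += x1
        h2 += x2
        both += x1 and x2
    return h1 / trials, h2 / trials, both / trials

p1, p2, p12 = estimate_marginals()
# p1 and p2 should both be near 0.5 (same marginals),
# and p12 should be near p1 * p2 (independence).
```

The simulation only illustrates the coin-tossing case; for a general process the marginals may differ across indices and the variables need not be independent.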


If the reader wishes to be reminded of the general definition of a "stochastic process" by memorizing only one example, it would be best not to choose example 6. Coin tossing has many features that are not typical of more general stochastic processes.

Mathematical literature contains variations of the definitions that we have sketched above.   Many authors (e.g. [Doob] [Parzen] ) do not explicitly say  that each member of the "indexed collection of random variables" must have the same probability space.  However, in studying specific stochastic processes they make additional definitions that do require this.  Some authors (e.g. [Gardiner] ) say the  index set must represent time.    Some authors say that the random variables must be real valued  (e.g. [Iyanaga and Kawada]).   The definition we will give is consistent with the above informal discussions.

Definition

Let P be a probability space consisting of (S, E, m), where S is a set, E is a sigma algebra of subsets of S, and m is a probability measure defined on E. Let R be a set.
Let X be a collection of random variables indexed by some index set T, having the property that each X(t), for t in T, is a random variable whose domain is the probability space P and whose range is R. Then X is a stochastic process on P with index set T.
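The formal definition is essentially a map t → X(t), where each X(t) is itself a map S → R. A minimal Python sketch of that type structure (all names here are invented for illustration, and S is represented by finite tuples):

```python
def make_process(index_set, kernel):
    """Sketch of the definition: a stochastic process is a map
    t -> X(t), where each X(t) : S -> R.  Here kernel(t, s) gives
    the value of the process at index t for sample point s."""
    return {t: (lambda s, t=t: kernel(t, s)) for t in index_set}

# Coin-tossing instance: sample points s are toss sequences, R = {heads, tails}.
coin = make_process(range(1, 6), lambda t, s: s[t - 1])
s = ("heads", "tails", "tails", "heads", "tails")
print(coin[2](s))   # tails
```

Each `coin[t]` shares the same domain (the set of sequences s) and the same range, matching the requirement that every X(t) is defined on the one probability space P.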

A stochastic process whose index set represents time is called a Time Series.  The use of the word "Series" does not imply that the data must necessarily be indexed by integers.  Both the case where T is a set of integers and the case where T is the set of real numbers are studied.   Example 1 can be regarded as a Time Series.

The term Random Field is often used as a synonym for "stochastic process", especially when an author wishes to emphasize that the index set T need not represent time. The word "Field" does not necessarily imply that a Random Field represents something like an electric or magnetic field, although there are many publications that do apply the theory of stochastic processes to such topics. Example 2 can be regarded as a random field.

You might want to remove stock markets and heart rate from the list of processes. They show characteristics of chaotic systems. See the article "A Multifractal Walk down Wall Street", February 1999, by Mandelbrot. I have also heard that one reason people like Bach's Brandenburg concertos is that they are fractal and even mimic the heart rate. --66.44.104.246 12:57, 3 Aug 2004 (UTC)

The algebraic approach

The section titled "The Algebraic Approach" seems to contain an attempt to axiomatically define random variables (only for the complex valued case) and expectations for these. This is largely unrelated to the topic of the page, i.e. to "stochastic processes".

Also the paragraph claims "One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones." Can anybody make sense of this? The approach given seems to be only for one-dimensional random variables. And what might be the meaning of "apparently infinite-dimensional"?

My suggestion would be to simply remove this section titled "The Algebraic Approach". --Jochen 00:45, 28 Nov 2004 (UTC)

Remove it, as it is half-cooked. However, this is what's going on. There are two approaches to measure theory and integration. In the geometric approach, one defines first what is meant by measurable sets and measures on them, and then uses measures to define integrals and integrable functions. In the algebraic approach, one starts by defining an algebra of integrable functions and the integral as a positive linear functional on that algebra. Then, measurable sets are those whose characteristic function is integrable, and their measure is the integral of their characteristic function. The geometric approach gets harder and harder when vector measures on high-dimensional measurable sets are involved, while the algebraic approach is no harder for Banach-space valued measures on infinite-dimensional spaces than it is for real random variables on [0,1].

Kolmogorov's axioms for probability theory are analogous to the geometric approach to measure theory, and the Kolmogorov extension theorem constructs a sigma-algebra of measurable sets and a probability measure on it given the finite-dimensional distributions of a stochastic process. There is an alternative algebraic axiomatization of probability theory in terms of algebras, and an alternative "extension theorem" within that framework.

But, like I said, the current section is half-baked, so it would be fair to remove it.

Miguel 02:02, 2004 Nov 28 (UTC)

I have moved that material to algebra of random variables and created links from five other pages to that page. Michael Hardy 02:29, 28 Nov 2004 (UTC)

If you know the limits of X[n]=f(X[n-1]) and Y[n]=g(Y[n-1]) as n->00 what do you know about the random function Z[n]=50%*f(Z[n-1])+50%*g(Z[n-1])?--SurrealWarrior 01:05, 25 Jun 2005 (UTC)


Codomain D

the codomain D, often the real numbers R. If D could be something other than R, can somebody please give an example? In the above example 2, D is a vector (r, g, b); anyway, r, g, b are still real numbers. Is there any example where D has nothing to do with the real numbers? Thanks. Jackzhp 17:49, 31 August 2006 (UTC)

Calculus on stochastic process Stochastic calculus

In example 1, the stochastic process can be expressed as f(t, w). What does it mean to take the derivative over t, and the integral over t? Jackzhp 00:32, 1 September 2006 (UTC)

Entirely new version of this article?

As I already mentioned in my earlier comments, I am not very happy with the article in its present form. Please don't get me wrong, I appreciate all the work that has been done! But I believe that such a prominent notion as a stochastic process deserves a more extensive treatise. I think an introduction to this topic should include
1) First of all, and very importantly, an intuitive description of a random process (as I proposed in one of my earlier comments)
2) A formal description which is not too technical but broad enough to serve the needs of most articles which deal with such processes.
3) A list of beautiful examples (maybe one/some explained in detail, like a Markov chain, even if it seems to be redundant). Pictures (like a generic sample path of a Brownian motion, for example) would be great!

The Kolmogorov extension theorem is a topic which provides enough material for a separate article.

Again, the big question is for which audience the article should be written. I prefer not to be too technical (also from the point of view of a pure mathematician); there is always enough space and time to do this later on whenever it seems reasonable. What I mean is the following: probably no one who is not a little bit into mathematics will look up the Kolmogorov extension theorem. There it makes sense to be more technical. But with such general notions as a stochastic process it is more likely (and in my opinion also the purpose of a wikipedia article) to meet the interest of a general (non-specialist) audience, as many people have heard of such processes and would like to know a little bit about them. Their needs should be served first, and afterwards we can go more into the math. I think it is possible to write an article which satisfies both mathematicians and those who are not. It will not be so easy, but it is worth trying.
What do others think? Doing it all alone is a lot of work; I would be happy for suggestions, and help of course! --Uli.loewe 09:44, 13 April 2007 (UTC)

The suggestions are all very prudent, but perhaps, you shouldn't use the title entirely new version of the article - such things are better accomplished in stages. Arcfrk 08:29, 21 April 2007 (UTC)
You are right! I apologize for my inadequate formulation. This comes from not knowing how wikipedia articles grow. --Uli.loewe 16:50, 21 April 2007 (UTC)
No problem! Be bold and improve the article. Arcfrk 23:42, 21 April 2007 (UTC)
I just tried to be bold, hopefully not too much, and changed a little bit the introduction. --Uli.loewe 14:28, 22 April 2007 (UTC)
Good start! Are you going to follow it up with simplifying the main body of the article, as you outlined in your earlier comments? Arcfrk 23:11, 23 April 2007 (UTC)
Yes, I would like to do that! I hope I will find enough time the following days. --Uli.loewe 10:34, 24 April 2007 (UTC)

Hello! I have a small suggestion if you are still editing. I'm a student in Econ getting my MA and I haven't really been able to get an answer to this one question: Why are stochastic processes so important? As is often the case in grad school, I find myself learning all of this stuff and then wondering, "And what was the point of this again?" Sometimes the relevance of the technique gets lost in the carrying out of the technique. Could you perhaps expand the beginning a bit to explain why this is so important? For example, could you put it into context by saying what statisticians were limited to doing prior to the idea of stochastics, who invented it and what problem they were trying to solve that led them to the technique, and conceptually why it is so powerful? I would start it, but I need to learn how this wiki thing works a bit more first. I just got a username to post this comment! Thanks! --Smarzo 15:22, 2 May 2007 (UTC)

Where did the word Stochastic come from?

What is the root, or where did the word Stochastic originate? Latin 'Sto' may be translated as 'place' or 'cause to stand', but 'chastic' does not have a common definition. Does anyone know? Evan 80.249.76.34 09:28, 11 July 2007 (UTC)

According to Dictionary.com and Online Etymology dictionary
1662, "pertaining to conjecture," from Gk. stokhastikos "able to guess, conjecturing," from stokhazesthai "guess," from stokhos "a guess, aim, target, mark," lit. "pointed stick set up for archers to shoot at" (see sting). The sense of "randomly determined" is first recorded 1934, from Ger. Stochastik.
The Latin Sto/Stare is probably unrelated. --Mcorazao 20:18, 8 October 2007 (UTC)

modification of a stochastic process

It's not clear to me how a modification of a stochastic process is relevant right at the beginning. At each point of time the process has a random variable which must coincide almost everywhere with the one from the modified process. Did I say that right? --MarSch (talk) 11:12, 19 June 2008 (UTC)

The definition that is written in the article is simply incorrect (I have fixed it once already, but someone has changed it back). A process is a function T --> (random variables with values in X), whereas a modification is a random variable which attains values in X^T. Therefore, a modification is a stronger notion (not every process has a (measurable) modification). --Sodin (talk) 17:08, 6 July 2008 (UTC)

Bad definition?

I think that the definition of stochastic processes used here is not exact.

According to that, a stochastic process is just a set of random variables, without any indexing.

This definition seems to work at first glance because it uses the notation {X_t : t in T}, which is implicitly assumed to be predefined, but if you look better, it is only a "flat" set that doesn't describe the whole stochastic process.

Actually, the stochastic process is not the set but that implicitly predefined family of random variables (X_t)_{t in T}. I think it should be explicitly defined that way. What do you think?

132.231.132.110 (talk) 13:54, 14 July 2008 (UTC)

The definition of s.p. is OK; what is wrong is the definition of modification. A modification is a random variable with values in X^T (which is a stronger notion; the difference is esp. important when T is "large"). I am a bit wary of fixing it, since every time it returns to some incorrect form. Sasha
I have thought another minute about what you wrote. You are right, but this is standard notation. Writing {X_t : t in T}, people usually mean a function from T to the set of objects (and not just the set). Is this what bothers you? Sasha —Preceding unsigned comment added by Sodin (talkcontribs) 17:43, 21 August 2008 (UTC)

My favorite definition of 'stochastic'

My developmental neurobiology teacher once defined "stochastic process" as "a process we don't understand yet" (compare to "random process"). —Preceding unsigned comment added by 71.172.120.144 (talk) 02:20, 26 November 2008 (UTC)

Conway in the "See also" section

Has John Horton Conway anything to do with stochastic processes?? —Preceding unsigned comment added by 139.82.27.31 (talk) 22:12, 23 March 2010 (UTC)

Separability, or what the Kolmogorov extension does not provide

"One solution to this problem is to require that the stochastic process be separable. In other words, that there be some countable set of coordinates {f(xi)} whose values determine the whole random function f." — Rather rough. Consider for instance the process whose values at all rational points are the same (random) X, and at all irrational points the value 2X. The countable set of coordinates does determine, but not in the way required in the definition of separable process. The definition is more technical, see Springer. Boris Tsirelson (talk) 07:15, 26 September 2011 (UTC)
