Theory of Probability and Mathematical Statistics
1. THEORETICAL PART
1.1 Convergence of sequences of random variables and probability distributions
In probability theory one has to deal with several types of convergence of random variables. We consider the following main types: convergence in probability, convergence with probability one, convergence in mean of order p, and convergence in distribution.
Let ξ, ξ₁, ξ₂, … be random variables defined on some probability space (Ω, F, P).
Definition 1. A sequence of random variables ξ₁, ξ₂, … is said to converge in probability to a random variable ξ (notation: ξₙ →P ξ) if for any ε > 0
P(|ξₙ − ξ| > ε) → 0, n → ∞.
Definition 2. A sequence of random variables ξ₁, ξ₂, … is said to converge with probability one (almost surely, almost everywhere) to a random variable ξ if
P(ω : ξₙ(ω) ↛ ξ(ω)) = 0,
that is, if the set of outcomes ω for which ξₙ(ω) does not converge to ξ(ω) has zero probability.
This type of convergence is denoted as follows: ξₙ → ξ (P-a.s.), or ξₙ → ξ (a.s.), or ξₙ → ξ (a.e.).
Definition 3. A sequence of random variables ξ₁, ξ₂, … is said to converge in mean of order p, 0 < p < ∞, to a random variable ξ if
M|ξₙ − ξ|^p → 0, n → ∞.
Definition 4. A sequence of random variables ξ₁, ξ₂, … is said to converge in distribution to a random variable ξ (notation: ξₙ →d ξ) if for any bounded continuous function f
M f(ξₙ) → M f(ξ), n → ∞.
Convergence in distribution of random variables is defined only in terms of the convergence of their distribution functions. Therefore, it makes sense to speak of this type of convergence even when the random variables are defined on different probability spaces.
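As a numerical illustration of convergence in probability (a sketch in Python; the fair-coin setup and all constants are chosen purely for the illustration), one can estimate P(|ξₙ − ξ| > ε) for ξₙ the mean of n coin tosses and ξ = 1/2:

```python
import random

def prob_deviation(n, eps, trials=2000, seed=1):
    """Monte Carlo estimate of P(|xi_n - 1/2| > eps),
    where xi_n is the mean of n fair coin flips."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        mean = sum(rng.randint(0, 1) for _ in range(n)) / n
        if abs(mean - 0.5) > eps:
            count += 1
    return count / trials

for n in (10, 100, 1000):
    print(n, prob_deviation(n, 0.1))
```

By Definition 1 the estimated probabilities should shrink toward zero as n grows, which the run exhibits.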
Theorem 1.
a) ξₙ → ξ (P-a.s.) if and only if for any ε > 0
P(sup_{k≥n} |ξₖ − ξ| ≥ ε) → 0, n → ∞.
b) The sequence (ξₙ) is fundamental with probability one if and only if for any ε > 0
P(sup_{k≥n, l≥n} |ξₖ − ξₗ| ≥ ε) → 0, n → ∞.
Proof.
a) Let Aₙ(ε) = {ω : |ξₙ − ξ| ≥ ε}, A(ε) = ⋂_{n≥1} ⋃_{k≥n} Aₖ(ε). Then
{ω : ξₙ ↛ ξ} = ⋃_{ε>0} A(ε) = ⋃_{m≥1} A(1/m).
Therefore, statement a) is the result of the following chain of implications:
P(ω : ξₙ ↛ ξ) = 0 ⇔ P(⋃_{m≥1} A(1/m)) = 0 ⇔ P(A(1/m)) = 0 for all m ≥ 1 ⇔ P(A(ε)) = 0 for all ε > 0 ⇔ P(⋃_{k≥n} Aₖ(ε)) → 0, n → ∞, for all ε > 0 ⇔ P(sup_{k≥n} |ξₖ − ξ| ≥ ε) → 0, n → ∞, for all ε > 0.
b) Let Bₙ(ε) = {ω : sup_{k≥n, l≥n} |ξₖ − ξₗ| ≥ ε}, B(ε) = ⋂_{n≥1} Bₙ(ε). Then
{ω : (ξₙ(ω)) is not fundamental} = ⋃_{ε>0} B(ε),
and in the same way as in a) it is shown that P{ω : (ξₙ(ω)) is not fundamental} = 0 ⇔ P(Bₙ(ε)) → 0, n → ∞, for any ε > 0.
The theorem is proved.
Theorem 2 (Cauchy criterion for almost sure convergence).
In order for a sequence of random variables (ξₙ) to converge with probability one (to some random variable ξ), it is necessary and sufficient that it be fundamental (Cauchy) with probability one.
Proof.
If ξₙ → ξ (P-a.s.), then
sup_{k≥n, l≥n} |ξₖ − ξₗ| ≤ sup_{k≥n} |ξₖ − ξ| + sup_{l≥n} |ξₗ − ξ| = 2 sup_{k≥n} |ξₖ − ξ|,
from which, by Theorem 1, the necessity of the condition of the theorem follows.
Now let the sequence (ξₙ) be fundamental with probability one. Denote L = {ω : (ξₙ(ω)) is not fundamental}; then P(L) = 0. For every ω ∉ L the number sequence (ξₙ(ω)) is fundamental and, by the Cauchy criterion for number sequences, lim ξₙ(ω) exists. Put
ξ(ω) = lim ξₙ(ω) for ω ∉ L,  ξ(ω) = 0 for ω ∈ L.
The function so defined is a random variable, and ξₙ → ξ (P-a.s.).
The theorem has been proven.
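Theorem 1 recasts almost sure convergence in terms of tail suprema, which suggests a direct numerical check. A sketch (assumptions of the example: ξₖ is the running mean of k fair coin tosses, ξ = 1/2, and a finite horizon stands in for the supremum over all k ≥ n):

```python
import random

def sup_tail_prob(n, horizon, eps, paths=500, seed=2):
    """Estimate P(sup over n <= k <= horizon of |xi_k - 1/2| >= eps),
    where xi_k is the running mean of k fair coin flips."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(paths):
        total = 0
        exceeded = False
        for k in range(1, horizon + 1):
            total += rng.randint(0, 1)
            if k >= n and abs(total / k - 0.5) >= eps:
                exceeded = True
        if exceeded:
            hits += 1
    return hits / paths

print(sup_tail_prob(10, 400, 0.1), sup_tail_prob(100, 400, 0.1))
```

The estimated supremum probability drops as n grows, in agreement with criterion a) applied to the strong law of large numbers.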
1.2 Method of characteristic functions
The method of characteristic functions is one of the main tools of the analytical apparatus of probability theory. Along with random variables (taking real values), the theory of characteristic functions requires the use of complex-valued random variables.
Many of the definitions and properties relating to random variables carry over easily to the complex case. Thus, the mathematical expectation Mζ of a complex-valued random variable ζ = ξ + iη is considered defined if the mathematical expectations Mξ and Mη are defined; in this case, by definition, we set Mζ = Mξ + iMη. From the definition of independence of random elements it follows that complex-valued variables ζ₁ = ξ₁ + iη₁ and ζ₂ = ξ₂ + iη₂ are independent if and only if the pairs of random variables (ξ₁, η₁) and (ξ₂, η₂) are independent or, what is the same, the σ-algebras F_{ξ₁,η₁} and F_{ξ₂,η₂} are independent.
Along with the space L² of real random variables with finite second moment, one can introduce the Hilbert space of complex-valued random variables ζ = ξ + iη with M|ζ|² < ∞, where |ζ|² = ξ² + η², and the scalar product (ζ₁, ζ₂) = M ζ₁ζ̄₂, where ζ̄₂ is the complex conjugate random variable.
In algebraic operations, vectors a ∈ Rⁿ are treated as columns, and as row vectors a* = (a₁, a₂, …, aₙ). If a, b ∈ Rⁿ, then their scalar product (a, b) is understood as the quantity Σ_{i=1}^n aᵢbᵢ. It is clear that (a, b) = a*b.
If a ∈ Rⁿ and R = ||r_ij|| is a matrix of order n×n, then
(Ra, a) = Σ_{i,j=1}^n r_ij aᵢaⱼ.
Definition 1. Let F = F(x₁, …, xₙ) be an n-dimensional distribution function in (Rⁿ, B(Rⁿ)). Its characteristic function is the function
φ(t) = ∫_{Rⁿ} e^{i(t,x)} dF(x), t ∈ Rⁿ.
Definition 2. If ξ = (ξ₁, …, ξₙ) is a random vector defined on a probability space with values in Rⁿ, then its characteristic function is the function
φ_ξ(t) = ∫_{Rⁿ} e^{i(t,x)} dF_ξ(x), t ∈ Rⁿ,
where F_ξ = F_ξ(x₁, …, xₙ) is the distribution function of the vector ξ = (ξ₁, …, ξₙ).
If the distribution function F(x) has a density f = f(x), then
φ(t) = ∫_{Rⁿ} e^{i(t,x)} f(x) dx.
In this case, the characteristic function is nothing more than the Fourier transform of the function f(x).
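As a sketch of this connection (in Python; the choice of the standard normal law and the sample size are assumptions of the example), the empirical characteristic function (1/N) Σ e^{itxⱼ} can be compared with the known transform e^{−t²/2} of the N(0, 1) density:

```python
import cmath
import math
import random

def empirical_cf(sample, t):
    """Sample analogue of M e^{itX}: the mean of exp(i*t*x) over the sample."""
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

rng = random.Random(3)
sample = [rng.gauss(0.0, 1.0) for _ in range(20000)]

for t in (0.5, 1.0, 2.0):
    exact = math.exp(-t * t / 2)  # characteristic function of N(0, 1)
    print(t, exact, abs(empirical_cf(sample, t) - exact))
```

The discrepancy is of the Monte Carlo order 1/√N, as the CLT itself predicts.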
From (3) it follows that the characteristic function φ_ξ(t) of a random vector ξ can also be defined by the equality
φ_ξ(t) = M e^{i(t,ξ)}, t ∈ Rⁿ.
Basic properties of characteristic functions (in the case of n=1).
Let ξ = ξ(ω) be a random variable, F_ξ = F_ξ(x) its distribution function, and
φ_ξ(t) = M e^{itξ}
its characteristic function.
It should be noted that if ξ₁, …, ξₙ are independent random variables and Sₙ = ξ₁ + … + ξₙ, then
φ_{Sₙ}(t) = ∏_{k=1}^n φ_{ξₖ}(t).   (6)
Indeed,
φ_{Sₙ}(t) = M e^{it(ξ₁+…+ξₙ)} = M(e^{itξ₁} ⋯ e^{itξₙ}) = M e^{itξ₁} ⋯ M e^{itξₙ} = φ_{ξ₁}(t) ⋯ φ_{ξₙ}(t),
where we took advantage of the fact that the mathematical expectation of a product of independent (bounded) random variables is equal to the product of their mathematical expectations.
Property (6) is key in proving limit theorems for sums of independent random variables by the method of characteristic functions. By contrast, the distribution function F_{Sₙ} is expressed through the distribution functions of the individual terms in a much more complicated way, namely F_{Sₙ} = F_{ξ₁} * … * F_{ξₙ}, where the sign * means convolution of the distributions.
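Property (6) can be checked numerically. A sketch (assumed setup: three independent Uniform(0, 1) terms, whose characteristic function (e^{it} − 1)/(it) is known in closed form):

```python
import cmath
import random

def phi_uniform(t):
    """Characteristic function of U ~ Uniform(0, 1): (e^{it} - 1) / (it)."""
    return (cmath.exp(1j * t) - 1) / (1j * t)

rng = random.Random(4)
N = 20000
t = 1.5
# Empirical characteristic function of the sum S = U1 + U2 + U3
emp = sum(cmath.exp(1j * t * (rng.random() + rng.random() + rng.random()))
          for _ in range(N)) / N
exact = phi_uniform(t) ** 3  # property (6): product of the three factors
print(abs(emp - exact))
```

The empirical value of M e^{itS} agrees with the product of the three one-dimensional characteristic functions up to Monte Carlo error.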
Each distribution function in Rⁿ can be associated with a random variable that has this function as its distribution function. Therefore, when presenting the properties of characteristic functions, we can limit ourselves to the characteristic functions of random variables.
Theorem 1. Let ξ be a random variable with distribution function F = F(x) and φ(t) = M e^{itξ} its characteristic function.
The following properties hold:
1) |φ(t)| ≤ φ(0) = 1;
2) φ(t) is uniformly continuous on R;
3) φ(t) equals the complex conjugate of φ(−t);
4) φ(t) is a real-valued function if and only if the distribution F is symmetric;
5) if M|ξ|ⁿ < ∞ for some n ≥ 1, then for all r ≤ n the derivatives φ^(r)(t) exist and
φ^(r)(0) = i^r M ξ^r;
6) if the derivative φ^(2n)(0) exists and is finite, then M ξ^{2n} < ∞;
7) if M|ξ|ⁿ < ∞ for all n ≥ 1 and lim supₙ (M|ξ|ⁿ)^{1/n}/n = 1/(eR) < ∞, then for all |t| < R
φ(t) = Σ_{n=0}^∞ ((it)ⁿ/n!) M ξⁿ.

The following theorem shows that the characteristic function uniquely determines the distribution function.

Theorem 2 (uniqueness). Let F and G be two distribution functions having the same characteristic function, that is,
∫_R e^{itx} dF(x) = ∫_R e^{itx} dG(x) for all t ∈ R.
Then F = G.

The theorem says that the distribution function F = F(x) can be uniquely recovered from its characteristic function. The following theorem gives an explicit representation of F in terms of φ.

Theorem 3 (inversion formula). Let F = F(x) be a distribution function and φ = φ(t) its characteristic function.
a) For any two points a, b (a < b) at which the function F = F(x) is continuous,
F(b) − F(a) = lim_{c→∞} (1/2π) ∫_{−c}^{c} ((e^{−ita} − e^{−itb})/(it)) φ(t) dt.
b) If ∫_R |φ(t)| dt < ∞, then the distribution function F(x) has a density f(x), and
f(x) = (1/2π) ∫_R e^{−itx} φ(t) dt.

Theorem 4. In order for the components of a random vector to be independent, it is necessary and sufficient that its characteristic function be the product of the characteristic functions of the components:
φ_ξ(t₁, …, tₙ) = φ_{ξ₁}(t₁) ⋯ φ_{ξₙ}(tₙ).

Bochner-Khinchin theorem.
Let φ(t) be a continuous function with φ(0) = 1. In order for φ to be characteristic, it is necessary and sufficient that it be non-negative definite, that is, for any real t₁, …, tₙ and any complex numbers λ₁, …, λₙ
Σ_{i,j=1}^n φ(tᵢ − tⱼ) λᵢ λ̄ⱼ ≥ 0.

Theorem 5. Let φ(t) be the characteristic function of a random variable ξ.
a) If |φ(t₀)| = 1 for some t₀ ≠ 0, then the random variable ξ is lattice with step h = 2π/t₀, that is,
P(ξ ∈ {a + kh, k = 0, ±1, ±2, …}) = 1
for some constant a.
b) If |φ(t₁)| = |φ(t₂)| = 1 for two different points t₁ and t₂ whose ratio t₁/t₂ is irrational, then the random variable ξ is degenerate: P(ξ = a) = 1, where a is some constant.
c) If |φ(t)| = 1 for all t, then the random variable ξ is degenerate.

1.3 Central limit theorem for independent identically distributed random variables

Let (ξₙ) be a sequence of independent, identically distributed random variables with expectation Mξₙ = a and variance Dξₙ = σ², let Sₙ = ξ₁ + … + ξₙ, and let Φ(x) be the distribution function of the normal law with parameters (0, 1). Let us introduce another sequence of random variables
ζₙ = (Sₙ − na)/(σ√n).

Theorem. If 0 < σ² < ∞, then as n → ∞
P(ζₙ < x) → Φ(x)
uniformly in x (x ∈ R). In this case the sequence (ζₙ) is called asymptotically normal.

Since M ζₙ² = 1, it follows from the continuity theorems that, along with the weak convergence M f(ζₙ) → M f(ζ) for any continuous bounded f, there is also convergence M f(ζₙ) → M f(ζ) for any continuous f such that |f(x)| ≤ c(1 + |x|) for some c > 0.

Proof. Uniform convergence here is a consequence of weak convergence and the continuity of Φ(x). Further, without loss of generality we may assume a = 0, since otherwise we could consider the sequence (ξₙ − a), and the sequence (ζₙ) would not change. Therefore, to prove the required convergence it suffices to show that
φ_{ζₙ}(t) → e^{−t²/2} when a = 0.
We have
φ_{ζₙ}(t) = (φ(t/(σ√n)))ⁿ,
where φ(t) = M e^{itξ₁}. Since M ξ₁² exists, the expansion
φ(t) = 1 − (σ²t²)/2 + o(t²), t → 0,
exists and is valid. Therefore, as n → ∞,
φ_{ζₙ}(t) = (1 − t²/(2n) + o(1/n))ⁿ → e^{−t²/2}.
The theorem is proved.

1.4 The main tasks of mathematical statistics, their brief description

The establishment of the laws that govern mass random phenomena is based on the study of statistical data - the results of observations.
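The central limit theorem of Section 1.3 lends itself to a direct simulation check. A sketch (assumptions of the example: ξᵢ uniform on (0, 1), so a = 1/2 and σ² = 1/12; the sample sizes are arbitrary):

```python
import math
import random

def Phi(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clt_probability(n, x, reps=20000, seed=5):
    """Estimate P(zeta_n < x) for zeta_n = (S_n - n*a) / (sigma * sqrt(n)),
    with xi_i ~ Uniform(0, 1), a = 1/2, sigma^2 = 1/12."""
    rng = random.Random(seed)
    a, sigma = 0.5, math.sqrt(1.0 / 12.0)
    hits = 0
    for _ in range(reps):
        s = sum(rng.random() for _ in range(n))
        if (s - n * a) / (sigma * math.sqrt(n)) < x:
            hits += 1
    return hits / reps

for x in (-1.0, 0.0, 1.0):
    print(x, clt_probability(30, x), Phi(x))
```

Already at n = 30 the simulated probabilities sit close to Φ(x), which is the practical content of the theorem.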
The first task of mathematical statistics is to indicate methods of collecting and grouping statistical information. The second task is to develop methods for analyzing statistical data according to the objectives of the study.

When solving any problem of mathematical statistics there are two sources of information. The first and most definite (explicit) one is the result of observations (an experiment) in the form of a sample from some general population of a scalar or vector random variable. Here the sample size n may be fixed, or it may grow during the experiment (i.e., so-called sequential statistical analysis procedures may be used). The second source is all the a priori information about the properties of interest of the object under study accumulated up to the current moment. Formally, the amount of a priori information is reflected in the initial statistical model chosen when solving the problem.

However, one cannot speak of an approximate determination, in the usual sense, of the probability of an event from the results of experiments. An approximate determination of a quantity usually means that one can indicate error limits that the error is guaranteed not to exceed. The frequency of an event is random for any number of experiments because of the randomness of the results of the individual experiments, and it may deviate significantly from the probability of the event. Therefore, defining the unknown probability of an event as its frequency over a large number of experiments, we cannot indicate error limits and guarantee that the error will not exceed them. For this reason, in mathematical statistics one usually speaks not of approximate values of unknown quantities but of their suitable values - estimates.
The problem of estimating unknown parameters arises when the distribution function of the population is known up to a parameter θ. In this case one must find a statistic whose sample value, for the realization xₙ of the random sample under consideration, could be regarded as an approximate value of the parameter θ. A statistic whose sample value for any realization xₙ is taken as an approximate value of the unknown parameter is called a point estimate, or simply an estimate; its sample value is the value of the point estimate. A point estimate must satisfy quite definite requirements in order for its sample value to correspond to the true value of the parameter.

Another approach to the problem is also possible: find statistics θ̂₁ and θ̂₂ such that with probability γ the inequality θ̂₁ < θ < θ̂₂ holds. In this case one speaks of interval estimation of θ; the interval (θ̂₁, θ̂₂) is called a confidence interval for θ with confidence coefficient γ.

Having estimated one or another statistical characteristic from the results of experiments, it is natural to ask: how consistent with the experimental data is the assumption (hypothesis) that the unknown characteristic has exactly the value obtained in its estimation? Thus arises the second important class of problems of mathematical statistics - problems of testing hypotheses.

In a sense, the problem of testing a statistical hypothesis is the inverse of the problem of parameter estimation. In estimating a parameter we know nothing about its true value; in testing a statistical hypothesis, its value is for some reason assumed known, and this assumption must be verified from the results of the experiment. In many problems of mathematical statistics one considers sequences of random variables converging in one sense or another to some limit (a random variable or a constant) as n → ∞.
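The interval-estimation idea admits a short numerical sketch (assumptions of the example: normal observations with known variance, the normal-approximation interval with quantile 1.96, i.e. confidence coefficient γ = 0.95; all names and constants are illustrative):

```python
import math
import random

def confidence_interval(sample, sigma, z=1.96):
    """Normal-approximation interval for the mean: mean +/- z * sigma / sqrt(n)."""
    n = len(sample)
    m = sum(sample) / n
    half = z * sigma / math.sqrt(n)
    return m - half, m + half

def coverage(a=4.0, sigma=1.0, n=50, reps=2000, seed=6):
    """Fraction of intervals that cover the true mean a (should be near 0.95)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(a, sigma) for _ in range(n)]
        lo, hi = confidence_interval(sample, sigma)
        if lo < a < hi:
            hits += 1
    return hits / reps

print(coverage())
```

The long-run coverage frequency is what the confidence coefficient γ actually promises: it is a property of the procedure, not of any single interval.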
Thus, the main tasks of mathematical statistics are the development of methods for finding estimates and studying the accuracy of their approximation to the characteristics being estimated, and the development of methods for testing hypotheses.

1.5 Testing statistical hypotheses: basic concepts

The development of rational methods for testing statistical hypotheses is one of the main tasks of mathematical statistics. A statistical hypothesis (or simply a hypothesis) is any statement about the form or properties of the distribution of the random variables observed in an experiment.

Let there be a sample that is a realization of a random sample from a general population whose distribution density depends on an unknown parameter θ. Statistical hypotheses about the unknown true value of the parameter are called parametric hypotheses. Moreover, if θ is a scalar, then one speaks of one-parameter hypotheses, and if it is a vector, of multi-parameter hypotheses.

A statistical hypothesis is called simple if it has the form H: θ = θ₀, where θ₀ is some specified parameter value. A statistical hypothesis is called composite if it has the form H: θ ∈ T, where T is a set of parameter values consisting of more than one element. In the case of testing two simple statistical hypotheses of the form H₀: θ = θ₀, H₁: θ = θ₁, where θ₀ and θ₁ are two given (distinct) values of the parameter, the first hypothesis is usually called the main one, and the second the alternative, or competing, hypothesis.

A criterion, or statistical test, for testing hypotheses is the rule by which, based on the sample data, a decision is made about the validity of either the first or the second hypothesis. The criterion is specified by means of a critical set V, which is a subset of the sample space of the random sample.
The decision is made as follows:
1) if the sample belongs to the critical set V, then the main hypothesis is rejected and the alternative hypothesis is accepted;
2) if the sample does not belong to the critical set (i.e., belongs to the complement of V in the sample space), then the alternative hypothesis is rejected and the main hypothesis is accepted.

When using any criterion, errors of the following kinds are possible:
1) rejecting the main hypothesis when it is true - an error of the first kind;
2) accepting the main hypothesis when it is false (i.e., when the alternative is true) - an error of the second kind.

The probabilities of errors of the first and second kind are denoted α and β:
α = P(x ∈ V | H₀), β = P(x ∉ V | H₁),
where P(· | H) is the probability of an event given that the hypothesis H is true. These probabilities are calculated using the distribution density of the random sample. The probability α of an error of the first kind is also called the significance level of the criterion. The value 1 − β, equal to the probability of rejecting the main hypothesis when it is false, is called the power of the criterion.

1.6 Independence criterion

There is a sample ((X₁, Y₁), …, (Xₙ, Yₙ)) from a two-dimensional distribution with an unknown distribution function F(x, y), for which it is necessary to test the hypothesis H: F(x, y) = F₁(x)F₂(y), where F₁ and F₂ are some one-dimensional distribution functions. A simple goodness-of-fit test for the hypothesis H can be constructed on the basis of the χ² methodology. This technique is used for discrete models with a finite number of outcomes, so we agree that the random variable X takes a finite number s of values x₁, …, x_s, and the second component Y takes k values y₁, …, y_k. If the original model has a different structure, then the possible values of the random variables are preliminarily grouped separately in the first and second components. In this case, the set of values of X is divided into s intervals, the set of values of Y into k intervals, and the set of values of the pair (X, Y) itself into N = sk rectangles.
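The statistic constructed in this section can be sketched in code (a simplified illustration, not the derivation itself; as a check it is applied to the drug-administration counts of Table 2.2 from the practical part):

```python
def chi2_independence(table):
    """Chi-square independence statistic for an s x k contingency table of
    counts nu[i][j]; returns (t, degrees_of_freedom)."""
    s, k = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(s)) for j in range(k)]
    t = 0.0
    for i in range(s):
        for j in range(k):
            # n * p_i. * p_.j with the maximum likelihood estimates
            expected = row_sums[i] * col_sums[j] / n
            t += (table[i][j] - expected) ** 2 / expected
    return t, (s - 1) * (k - 1)

# Check on the drug-administration counts (Table 2.2 of the practical part):
t, df = chi2_independence([[11, 17, 16], [20, 23, 19]])
print(round(t, 6), df)  # about 0.7346 with 2 degrees of freedom
```

The hypothesis H is rejected when t exceeds the χ² quantile with (s − 1)(k − 1) degrees of freedom at the chosen significance level.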
Let us denote by ν_ij the number of observations of the pair (xᵢ, yⱼ) (the number of sample elements falling into the corresponding rectangle if the data are grouped), so that Σ_{i,j} ν_ij = n. It is convenient to arrange the observation results in a contingency table of two characteristics (Table 1.1), where ν_i. = Σ_j ν_ij and ν_.j = Σ_i ν_ij. In applications, X and Y usually mean two criteria by which the observation results are classified.

Table 1.1

            y₁     y₂    …    y_k    Sum
  x₁        ν₁₁    ν₁₂   …    ν₁ₖ    ν₁.
  x₂        ν₂₁    ν₂₂   …    ν₂ₖ    ν₂.
  …         …      …     …    …      …
  x_s       ν_s1   ν_s2  …    ν_sk   ν_s.
  Sum       ν.₁    ν.₂   …    ν.ₖ    n

Let p_ij = P(X = xᵢ, Y = yⱼ), i = 1, …, s, j = 1, …, k. Then the independence hypothesis means that there exist s + k constants p_i. and p_.j with Σᵢ p_i. = 1 and Σⱼ p_.j = 1 such that
H: p_ij = p_i. p_.j, i = 1, …, s, j = 1, …, k.

Thus, the hypothesis H comes down to the statement that the frequencies ν_ij (their number is N = sk) are distributed according to a polynomial law with outcome probabilities having the indicated specific structure; the vector of outcome probabilities is determined by the values of r = s + k − 2 unknown parameters.

To test this hypothesis, we find maximum likelihood estimates of the unknown parameters determining the scheme under consideration. If the null hypothesis is true, the likelihood function has the form
L(p) = c ∏ᵢ p_i.^{ν_i.} ∏ⱼ p_.j^{ν_.j},
where the factor c does not depend on the unknown parameters. From here, using the Lagrange method of undetermined multipliers, we obtain that the required estimates have the form
p̂_i. = ν_i./n,  p̂_.j = ν_.j/n.
Therefore the statistic
t = Σ_{i,j} (ν_ij − n p̂_i. p̂_.j)²/(n p̂_i. p̂_.j) = n Σ_{i,j} (ν_ij − ν_i.ν_.j/n)²/(ν_i.ν_.j)
has in the limit, as n → ∞, the χ² distribution with (s − 1)(k − 1) degrees of freedom, since the number of degrees of freedom in the limit distribution is N − 1 − r = sk − 1 − (s + k − 2) = (s − 1)(k − 1).

So, for sufficiently large n, the following hypothesis testing rule can be used: the hypothesis H is rejected if and only if the statistic value t calculated from the actual data exceeds the χ² quantile with (s − 1)(k − 1) degrees of freedom at the chosen significance level. This criterion has an asymptotically (as n → ∞) given significance level and is called the χ² independence criterion.

2. PRACTICAL PART

2.1 Solutions to problems on types of convergence

Problem 1. Prove that convergence almost surely implies convergence in probability. Give an example showing that the converse is not true.

Solution.
Let a sequence of random variables (ξₙ) converge to a random variable ξ almost surely. Then for any ε > 0
P(sup_{k≥n} |ξₖ − ξ| ≥ ε) → 0, n → ∞.
Since
P(|ξₙ − ξ| ≥ ε) ≤ P(sup_{k≥n} |ξₖ − ξ| ≥ ε),
the almost sure convergence of ξₙ to ξ implies that ξₙ converges to ξ in probability.

But the converse statement is not true. Let (ξₙ) be a sequence of independent random variables having the same distribution function F(x), equal to zero for x ≤ 0 and to 1 − e^{−x} for x > 0. Consider the sequence
ηₙ = ξₙ/ln n, n ≥ 2.
This sequence converges to zero in probability, since
P(|ηₙ| > ε) = P(ξₙ > ε ln n) = e^{−ε ln n} = n^{−ε} → 0
for any fixed ε > 0. However, convergence to zero almost surely does not take place. Indeed, for ε < 1 the events {ξₙ > ε ln n} are independent and Σₙ n^{−ε} = ∞, so by the second Borel-Cantelli lemma
P(sup_{k≥n} ηₖ > ε) → 1,
that is, with probability 1, for any n the sequence contains realizations exceeding ε. Note that in the presence of some additional conditions on the variables ξₙ, convergence in probability does imply convergence almost surely.

Problem 2. Let (ξₙ) be a monotone sequence. Prove that in this case convergence of ξₙ to ξ in probability entails convergence of ξₙ to ξ with probability 1.

Solution. Let (ξₙ) be a monotonically decreasing sequence, that is, ξₙ ≥ ξₙ₊₁. To simplify the reasoning, assume ξ ≡ 0 and ξₙ ≥ 0 for all n. Let ξₙ converge to 0 in probability, but suppose convergence almost surely does not take place. Then there exist ε > 0 and δ > 0 such that for all n
P(sup_{k≥n} ξₖ ≥ ε) ≥ δ > 0.
But by monotonicity sup_{k≥n} ξₖ = ξₙ, so what has been said also means that for all n
P(ξₙ ≥ ε) ≥ δ > 0,
which contradicts the convergence of ξₙ to 0 in probability. Thus, a monotone sequence ξₙ that converges to ξ in probability also converges with probability 1 (almost surely).

Problem 3. Let the sequence (ξₙ) converge to ξ in probability. Prove that from this sequence one can extract a subsequence (ξ_{n_k}) converging to ξ with probability 1 as k → ∞.

Solution. Let (εₖ) be a sequence of positive numbers with εₖ → 0, and let the positive numbers δₖ be such that the series Σₖ δₖ converges. Construct a sequence of indices n₁ < n₂ < … such that
P(|ξ_{n_k} − ξ| > εₖ) < δₖ.
Then the series Σₖ P(|ξ_{n_k} − ξ| > εₖ) converges. Since the series converges, for any ε > 0 the remainder of the series tends to zero; but then
P(sup_{k≥m} |ξ_{n_k} − ξ| > ε) ≤ Σ_{k≥m} P(|ξ_{n_k} − ξ| > εₖ) → 0, m → ∞
(for m so large that εₖ ≤ ε when k ≥ m), and by the criterion of Theorem 1 the subsequence ξ_{n_k} converges to ξ almost surely.

Problem 4. Prove that convergence in mean of any positive order implies convergence in probability. Give an example showing that the converse is not true.

Solution. Let the sequence (ξₙ) converge to ξ in mean of order p > 0, that is, M|ξₙ − ξ|^p → 0. Let us use the generalized Chebyshev inequality: for arbitrary ε > 0 and p > 0
P(|ξₙ − ξ| ≥ ε) ≤ M|ξₙ − ξ|^p/ε^p.
Letting n → ∞ and taking into account that M|ξₙ − ξ|^p → 0, we obtain P(|ξₙ − ξ| ≥ ε) → 0, that is, ξₙ converges to ξ in probability.

However, convergence in probability does not entail convergence in mean of order p > 0. This is illustrated by the following example. Consider the probability space ⟨Ω, F, P⟩, where Ω = [0, 1], F = B is the Borel σ-algebra and P is the Lebesgue measure. Define a sequence of random variables as follows:
ξₙ(ω) = eⁿ for 0 ≤ ω ≤ 1/n, ξₙ(ω) = 0 for 1/n < ω ≤ 1.
The sequence ξₙ converges to 0 in probability, since P(|ξₙ| > ε) = 1/n → 0, but for any p > 0
M|ξₙ|^p = e^{np}/n → ∞, n → ∞,
that is, it does not converge in mean.

Problem 5. Let ξₙ → ξ in probability and |ξₙ| ≤ C for all n, where C is a constant. Prove that in this case ξₙ converges to ξ in mean square.

Solution. Note first that |ξ| ≤ C with probability 1 (since some subsequence of (ξₙ) converges to ξ almost surely). Let us estimate M(ξₙ − ξ)². Let ε be an arbitrary positive number. On the event {|ξₙ − ξ| < ε} we have (ξₙ − ξ)² < ε², and on the event {|ξₙ − ξ| ≥ ε} we have (ξₙ − ξ)² ≤ (2C)². Hence
M(ξₙ − ξ)² ≤ ε² + 4C² P(|ξₙ − ξ| ≥ ε).
Since ε is arbitrarily small and P(|ξₙ − ξ| ≥ ε) → 0, it follows that M(ξₙ − ξ)² → 0 as n → ∞, that is, ξₙ → ξ in mean square.

Problem 6. Prove that if ξₙ converges to ξ in probability, then weak convergence takes place. Give an example showing that the converse is not true.

Solution. Let us prove that if ξₙ →P ξ, then Fₙ(x) → F(x) at each point x of continuity of F (this is a necessary and sufficient condition for weak convergence), where Fₙ is the distribution function of ξₙ and F that of ξ.

Let x be a point of continuity of F. If ξ < x − ε, then at least one of the inequalities ξₙ < x or |ξₙ − ξ| ≥ ε holds; hence
F(x − ε) ≤ Fₙ(x) + P(|ξₙ − ξ| ≥ ε).
Similarly, if ξₙ < x, then at least one of the inequalities ξ < x + ε or |ξₙ − ξ| ≥ ε holds, and
Fₙ(x) ≤ F(x + ε) + P(|ξₙ − ξ| ≥ ε).
Since ξₙ →P ξ, for arbitrarily small ε > 0 there exists N such that for all n > N
F(x − ε) − ε ≤ Fₙ(x) ≤ F(x + ε) + ε.
On the other hand, since x is a point of continuity of F, for any γ > 0 one can find ε > 0 such that |F(x ± ε) − F(x)| < γ. So, for arbitrarily small γ and ε there exists N such that for n > N
|Fₙ(x) − F(x)| < γ + ε,
which means that Fₙ(x) → F(x) at all points of continuity. Consequently, weak convergence follows from convergence in probability.

The converse statement, generally speaking, does not hold. To verify this, take a sequence of random variables (ξₙ) that are not constants with probability 1 and have the same distribution function F(x), and assume that for all n the variables ξₙ and ξ are independent. Obviously, weak convergence takes place, since all members of the sequence have the same distribution function. Consider P(|ξₙ − ξ| ≥ ε): from the independence and identical distribution of ξₙ and ξ it follows that this probability equals a constant c(ε) that does not depend on n. Choosing among the distribution functions of non-degenerate random variables an F(x) for which c(ε) is non-zero for all sufficiently small ε, we see that P(|ξₙ − ξ| ≥ ε) does not tend to zero as n grows without bound, and convergence in probability does not take place.

Problem 7. Let there be weak convergence ξₙ → ξ, where ξ = c with probability 1 is a constant. Prove that in this case ξₙ converges to c in probability.

Solution. Let ξ = c with probability 1. Then weak convergence means Fₙ(x) → F(x) for any x ≠ c, where F(x) = 0 for x ≤ c and F(x) = 1 for x > c. Hence Fₙ(x) → 0 for x < c and Fₙ(x) → 1 for x > c. It follows that for any ε > 0 the probabilities P(ξₙ ≤ c − ε) and P(ξₙ ≥ c + ε) tend to zero as n → ∞. This means that P(|ξₙ − c| ≥ ε) tends to zero, that is, ξₙ converges to c in probability.

2.2 Solving problems on the central limit theorem (CLT)

Problem 1. The value of the gamma function Γ(x) at x = … is calculated by the Monte Carlo method. Let us find the minimum number of tests necessary so that with probability 0.95 we can expect the relative error of the calculation to be less than one percent.

To within the required accuracy we have
Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt.   (1)
Having made a change of variable in (1), we arrive at an integral over a finite interval, which can be represented in the form M f(η), where η is distributed uniformly on (0, 1). Let N statistical tests be carried out.
Then the statistical analogue of M f(η) is the quantity
η̄_N = (1/N) Σ_{i=1}^N f(ηᵢ),
where η₁, …, η_N are independent random variables with a uniform distribution on (0, 1). From the CLT it follows that η̄_N is asymptotically normal with parameters (M f(η), D f(η)/N). This means that the minimum number of tests ensuring with probability 0.95 a relative error of the calculation of no more than one percent is
N ≥ (1.96)² D f(η)/(0.01 M f(η))²,
where 1.96 is the quantile of the standard normal law corresponding to probability 0.95.

Problem 2. We consider a sequence of 2000 independent identically distributed random variables with mathematical expectation 4 and variance 1.8. The arithmetic mean of these quantities is a random variable η. Determine the probability that η takes a value in the interval (3.94; 4.12).

Solution. Let ξ₁, …, ξₙ, … be a sequence of independent random variables having the same distribution with Mξᵢ = a = 4 and Dξᵢ = σ² = 1.8. Then the CLT applies to the sequence (ξₙ). The random variable
η = (1/n) Σ_{i=1}^n ξᵢ
is asymptotically normal with Mη = a and Dη = σ²/n. The probability that it takes a value in the interval (α, β) is
P(α < η < β) ≈ Φ((β − a)/√(σ²/n)) − Φ((α − a)/√(σ²/n)).
For n = 2000, α = 3.94, β = 4.12 we get √(σ²/n) = √(1.8/2000) = 0.03 and
P(3.94 < η < 4.12) ≈ Φ(4) − Φ(−2) ≈ 0.977.

2.3 Testing hypotheses using the independence criterion

Problem 1. As a result of a study it was found that 782 light-eyed fathers also have light-eyed sons, while 89 light-eyed fathers have dark-eyed sons; 50 dark-eyed fathers also have dark-eyed sons, while 79 dark-eyed fathers have light-eyed sons. Is there a relationship between the eye color of fathers and the eye color of their sons? Take the confidence level to be 0.99.

Table 2.1

  Children \ Fathers   Light-eyed   Dark-eyed   Sum
  Light-eyed              782           79       861
  Dark-eyed                89           50       139
  Sum                     871          129      1000

H: there is no relationship between the eye color of children and fathers.
H₁: there is a relationship between the eye color of children and fathers.
Here s = k = 2, and the statistic value is t = 76.48 with 1 degree of freedom (the calculations were made in Mathematica 6). From the tables, χ²₀.₀₁(1) = 6.635. Since t > χ²₀.₀₁(1), the hypothesis H about the absence of a relationship between the eye color of fathers and children should be rejected at the significance level 0.01, and the alternative hypothesis H₁ accepted.

Problem 2. It is stated that the effect of a drug depends on the method of its application. Check this statement using the data presented in Table
2.2. Take the confidence level to be 0.95.

Table 2.2

  Result         Method of application
                   A     B     C
  Unfavorable     11    17    16
  Favorable       20    23    19

Solution. To solve this problem, we use a contingency table of two characteristics.

Table 2.3

  Result         Method of application    Sum
                   A     B     C
  Unfavorable     11    17    16           44
  Favorable       20    23    19           62
  Sum             31    40    35          106

H: the effect of the drug does not depend on the method of application.
H₁: the effect of the drug depends on the method of application.
The statistic is calculated by the formula
t = n Σ_{i,j} (ν_ij − ν_i.ν_.j/n)²/(ν_i.ν_.j).
Here s = 2, k = 3, and t = 0.734626 with 2 degrees of freedom (the calculations were made in Mathematica 6). From the distribution tables we find χ²₀.₀₅(2) = 5.991. Since t < χ²₀.₀₅(2), the hypothesis H about the absence of a dependence of the effect of the drug on the method of application should be accepted at the significance level 0.05.

Conclusion

This paper presents theoretical material from the sections "Independence Criterion" and "Limit Theorems of Probability Theory" of the course "Probability Theory and Mathematical Statistics". In the course of the work, the independence criterion was tested in practice, and for given sequences of independent random variables the fulfillment of the central limit theorem was checked. This work helped to improve my knowledge of these sections of probability theory, to work with literature, and to master firmly the technique of applying the independence criterion.
Fundamentals of probability theory and mathematical statistics

Many things are incomprehensible to us not because our concepts are weak, but because these things are not within the circle of our concepts.

The main goal of studying mathematics in secondary specialized educational institutions is to give students a set of mathematical knowledge and skills necessary for studying other program disciplines that use mathematics to one degree or another, for the ability to perform practical calculations, and for the formation and development of logical thinking.

In this work, all the basic concepts of the section of mathematics "Fundamentals of Probability Theory and Mathematical Statistics", provided for by the program and the State Educational Standards of Secondary Vocational Education (Ministry of Education of the Russian Federation, Moscow, 2002), are introduced consistently, and the main theorems are formulated, most of them without proof. The main problems and methods for solving them, and the technology of applying these methods to solving practical problems, are considered. The presentation is accompanied by detailed comments and numerous examples.

The methodological instructions can be used for initial familiarization with the material being studied, for taking lecture notes, for preparing for practical classes, and for consolidating the acquired knowledge, skills and abilities. In addition, the manual will also be useful to undergraduate students as a reference tool, allowing them to recall quickly what was studied earlier. At the end of the work there are examples and problems that students can solve in self-control mode. The methodological instructions are intended for part-time and full-time students.

BASIC CONCEPTS

Probability theory studies the objective patterns of mass random events. It is the theoretical basis for mathematical statistics, which deals with the development of methods for collecting, describing and processing the results of observations.
Through observations (tests, experiments), i.e., experience in the broad sense of the word, knowledge of the phenomena of the real world takes place. In our practical activity we often encounter phenomena whose outcome cannot be predicted, whose outcome depends on chance. A random phenomenon can be characterized by the ratio of the number of its occurrences to the number of trials, in each of which, under the same conditions of all trials, it could occur or not occur.

Probability theory is a branch of mathematics in which random phenomena (events) are studied and the regularities of their mass repetition are identified. Mathematical statistics is a branch of mathematics that deals with methods for collecting, systematizing, processing and using statistical data to obtain scientifically based conclusions and make decisions. Statistical data here means a set of numbers that represent the quantitative characteristics of the features of the studied objects that interest us. Statistical data are obtained as a result of specially designed experiments and observations.

Statistical data by their very nature depend on many random factors; therefore, mathematical statistics is closely related to probability theory, which is its theoretical basis.

I. PROBABILITY. THEOREMS OF ADDITION AND MULTIPLICATION OF PROBABILITIES

1.1. Basic concepts of combinatorics

In the branch of mathematics called combinatorics, problems related to the consideration of sets and the composition of various combinations of the elements of these sets are solved. For example, if we take 10 different digits 0, 1, 2, 3, …, 9 and make combinations of them, we obtain different numbers, for example 143, 431, 5671, 1207, 43, etc.
We see that some of these combinations differ only in the order of the digits (for example, 143 and 431), others - in the digits included in them (for example, 5671 and 1207), and others also differ in the number of digits (for example, 143 and 43). Thus, the resulting combinations satisfy various conditions. Depending on the rules of composition, three types of combinations can be distinguished: permutations, placements, combinations.
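The distinction between the three types of combinations can be made concrete with a short sketch (using Python's itertools; the digit sets are the illustrative ones used above):

```python
from itertools import combinations, permutations

digits = [1, 4, 3]
# Permutations: the same elements, different order (143, 134, 413, ...)
perms = [''.join(map(str, p)) for p in permutations(digits)]
print(perms)          # 6 orderings of the three digits

# Placements of 4 elements taken 2 at a time: order matters
place = list(permutations([0, 1, 2, 3], 2))
print(len(place))     # 12 = 4 * 3

# Combinations of 4 elements taken 2 at a time: order is ignored
comb = list(combinations([0, 1, 2, 3], 2))
print(len(comb))      # 6 = 12 / 2!
```

Dividing the count of placements by the number of orderings of the selected elements yields the count of combinations, which is exactly the relation derived below.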
Let us first get acquainted with the concept of the factorial.

The product of all natural numbers from 1 to n inclusive is called n-factorial and is written n! = 1 · 2 · 3 · … · n.

Example 1. Calculate: a) …; b) …; c) …. Solution. a) …. b) Since …, it can be taken out of the brackets, which gives …. c) ….

Permutations. Combinations of n elements that differ from one another only in the order of the elements are called permutations. Permutations are denoted by the symbol P_n, where n is the number of elements in each permutation. (P is the first letter of the French word permutation, "rearrangement".) The number of permutations can be calculated by the formula P_n = n · (n − 1) · … · 2 · 1, or, using the factorial, P_n = n!. Recall that 0! = 1 and 1! = 1.
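As an illustrative sketch (not part of the original text), the equality P_n = n! can be checked in Python by comparing `math.factorial` with a direct enumeration of all orderings via `itertools.permutations`:

```python
import math
from itertools import permutations

n = 6

# n! as defined in the text: the product 1 * 2 * ... * n
assert math.factorial(n) == 720

# P_n = n!: count all orderings of n distinct elements by brute force
count = sum(1 for _ in permutations(range(n)))
assert count == math.factorial(n)

# Conventions mentioned in the text: 0! = 1 and 1! = 1
assert math.factorial(0) == 1 and math.factorial(1) == 1

print(count)  # 720
```

The brute-force count agrees with the formula for any small n, which is exactly the content of the definition of a permutation.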
Example 2. In how many ways can six different books be arranged on one shelf? Solution. The required number of ways equals the number of permutations of 6 elements: P_6 = 6! = 720.

Placements. Placements of m elements taken n at a time are combinations that differ from one another either in the elements themselves (at least one) or in the order of their arrangement. Placements are denoted by the symbol A_m^n, where m is the number of all available elements and n is the number of elements in each combination. (A is the first letter of the French word arrangement, which means "placement, putting in order".) It is assumed that n ≤ m. The number of placements can be calculated by the formula A_m^n = m(m − 1) · … · (m − n + 1), i.e. the number of all possible placements of m elements taken n at a time equals the product of n consecutive integers of which the largest is m. In factorial form, A_m^n = m! / (m − n)!.

Example 3. How many ways of distributing three vouchers to sanatoriums of different profiles can be drawn up for five applicants? Solution. The required number of ways equals the number of placements of 5 elements taken 3 at a time: A_5^3 = 5 · 4 · 3 = 60.

Combinations. Combinations are all possible groupings of m elements taken n at a time that differ from one another in at least one element (here m and n are natural numbers with n ≤ m). The number of combinations of m elements taken n at a time is denoted C_m^n. (C is the first letter of the French word combinaison, "combination".) In general, the number of combinations of m elements taken n at a time equals the number of placements of m elements taken n at a time divided by the number of permutations of n elements: C_m^n = A_m^n / P_n. Using the factorial formulas for the numbers of placements and permutations, we obtain C_m^n = m! / (n!(m − n)!).

Example 4. In a team of 25 people, four must be assigned to work in a certain area. In how many ways can this be done? Solution. Since the order of the four chosen people does not matter, this can be done in C_25^4 ways. By the first formula, C_25^4 = (25 · 24 · 23 · 22) / (1 · 2 · 3 · 4) = 12650.
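The formulas for placements and combinations, and the relation C_m^n = A_m^n / P_n between them, can be verified with Python's standard `math.perm` and `math.comb` (a sketch, not part of the original text):

```python
import math

# Placements: A(m, n) = m! / (m - n)!
# Example 3: three vouchers for five applicants, order matters
assert math.perm(5, 3) == 5 * 4 * 3 == 60

# Combinations: C(m, n) = m! / (n! * (m - n)!)
# Example 4: choose 4 workers out of 25, order irrelevant
assert math.comb(25, 4) == 12650

# The relation from the text: C(m, n) = A(m, n) / P_n
m, n = 25, 4
assert math.comb(m, n) == math.perm(m, n) // math.factorial(n)

print(math.perm(5, 3), math.comb(25, 4))  # 60 12650
```

Dividing the placement count by n! collapses the n! orderings of each chosen group into a single combination, which is why the relation holds.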
In addition, the following formulas, expressing the basic properties of combinations, are used when solving problems: C_m^0 = C_m^m = 1 (by definition, 0! = 1), and C_m^n = C_m^(m − n).

1.2. Solving combinatorial problems

Task 1. Sixteen subjects are studied at the faculty. Three subjects must be put on the schedule for Monday. In how many ways can this be done? Solution. There are as many ways to schedule three subjects out of 16 as there are placements of 16 elements taken 3 at a time: A_16^3 = 16 · 15 · 14 = 3360.

Task 2. Out of 15 objects, 10 objects must be selected. In how many ways can this be done? Solution. C_15^10 = C_15^5 = 3003.

Task 3. Four teams took part in a competition. How many distributions of places among them are possible? Solution. P_4 = 4! = 24.

Task 4. In how many ways can a patrol of three soldiers and one officer be formed if there are 80 soldiers and 3 officers? Solution. The soldiers can be chosen in C_80^3 ways and the officer in C_3^1 ways. Since any officer can go with each group of soldiers, there are C_80^3 · C_3^1 = 82160 · 3 = 246480 ways in all.

Task 5. Find …, if it is known that …. Solution. Since …, we get …; by the definition of a combination it follows that …. Thus ….

1.3. The concept of a random event. Types of events. Probability of an event

Any action, phenomenon or observation with several possible outcomes, realized under a given set of conditions, will be called a trial.
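The answers to Tasks 1–4 above can be reproduced in a few lines of Python (an illustrative sketch, not part of the original text):

```python
import math

# Task 1: schedule 3 of 16 subjects, order matters -> placements A(16, 3)
assert math.perm(16, 3) == 16 * 15 * 14 == 3360

# Task 2: choose 10 objects out of 15, order irrelevant -> C(15, 10);
# the symmetry property C(m, n) = C(m, m - n) makes the count easier by hand
assert math.comb(15, 10) == math.comb(15, 5) == 3003

# Task 3: distribute places among 4 teams -> permutations P_4 = 4!
assert math.factorial(4) == 24

# Task 4: 3 soldiers of 80 and 1 officer of 3; multiply the independent choices
patrols = math.comb(80, 3) * math.comb(3, 1)
print(patrols)  # 246480
```

Task 4 illustrates the multiplication rule: every choice of the soldiers can be paired with every choice of the officer, so the counts multiply.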
The result of this action or observation is called an event. If an event may or may not occur under the given conditions, it is called random. An event that is certain to occur is called certain, and one that obviously cannot occur is called impossible. Events are called incompatible if only one of them can occur in each trial. Events are called joint if, under the given conditions, the occurrence of one of them does not exclude the occurrence of another in the same trial. Events are called opposite if, under the conditions of the trial, they are incompatible and are the only possible outcomes. Events are usually denoted by capital Latin letters: A, B, C, D, …. A complete system of events A_1, A_2, A_3, …, A_n is a set of incompatible events, at least one of which must occur in the given trial. If a complete system consists of two incompatible events, such events are called opposite and are denoted A and Ā.

Example. A box contains 30 numbered balls. Determine which of the following events are impossible, certain, or opposite: a numbered ball is drawn (A); a ball with an even number is drawn (B); a ball with an odd number is drawn (C); a ball without a number is drawn (D). Which of them form a complete group? Solution. A is a certain event; D is an impossible event; B and C are opposite events. Complete groups are formed by A and D, and by B and C.

The probability of an event is regarded as a measure of the objective possibility of the occurrence of a random event.

1.4. The classical definition of probability

A number that expresses the measure of the objective possibility of the occurrence of an event is called the probability
of this event and is denoted by the symbol P(A). Definition. The probability of an event A is the ratio of the number m of outcomes favorable to the occurrence of the event A to the number n of all outcomes (incompatible, the only ones possible, and equally possible): P(A) = m / n. Therefore, to find the probability of an event one must, having considered the various outcomes of the trial, count all possible incompatible outcomes n, determine the number m of outcomes of interest, and calculate the ratio of m to n. The following properties follow from this definition.

1. The probability of any event is a non-negative number not exceeding one. Indeed, the number m of favorable outcomes satisfies 0 ≤ m ≤ n; dividing all parts by n, we get 0 ≤ P(A) ≤ 1.

2. The probability of a certain event equals one, since in that case m = n and P(A) = n / n = 1.

3. The probability of an impossible event is zero, since then m = 0 and P(A) = 0 / n = 0.

Problem 1. In a lottery of 1000 tickets there are 200 winning ones. One ticket is drawn at random. What is the probability that this ticket is a winning one? Solution. The total number of different outcomes is n = 1000; the number of outcomes favorable to winning is m = 200. By the formula, P(A) = 200 / 1000 = 0.2.

Problem 2. In a batch of 18 parts there are 4 defective ones. 5 parts are selected at random. Find the probability that two of these 5 parts are defective. Solution. The number of all equally possible independent outcomes n equals the number of combinations of 18 elements taken 5 at a time: n = C_18^5 = 8568. Let us count the number m of outcomes favorable to the event A. Among the 5 parts taken at random there must be 3 good ones and 2 defective ones. The number of ways to select two defective parts from the 4 defective ones equals the number of combinations of 4 taken 2 at a time: C_4^2 = 6. The number of ways to select three good parts from the 14 available good ones is C_14^3 = 364.
Any group of good parts can be combined with any group of defective parts, so the total number of favorable combinations is m = C_4^2 · C_14^3 = 6 · 364 = 2184. The required probability of the event A equals the ratio of the number m of outcomes favorable to this event to the number n of all equally possible independent outcomes: P(A) = 2184 / 8568 ≈ 0.255.

The sum of a finite number of events is the event consisting of the occurrence of at least one of them. The sum of two events is denoted by A + B, and the sum of n events by A_1 + A_2 + … + A_n.

The probability addition theorem. The probability of the sum of two incompatible events equals the sum of the probabilities of these events: P(A + B) = P(A) + P(B).

Corollary 1. If the events A_1, A_2, …, A_n form a complete system, then the sum of the probabilities of these events equals one.

Corollary 2. The sum of the probabilities of the opposite events A and Ā equals one: P(A) + P(Ā) = 1.

Problem 1. There are 100 lottery tickets. It is known that 5 tickets win 20,000 rubles each, 10 tickets win 15,000 rubles, 15 tickets win 10,000 rubles, 25 tickets win 2,000 rubles, and the rest win nothing. Find the probability that a purchased ticket wins at least 10,000 rubles. Solution. Let A, B and C be the events that the purchased ticket wins 20,000, 15,000 and 10,000 rubles respectively. Since the events A, B and C are incompatible, P(A + B + C) = P(A) + P(B) + P(C) = 0.05 + 0.10 + 0.15 = 0.30.

Task 2. The correspondence department of a technical school receives tests in mathematics from the cities A, B and C. The probability of receiving a test from city A is 0.6, and from city B, 0.1. Find the probability that the next test comes from city C. Solution. Since these three events form a complete system, P(C) = 1 − (0.6 + 0.1) = 0.3.

For second-year students of all specialties. Department of Higher Mathematics. Dear students! We bring to your attention a review (introductory) lecture by Professor N.Sh. Kremer on the discipline “Probability Theory and Mathematical Statistics” for second-year students of VZFEI.
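The classical-probability computations of Sec. 1.4 above (Problem 2 with the defective parts, and the lottery problem for the addition theorem) can be checked with a short Python sketch; this is an illustration added here, not part of the original text:

```python
import math
from fractions import Fraction

# Problem 2: 18 parts, 4 defective; 5 drawn; P(exactly 2 defective).
# Classical definition P(A) = m / n with combinations counting the outcomes.
n = math.comb(18, 5)                    # all equally possible outcomes: 8568
m = math.comb(4, 2) * math.comb(14, 3)  # 2 of 4 defective times 3 of 14 good
p = Fraction(m, n)                      # exact value 2184/8568
print(round(float(p), 4))  # 0.2549

# Addition theorem: winnings of 20000, 15000, 10000 rubles are incompatible,
# so P(at least 10000) is the sum of the individual probabilities.
p_win = Fraction(5, 100) + Fraction(10, 100) + Fraction(15, 100)
assert p_win == Fraction(3, 10)
```

Using `Fraction` keeps the ratio m / n exact, which matches the classical definition of probability as a ratio of outcome counts.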
The lecture discusses the aims of studying probability theory and mathematical statistics at an economics university and the place of this discipline in the training of a modern economist; it considers the organization of independent student work using the computer-based training system (KOPR) and traditional textbooks, gives an overview of the main provisions of the course, and offers methodological recommendations for its study.

Among the mathematical disciplines studied at an economics university, probability theory and mathematical statistics occupy a special position. First, they form the theoretical basis of the statistical disciplines. Second, the methods of probability theory and mathematical statistics are used directly in studying mass aggregates of observed phenomena, in processing the results of observations, and in identifying the regularities of random phenomena. Finally, probability theory and mathematical statistics have important methodological significance in the cognitive process, in identifying the general regularities of the processes under study, and serve as a logical basis for inductive-deductive reasoning.

Every second-year student must have the following set (case) for the discipline “Probability Theory and Mathematical Statistics”: 1.
An overview orientation lecture on the discipline. 2. The textbook: N.Sh. Kremer, “Probability Theory and Mathematical Statistics”, Moscow: UNITY-DANA, 2007 (hereinafter simply the “textbook”). 3. The educational and methodological manual “Probability Theory and Mathematical Statistics”, ed. N.Sh. Kremer, Moscow: University Textbook, 2005 (hereinafter the “manual”). 4. The computer training program KOPR for the discipline (hereinafter the “computer program”). On the institute’s website, on the “Corporate Resources” page, online versions of the KOPR2 computer program, the overview orientation lecture and an electronic version of the manual are posted. In addition, the computer program and the manual are provided on CD-ROMs for second-year students.
Therefore, in “paper form” the student needs to have only the textbook. Let us explain the purpose of each of the educational materials in this set (case). The textbook presents the main provisions of the educational material of the discipline, illustrated by a fairly large number of solved problems. The manual gives methodological recommendations for independent study of the material, highlights the most important concepts of the course and the typical problems, and provides self-test questions for the discipline, the variants of the home tests that the student must complete, and methodological instructions for carrying them out. The computer program is designed to give you maximum assistance in mastering the course through a dialogue between the program and the student, compensating as far as possible for the lack of classroom instruction and contact with the teacher. For a student in the distance learning system, the organization of independent work is of primary and decisive importance. When starting to study this discipline, read this overview (introductory) lecture to the end. This will give you a general idea of the basic concepts and methods used in the course “Probability Theory and Mathematical Statistics” and of the requirements for the level of training of VZFEI students. Before studying each topic, read the methodological guidelines for that topic in the manual. There you will find the list of educational questions of the topic to be studied, and you will learn which concepts, definitions, theorems and problems are the most important ones to be studied and mastered first. Then proceed to study the basic educational material in the textbook in accordance with the methodological recommendations received.
We advise you to keep notes in a separate notebook of the main definitions, statements of theorems, outlines of their proofs, formulas, and solutions of typical problems. It is advisable to write out the formulas in special tables for each part of the course: probability theory and mathematical statistics. Regular use of the notes, in particular the tables of formulas, promotes their memorization. Only after working through the basic educational material of a topic in the textbook should you move on to studying that topic with the computer training program (KOPR2). Note the structure of the computer program for each topic. After the name of the topic comes the list of the main educational questions of the topic in the textbook, with the numbers of the paragraphs and pages to be studied. (Recall that the list of these questions for each topic is also given in the manual.) Then reference material on the topic (or on individual paragraphs of it) is given in brief form: basic definitions, theorems, properties and characteristics, formulas, and so on. While studying a topic, you can also display on the screen those fragments of the reference material (on this or previous topics) that are needed at the moment. You are then offered training material and, in particular, typical problems (examples) whose solution is considered in a dialogue between the program and the student. In a number of examples the program simply displays the stages of the correct solution on the screen at the student’s request. In most examples, however, you will be asked questions of one kind or another in the course of the solution. Some questions require a numerical answer to be entered from the keyboard; for others, the correct answer (or answers) must be chosen from several offered.
Depending on the answer you enter, the program either confirms that it is correct or suggests that, after reading a hint containing the necessary theoretical propositions, you try again to give the correct solution and answer. Many problems limit the number of solution attempts (if the limit is exceeded, the correct course of the solution is displayed on the screen). There are also examples in which the amount of information in the hint grows as unsuccessful attempts are repeated. After familiarizing yourself with the theoretical propositions of the educational material and with the examples provided with a detailed analysis of the solution, you must complete the self-control exercises in order to consolidate your skills in solving typical problems on each topic. The self-control tasks also contain elements of dialogue with the student: after completing a solution you can look at the correct answer and compare it with your own. At the end of the work on each topic you should complete the control tasks. The correct answers to them are not shown to you; your answers are recorded on the computer’s hard drive for subsequent review by the teacher-consultant (tutor). After studying topics 1–7 you must complete home test No. 3, and after studying topics 8–11, home test No. 4. The variants of these tests are given in the manual (and in its electronic version). The number of the variant to be done must match the last digit of your personal file number (grade book, student card). For each test you must pass an interview, during which you must demonstrate the ability to solve problems and knowledge of the basic concepts (definitions, theorems without proof, formulas, etc.) on the topic of the test. The study of the discipline ends with a course examination. Probability theory is a mathematical science that studies the regularities of random phenomena.
The discipline offered for study consists of two sections: “Probability Theory” and “Mathematical Statistics”.
Many things are incomprehensible to us not because our concepts are weak, but because these things are not included in the range of our concepts.

Kozma Prutkov

Theory of Probability and Mathematical Statistics
Introductory part