BNC Text HYT

Nottingham University Economics Department: lecture. Sample containing about 5783 words speech recorded in educational context


4 speakers recorded by respondent number C456

PS3KH X u (No name, age unknown, lecturer) unspecified
HYTPS000 X u (No name, age unknown) unspecified
HYTPSUNK (respondent W0000) X u (Unknown speaker, age unknown) other
HYTPSUGP (respondent W000M) X u (Group of unknown speakers, age unknown) other

1 recording

  1. Tape 109002 recorded on 1993-12-14. Location: Nottinghamshire: Nottingham (classroom) Activity: lecture

Undivided text

(PS3KH) [1] Right okay log on to the network, get into Microfit, call up Q M four FIT and say data [...] last of the [...] finished testing for structural change and then we'll move on to diagnostics ...
Unknown speaker (HYTPSUNK) [2] [...] person is it? ...
(PS3KH) [3] The windows [...] a bit
Unknown speaker (HYTPSUNK) [4] A little bit
(PS3KH) [laugh]
Unknown speaker (HYTPSUNK) [5] [...] coming straight at me ...
Unknown speaker (HYTPSUNK) [6] [cough] . ...
(PS3KH) [7] Thanks very much
Unknown speaker (HYTPSUNK) [8] It's not working, is that right?
Unknown speaker (HYTPSUNK) [...]
(PS3KH) [9] [...] ... [...] sometimes it gets overloaded, when everybody accesses the same
Unknown speaker (HYTPSUNK) [10] Oh right.
(PS3KH) [11] data file ... sorry but I'm using [...] windows
Unknown speaker (HYTPSUNK) [12] Oh I beg your pardon.
(PS3KH) [13] Right, if you er if you get access to data erm go into the data processing environment, log to the data, right so log T C I M P can you remember this as textile data, this is textile consumption in the U S, right and we are explaining it in terms of consumer income and the relative price of textiles ...
Unknown speaker (HYTPSUNK) [14] What are we doing?
[15] I've got this far
(PS3KH) [16] Right okay can you go into the, just log the data ... [cough] ... okay last week we were looking at tests for structural change and we said that the Chow test is the most commonly used test for structural change ... in actual fact Chow developed two tests erm, of parameter constancy, I E structural change ... the first one is where you remember what the, the principle behind the Chow test that you split the whole sample into two sub periods, right and you see whether the, the sum of the res residual sums of squares from each sub sample, right, is significantly different from the residual sum of squares from a single estimation over the who whole sample period, right [cough] if they are significantly different that suggests that the parameters that are estimated over the full er sample period, right, aren't as good estimates as the unrestricted estimates when we are allowing two different sets of parameters just to be estimated.
[17] Right, what [clears throat] what we'll do is I mean we can construct the Chow test looking at the residual sum of squares er from each of these, er, the regressions on sub samples, comparing them with the residual sum of squares on a regression over the whole sample and the computer will actually do it for us.
[18] Right, so it's one I want to, I didn't get time to do last week was to tell you where you specify that you want to perform a Chow test, right, and the computer will generate both of ... both of Chow's tests with [...] one, the first one is where we've got enough observations in each sub sample, right, to estimate the regression, right, however, you may, you may detect and figure there is some structural change right at the end of your er sample of observations or alternatively right at the beginning.
[19] Now in those cases we can't use the normal Chow tests we've got more parameters to estimate than we have observations, right, as a result Chow developed a second test, right, from structural change where we don't need er erm to estimate essentially the regression in the sub sample which has got very few obser observations but in that Chow, that Chow second test is often called a test of predictive failure, right, Microfit will [...] calculate both of those tests ... and bear in mind I mean that we're spending a lot of time on er parameter constancy, we must bear in mind that parameter constancy is vitally important if we are going to make these inferences possible [...] be about policy making on the basis of our estimates.
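Chow's first test as the lecturer describes it — comparing the pooled residual sum of squares with the sum of the residual sums of squares from the two sub-samples — can be sketched in Python. This is a minimal illustration on simulated data; the function and variable names are illustrative, not from Microfit:

```python
import numpy as np
from scipy import stats

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

def chow_test(y, X, split):
    """Chow's first test: is the pooled RSS significantly larger than
    the sum of the sub-sample RSSs?  F-distributed with (k, n - 2k) df."""
    n, k = X.shape
    rss_pooled = rss(y, X)
    rss_sub = rss(y[:split], X[:split]) + rss(y[split:], X[split:])
    f = ((rss_pooled - rss_sub) / k) / (rss_sub / (n - 2 * k))
    p = stats.f.sf(f, k, n - 2 * k)
    return f, p

# Simulated series with a deliberate structural break half way through
rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = np.where(np.arange(n) < 20, 1.0 + 2.0 * x, 5.0 - 1.0 * x)
y = y + rng.normal(scale=0.5, size=n)
f, p = chow_test(y, X, split=20)   # large F, tiny p: reject constancy
```

A large F here means the restricted (single-parameter-set) regression fits much worse than the two unrestricted sub-sample regressions, which is exactly the comparison described above.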
[20] Now if our estimates, say for the marginal [...] to consume or the income elasticity of demand, right if our estimates are based on a regression in which the real or the underlying marginal propensity to consume or income elasticity of demand is varying from plus four to minus two, you know, our point estimates [...] generated from, from regression analysis are going to be completely meaningless.
[21] We want to have some degree of confidence in that er the parameters that we estimate right have remained relatively constant over our sample period.
[22] If they haven't remained constant over our sampling period, right, then there's no point in making out of sample predictions, alright, we've got to have at least the confidence that our model is re relatively stable over our whole sample, right, in order to make any sort of predictions about the behaviour of the dependent variable that we are looking at and the parameter of interest ... out of sample ... more often than not when we have parameter instability that doesn't always signal a change in government policy, it often signals the fact that you've got a very poor model, a model that er is mis-specified and so if we detect a structural change in our model, we first of all try and explain why it may come about
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [23] was there a major change in government policy, why was, why did consumer behaviour change at this period, right, if we don't have any erm er justification for changing behaviour it probably means that our original model is mis-specified, we've got the wrong variables explaining erm the variable of interest or dependent variable.
[24] So parameter constancy is a necessary condition, right, for good applied econometric work, if you don't have parameter constancy, look at another model basically, right.
[25] Right so what we'll do is I'll show you how we can compute Chow tests in Microfit if you come out of the data processing environment type Q erm and move to the action menu I guess [...] do that linear regression.
Unknown speaker (HYTPSUNK) [...]
(PS3KH) [26] Ah yes we will yeah, thanks very much.
[27] Yeah well sorry if you come out of the action menu, you will need to create the constant so you press the escape key erm then move back to er the process plot edit save option.
[28] Right that will take you back to the data processing menu, then you, then you just create a constant ... call it whatever you like.
[29] ... Now once you've created a constant go into the er estimated test forecast option in the action menu, right specify your equation L N T C space L N I space L N P a constant whatever you called it, right then press the end key.
[30] Now when it asks you for the sample statement, ah done it, right before, it will ask you for the sample statement er over what period would you like to estimate this equation, right instead of pressing the return key which gives you the default, right, if you specify nineteen twenty three to nineteen forty ... sorry just a [...] a dash between it like that nineteen twenty three space nineteen forty ... okay it asks you for the number of observations to be used in the structural stability tests, right, erm if you er press the default er if you press the return key then it should give you the maximum number available right five observations in this case right and then it will perform the [...] regression over the entire sample period, those will be the results you obtain, right, it will also er present Chow erm test statistics.
[31] Right so if you go er into [...] squared and estimate the equation, right, it has actually estimated over the full sample period ... has it, no it hasn't [...] no alright it hasn't estimated so that's an equation estimated over the first sum, first sub sample, right, if you press the return key again ... right ... right, at the bottom in [...] the table of diagnostic tests you'll see Chow's predictive failure test, right and also erm the Chow test.
[32] Right so there, the er Chow test statistic F, right, is Chow's first, first test ... right and er E is Chow's second test which is often called predictive failure.
[33] ... Now if you look at the test statistics there ... there are two versions of test statistics, one is an asymptotic version, right, assuming that we've got an incredibly large sample and one is the small sample, that's the, that's the L M version.
[34] Now we've got the small sample version which is an F distribution.
[35] Now what I recommend is that you always use the F version of any of these diagnostic test statistics and we can go on to look at the others erm in a moment.
[36] Always use the F statistics unless there are circumstances in which you can't, right if the computer doesn't generate an F statistic, then you will just have to use the L M version.
[37] ... The reason why it's better to use the F statistic, is that the F statistic has much greater power, right, on the small samples.
[38] Both statistics are equally powerful in large samples but by and large you will always be using a small sample, so use the test statistic which is designed for small sample work, that's the F statistic.
[39] ... Right, what I want to do now is just hand out, before you press any other keys, sort of hand out some ... some sheets on er critical values.
[40] Microfit does help by and large by computing er the probability value at which er the test statistics are statistically significantly different from zero.
[41] Ho however that won't be the case for all erm computer programs we use, right, as a result we will need to know how to use er T tables [...] tables and [...] tables.
[42] Hopefully, you are all erm familiar with these but just in case you are not ... right, let's just er run through them.
[43] Could you look at the distribution of T ... okay.
[44] Let's just say that we are forming a T test on an estimated coefficient and we want to know whether T ratio will be generated on the computer is significantly different from zero.
[45] Now let's say that we've got a sample of erm thirty observations ... right.
[46] Now we want to know and say we generate a statistic of two point five a T ratio of two point five.
[47] Is that significantly different from zero ... right, well you just go down [cough] the right hand column in degrees of freedom until we reach thirty.
[48] In actual fact degrees of freedom is N minus K.
[49] Right N is the sample size and K is the number of parameters that you've estimated in your model.
[50] Right, so say that we have thirty three observations, right we've got three parameters in this particular model, right, therefore degrees of freedom will be thirty, right, and the critical values run across the rows alright.
[51] Now the more certain that we want to be about a particular inference, right, the smaller is the significance level.
[52] Right, so if we want to be er ninety percent certain about inference that corresponds with ten percent significance level and our critical value there is one point seven zero.
[53] Right so if we had a test statistic greater than one point seven zero on a T ratio.
[54] Greater than one point seven zero, then we could refute the null hypothesis.
[55] Right, that the coefficient was zero.
[56] As we increase our confidence, right, so if we are ninety five percent confident we are now looking at the five percent significance level, the T rat er the critical value rises to two point zero four two and if you want to be even more confident, to be ninety nine percent confident about our inference, you look at the one percent level, right, and that has a T ratio of two point seven five.
[57] Right so if we had a T ratio of two point five, right, we could reject the null hypothesis of the five percent level but we wouldn't be able to reject the null hypothesis at the one percent level.
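The three critical values the lecturer reads from the t table, and the conclusion for a t ratio of 2.5, can be reproduced directly. A minimal sketch using scipy (two-tailed test, thirty degrees of freedom):

```python
from scipy import stats

# Two-tailed critical values of Student's t with 30 degrees of freedom,
# at the 10%, 5% and 1% significance levels worked through in the lecture.
df = 30
crit = {alpha: stats.t.ppf(1 - alpha / 2, df) for alpha in (0.10, 0.05, 0.01)}
# crit[0.10] ≈ 1.697, crit[0.05] ≈ 2.042, crit[0.01] ≈ 2.750

t_ratio = 2.5
rejected_at_5 = t_ratio > crit[0.05]   # True: reject the null at 5%
rejected_at_1 = t_ratio > crit[0.01]   # False: cannot reject at 1%
```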
Unknown speaker (HYTPSUNK) [...]
(PS3KH) [58] This is a very common problem in er ... statistical inference and th or which ... which significance level do we choose?
[59] And there's no right and wrong answer to that.
[60] The convention is that people use the five and ten percent levels, right, but bear in mind that the more confident you want to, to be about an inference, I E the smaller the significance level, right, the, the lower is the power of the test.
[61] This is why we don't test at the ninety nine point nine nine nine nine nine percent confidence level, because the smaller the probability of making a type one error [...] essentially er [...] the significance level, right, the larger the probability of making a type two error, right, and the type two error determines the power of the test.
[62] Right, so normally we, you know, we want to be reasonably confident, right, so we want to have a reasonably small significance level ... we don't want that significance level to be too small, otherwise the power of the test will diminish very rapidly ... [...] so we normally use the ten or the five percent, five percent level and if you just look at the er the five percent column, right, over all realistic sample sizes, right, from a hundred and twenty down to, to about twenty, those critical values are all about two and that's why we say if you have a T ratio of greater than about two, then you can be at least ninety five percent confident about your inference.
[63] Right, they don't change much as a result of the degrees of freedom er adjustment ... okay with er [...] squared distribution yes, sort of different distribution but we interpret the tables in exactly the same way so if you just have a look on your screen erm, there's an L M version of the serial correlation test, right, and that has a [...] squared distribution one ... right and the test statistic we obtain on your screens is calculated in two point eight eight, no two point zero eight.
[64] Alright, is that significantly different from zero?  So you could go to the chi squared tables, look at the degrees of freedom, which is one, right.
[65] At the five percent level the critical value of chi squared is three point eight.
[66] Right, so if we had a test statistic greater than three point eight we would reject our null hypothesis.
[67] Right, in this particular case no serial correlation is our null hypothesis.
[68] Right so [clears throat] again large test statistics whether they're Ts, Fs or chi squareds.
[69] Large test statistics mean rejection of the null ... right erm there's no simple rule of thumb with chi squared you just have to look at the erm actual tables to find out what the critical values are.
[70] However, th er Microfit not only gives you the test statistic but gives you erm the significance level, probability value er at which erm that test statistic is significantly different from zero ... right so if we are looking at that serial correlation test statistic of two point zero eight right we would accept the null hypothesis of er no serial correlation, right, or wouldn't be able to reject it strictly.
[71] Wouldn't be able to reject the null of no serial correlation, right, until we reach fifteen percent significance level.
[72] So if our, if we wanted to be eighty five percent confident about our inferences, right, we would reject that null hypothesis of no serial correlation, right, if we wanted to be ninety five percent confident about our inferences we would accept the null of no serial correlation in tha in that case.
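The chi-squared arithmetic being described here can be checked directly; a sketch assuming, as in the lecture, an LM serial-correlation statistic of 2.08 with one degree of freedom:

```python
from scipy import stats

lm = 2.08                          # LM serial-correlation statistic
crit_5 = stats.chi2.ppf(0.95, 1)   # 5% critical value, about 3.84
p = stats.chi2.sf(lm, 1)           # p-value, about 0.15
accept_at_5 = lm < crit_5          # cannot reject "no serial correlation"
```

The p-value of roughly 0.15 matches the remark that the null would only start to be rejected at about the fifteen percent significance level.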
[73] Okay erm, [clears throat] if we just turn over the sheet look at distribution of er of the F statistic.
[74] In the F test we have two measures of the degrees of freedom, right, you need to have the degrees of freedom in the numerator N one and in denominator N two.
[75] Right, the degrees of freedom ... in the numerator just denote the number of restrictions that you are actually making the test.
[76] Right so if you look at erm ... the Chow test at the bottom of your screens, right the F test, right is an F three in seventeen test.
[77] Three in the numerator denotes that we are making three restrictions ... [...] to be restricting our parameters of three parameters in this particular model, constant, coefficient on, log of prices and log of [...] you are restricting those er at a zero [clears throat] when we estimate over the entire, over the ent the entire sample ... okay and yeah
(HYTPS000) [78] Do you have to say an N two [...] would you round up or round down to be sure?
(PS3KH) [79] Then you just er interpolate so erm if you are looking at an F one fifty test this, which we've got tables here that give F one thirty and F one forty, oh sorry F one forty and F one sixty just interpolate so the critical value would be nought point nought five, sorry, four point nought five, right, so you just average the difference there.
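The interpolation the lecturer suggests for a denominator degrees of freedom that falls between two table rows, say F(1, 50) from the tabulated F(1, 40) and F(1, 60), can be sketched as:

```python
from scipy import stats

# 5% critical values from the F-table rows either side of 50
c40 = stats.f.ppf(0.95, 1, 40)    # about 4.08
c60 = stats.f.ppf(0.95, 1, 60)    # about 4.00
approx = (c40 + c60) / 2          # interpolated value, about 4.04
exact = stats.f.ppf(0.95, 1, 50)  # about 4.03
```

Averaging the two neighbouring rows gets within about 0.01 of the exact critical value, which is ample precision for a five percent test.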
[80] ... Right [clears throat] so if we're erm want to compare an F statistic to see whether it's significant, right, you just go if it's a F three seventeen
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [81] as in this case we go down the three column, the three is the [...] in the numerator N one, right, till we reach seventeen, right, and the critical value is three point two ... right, so if we
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [82] our Chow test here indicates strongly that we have structural change because we've got a test statistic of twenty two, right, far away from three point two, our critical value.
[83] When you are actually sort of writing er, say if you are, when you are doing your project or doing your [...] work [...] it's not just sufficient to say a T ratio is greater than two, therefore, it's statistically significant, you must calculate the er correct critical value, right, for th for each T ratio and also if you are looking at any diagnostics or looking at the significance of the regression which is an F statistic ... right you must give the five percent or ten percent whichever you choose.
[84] You must give the exact critical values, right, those critical values that you are using to determine whether the regression is significant, whether you have serial correlation or not erm so they're very very important and they ought to be included erm because otherwise we don't know whether a test statistic is er statistically significant or not.
[85] Right okay so when you're performing test for structural change, right, if you just erm go through the simple estimation routine as we've done ... the way you think.
[86] If you don't know a priori where the break will come, right, you can get a handle on where the break in the series may come by looking at a rolling regression like you did last week.
[87] That will give you er a good idea as to where the break comes.
[88] If you are not too bothered erm, if you don't know a priori where the break comes, you can just split the sample size in half and just estimate erm an equation for each ... well, you won't, but the computer will, if you just specify half the sample size, right, and when it asks for the number of observations [...] failure or Chow tests you just press the return key and it will use all the remaining observations, right, but when you are doing the empirical work you should always test for structural stability, right, and er either of Chow's tests will, will suffice, right, but if you've got [clears throat] a very small sub sample where there are fewer observations than there are parameters to be estimated you will have to use Chow's second test [...] failure.
[89] Although it does have lower, lower power than his first test, right, but if you can't calculate his first test then it's the best thing to use.
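Chow's second test, the predictive failure test just mentioned, can be sketched in the same style: fit the model on the first n1 observations and ask whether it predicts the remaining n2. This uses the standard predictive-failure formula on hypothetical simulated data:

```python
import numpy as np
from scipy import stats

def predictive_failure_test(y, X, n1):
    """Chow's second (predictive failure) test:
    F = ((RSS_full - RSS_1) / n2) / (RSS_1 / (n1 - k)),
    usable even when the second sub-sample has fewer than k observations."""
    def rss(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        return e @ e
    n, k = X.shape
    n2 = n - n1
    rss_1 = rss(y[:n1], X[:n1])
    f = ((rss(y, X) - rss_1) / n2) / (rss_1 / (n1 - k))
    return f, stats.f.sf(f, n2, n1 - k)

# Stable model for 25 observations, then a level shift in the last 5
rng = np.random.default_rng(1)
x = rng.normal(size=30)
X = np.column_stack([np.ones(30), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=30)
y[25:] += 5.0                      # break right at the end of the sample
f, p = predictive_failure_test(y, X, n1=25)   # failure clearly detected
```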
[90] Right, okay, erm ... let's now move on to er these other diagnostics, right, like tests for structural change these diagnostic test statistics that are calculated for [...] right, because essentially what they're doing is that they're testing the assumptions on which ordinary least squares is based.
[91] Now if you violate any of the assumptions of ordinary least squares, right, then the procedure will produce or may produce misleading results, we can only be confident in statistical terms about ordinary least squares parameters, right, because we know and show in theory that they hold providing a number of assumptions are met, like we have serially uncorrelated errors, right, we don't have m multicollinearity amongst the regressors, right, we have constant variance throughout the sample ... now if any of those er assumptions are breached, violated then our, any statistical results that are generated from erm the technique that assumes that those assumptions haven't been breached erm are invalidated and we can get very misleading er parameter estimates, right, in the presence of auto correlation or multicollinearity erm [...] .
[92] Right, Microfit helps us in this regard and every regression that you estimate will always have a table of diagnostic test statistics after it, right, so although we are interested in the parameter values of our estira estimated [...] right, in order to have any confidence in those parameter values we must ensure that we haven't violated any of the assumptions
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [93] in which [...] is based, right.
[94] In Microfit, you've got tests for serial correlation, functional form, normality of the residuals, right, and heteroskedasticity, right, if you reduce the sample size at the beginning of the estimation period you will also get Chow tests in there as well.
[95] We can use the whole sample but just get those four ... erm four test statistics ... okay.
[96] Now if we just look at the er test statistics for this particular model, right, if we chose say the five percent significance level ... right, then we can see serial correlation, we've got a test statistic of two point zero eight, right, we wouldn't reject the null hypothesis there, the null is that we have no serial correlation, we have uncorrelated errors ... right, clearly we want uncorrelated errors, right, so we'd be quite happy with that particular test statistic, it doesn't exceed the er five percent critical value.
[97] So it doesn't look like this model exhibits serial correlation.
[98] Second is the test for functional form and that's the RESET, you know, we've logged our data here ... logging implies there's a multiplicative relationship between the variables expressed in absolute values, is that the case or is there a linear relationship or some other type of relationship there?
[99] So funct the functional form test will see erm will tell us whether we ought to possibly log the data or whether we ought to unlog the data and just do a regression in er er in absolute [...] levels as opposed to logs.
[100] Right so the functional form test, if we look at the chi squared version, right, again we've got a very small er test statistic implying there's no ... breach of functional form ... right, the, the log er specification, right, seems to be working okay, there's no problems with it ... erm if we now look at normality we've got a bit of a problem with normality, right in that our test statistic is now four point nine, if we look at the critical value at the five percent level of chi, the chi squared two, ah it's not too bad, our five percent critical value of the chi squared two is five point nine nine, so although that test statistic is reasonably high, I mean you'd probably reject, oh yes, we can reject the null at ten percent of normally distributed errors ... we wouldn't reject the null at five percent ... erm ... let's just have a look at in actual fact at those errors to see what the problem is.
[101] So if you erm press the return key ... er go into option three ... right and what we'll do is ... we'll, we'll plot erm plot the histogram of the residuals because what this test for normality is doing is seeing whether the residuals we get from our regression are normally distributed, right O L S assumes that they will be.
[102] Now, the reason why we're getting a fairly high test statistic is that er, that distribution, although it looks normal on the left hand side, it doesn't look particularly normal erm on the right and that we are missing some observations, we are missing some values of the residuals er in one area of the graph, nevertheless if we had a larger sample, right we probably erm, right it doesn't look, that looks quite encouraging in actual fact, those residuals do seem to be er normally distributed er what the test statistic is doing er it's saying [clears throat] it's performing a, it's a chi squared two test, it's making two restrictions, one of which is saying, is the distribution of these residuals symmetric er and also it's testing whether one of the tails is a lot larger or a lot longer than the other tail of the distribution and er the test statistic's fairly high but we wouldn't reject the null of normality at the five percent level so our test statistic is four point zero eight and the critical value is five point nine nine at the five percent significance level, so we've got reasonably er robust residuals.
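The chi-squared(2) normality test described here imposes two restrictions on the residuals: zero skewness and zero excess kurtosis. A Jarque-Bera-style sketch on simulated residuals (the data are illustrative, not the lecture's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
e = rng.normal(size=200)           # residuals that really are normal
n = e.size
s = stats.skew(e)                  # first restriction: skewness = 0
k = stats.kurtosis(e)              # second: excess kurtosis = 0
jb = n / 6 * (s ** 2 + k ** 2 / 4) # chi-squared(2) under normality
crit_5 = stats.chi2.ppf(0.95, 2)   # 5% critical value, about 5.99
```

With genuinely normal residuals the statistic will usually sit well below the 5.99 critical value, just as the 4.08 in the lecture does.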
[103] Right, if you just want to come out of there and we'll just have a look at the plot of the residuals, if you plot the residuals ... the test for serial correlation there ... well the test for serial correlation, right, will try and determine whether there's an auto regressive structure to those residuals and I think Steve was talking to you about er auto regressions, so what the computer is doing essentially, it is getting the residuals from the model ... right and it's regressing them ... right on the residuals in the previous period, right, and it's testing whether this parameter rho, right, is significantly different from zero ... right, now if this is, if rho is significantly different from zero, let's say it's nought point six, that implies the residuals in T are not independent of the residuals in T minus one.
[104] Right there's some correlation between the two, right, auto correlation of ... the residuals ... right, so where we don't have residual auto correlation which is the case here, you could actually save the residuals, perform an O L S and you wouldn't find the coefficient on residuals at T minus one [...] significant, essentially that's what these tests for serial correlation do, right, they, you can think of them as saving the residuals, running a, running this regression, right.
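The auxiliary regression being written down here — residuals regressed on their own lag, with rho as the slope — can be sketched as follows, on simulated residuals that genuinely are autocorrelated (rho = 0.6 is an assumed value, echoing the lecturer's example):

```python
import numpy as np

# Simulated residuals with first-order autocorrelation rho = 0.6
rng = np.random.default_rng(3)
n = 200
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()

# Regress e_t on e_{t-1}: the OLS slope is the estimate of rho
lagged, current = e[:-1], e[1:]
rho_hat = (lagged @ current) / (lagged @ lagged)
```

With independent residuals rho_hat would hover near zero; here it recovers something close to the true 0.6, which is the signature of serial correlation the test looks for.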
[105] Where we [...] this is first order serial correlation, you may want to specify erm, some second order [...] serial correlation in which case you'd be seeing whether the residuals in T were related to the residuals in T minus one and T minus two.
[106] Well if we specified our model correctly, right, then these residuals should be just white noise, they should just appear as random fluctuations with mean zero.
[107] Right, and if you look at those, they look pretty er pretty uncorrelated [...] .
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [108] Clearly there's no systematic structure in those residuals, right, if the residuals were moving in a cyclical manner erm that would imply that we are missing an important explanatory variable in our model and its systematic effect has been just thrown into the error term and as a result we are picking up that systematic effect in, in residuals.
[109] Okay press the er escape key ... and go back to the regression results, erm so return to the post regression menu and then display regression results again.
[110] [...] case of the diagnostic tests ... [...] this model looks reasonably okay, we haven't got erm significant serial correlation, we haven't breached [...] form, we have reasonably normally distributed residuals, right, the test for heteroskedasticity is just a test to see whether the residuals are growing over time, right, heteroskedasticity is where we have non constant, non constant variance ... of our, of our error term ... right, and very often you, you find that the variance of the residuals, something like that ... the residuals will look like that, I think, they're growing systematically over time, right, these are homoskedastic, right, and these are heteroskedastic, right, residuals and again we wouldn't want to have a model with heteroskedastic residuals, right, simply because that violates one of the assumptions on which the BLUE properties [...] are based.
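A growing-variance pattern like the one sketched on the board can be generated and checked with a simple Breusch-Pagan-style auxiliary regression — n times the R-squared from regressing the squared residuals on a trend. This is an illustrative sketch, not Microfit's exact test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
trend = np.arange(1, n + 1, dtype=float)
e = rng.normal(scale=0.1 * trend)        # error variance grows over time

# Auxiliary regression of squared residuals on a constant and the trend
Z = np.column_stack([np.ones(n), trend])
gamma, *_ = np.linalg.lstsq(Z, e ** 2, rcond=None)
u = e ** 2 - Z @ gamma
tss = np.sum((e ** 2 - np.mean(e ** 2)) ** 2)
r2 = 1 - (u @ u) / tss
lm = n * r2                              # chi-squared(1) under homoskedasticity
p = stats.chi2.sf(lm, 1)                 # reject constant variance
```

For homoskedastic residuals the squared residuals carry no trend, R-squared stays near zero and the statistic is small; here the growing variance drives it well past the chi-squared critical value.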
[111] Heteroskedasticity means erm ... erm we don't have biased estimates for our parameters, but we have estimates that don't have minimum variance and they won't be the most efficient estimates.
[112] [...] got a very low test statistic either in terms of chi squared or F right so we haven't breached [...] but the model is still a poor model because we don't have erm er stable parameters, right, so you would actually reject this model as it stands, alright, not because it didn't pass the diagnostic test statistics, but there is no point generating parameters w if they, if they are not constant and you're gonna say [...] elasticity of demand is [...] one point four six or really it it's ranged between minus three [...] plus, plus six over the sample, so your point estimate is meaningless ... so it must have constant parameters.
[113] What you might do, although I say you'd reject this particular model and that
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [114] this is a structural change, what you could do or the easiest thing to do would be just to incorporate a dummy variable, right, if you incorporated a dummy variable to a erm ... and to explain the [...] and to take out the effect of structural change, right, er if in that dummy inclusive model, right, all the diagnostic test statistics were okay, right, you would use, you would, therefore, use that, that particular model ... what we might do is just see if that is the case erm so if you come out of er diagnostics, move back to the data processing environment and generate a dummy variable ... right, so if you go into the erm data processing environment ... if it's in the er sort of process plot [...] option ... what we'll do is create a dummy variable, call it D and let D equal zero, press the return key and then edit ... D oh and if you just set erm observations for
Unknown speaker (HYTPSUNK) [cough]
(PS3KH) [115] nineteen thirty nine through to nineteen er forty five or whatever the end of the sample is to one, set them all to one
Unknown speaker (HYTPSUNK) [116] Is [...] forty.
(PS3KH) [117] Well thirty nine or forty, we'll all do it to thirty nine just to keep, so [...] all have the same results.
[118] I mean where we could dummy into the war here so the war starts in thirty nine so unless you've got good reason to believe that consumption wasn't affected until nineteen forty, we use a dummy for the whole war period ... once you've edited, once you've edited the variable you press the end key that saves the edit.
[119] All I do now is regress our model ... right, including just a, a cons a constant dummy, right, so if you specify your regression equation, just add D to the list of explanatory variables, right, so we are assuming that the effect on the textile consumption is simply just to move the demand function up, right, use the entire sample period ... amongst this estimation.
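The dummy-inclusive regression can be sketched in the same spirit. The numbers below are hypothetical stand-ins (the textile series itself isn't reproduced here): annual data 1923-1945 with a level shift from 1939 onward, captured by an intercept dummy:

```python
import numpy as np

# Annual observations 1923-1945 with a level shift from 1939, captured
# by an intercept dummy D (1 for 1939 onwards, 0 before).
years = np.arange(1923, 1946)
rng = np.random.default_rng(5)
x = rng.normal(size=years.size)          # stand-in for a log regressor
d = (years >= 1939).astype(float)        # the war dummy
y = 2.0 + 1.5 * x + 3.0 * d + rng.normal(scale=0.3, size=years.size)

X = np.column_stack([np.ones_like(d), x, d])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
const, slope, shift = beta               # shift should be close to 3.0
```

Adding D to the regressor list only moves the intercept in the dummy period, which is exactly the "shift the demand function up" assumption the lecturer makes.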
[120] You know that the parameters were non constant but hopefully they will be now constant, now that we've er included the dummy variable ... okay er so if you go to the diagnostic menu
Unknown speaker (HYTPSUNK) [...]
(PS3KH) [121] we've got a bit of a problem, right, the dummy variable that we've included erm er you might, you might get away with this amount.
[122] It seems that we now have serial correlation in our model as a result of including the dummy ... right, now we've got to test statistic, I mean always look at the F version of the test, right, er our F statistic of three point seven six is significantly different from zero, right, that leaves seven percent level, so a five percent test ... we probably accept that we didn't have any serial correlations and we just got there by the skin of our teeth on that particular test erm yes, you could probably get away with this one, functional forms fine, no problem there, hetero skilasticity right, are F statistic three point nine but significantly different from zero that's six percent level, right, so again we just scrape it if we were looking at the ninety five percent confidence [...] it wants to be five percent if you are using as five percent significance level.
[123] Right, we can just get there.
[124] So this model would be er, you can say it's reasonably robust, bear in mind though that those test statistics on serial correlation and er heteroskedasticity are very close to the critical values used.
[125] Had you chosen the ten percent significance level as your cut off point ... erm you would infer there is both serial correlation and heteroskedasticity in, in the model.
[126] Right, so, you know, econometrics is by no means a science and you can essentially get the computer to er tell the story that you want simply by choosing er critical values but nevertheless we'd probably be reasonably confident in our estimates, particularly if those estimates, we'll just go back to them, er the er display the regression results again, [...] display them again but those coefficients are elasticities, right, they accord with our a priori reasoning ... right, we put ah, unit elasticity, you may want to test the hypothesis that the elasticity on erm the income variable, that income elasticity is significantly different from one, just get the computer to generate a T ratio with the hypothesised value being one instead of zero, as in a normal T test, right, and that coefficient point nine five is sufficiently close to one, by looking at the standard error [...] to er [...] infer that it is an estimate of one ... you've got very inelastic er demand erm for the, the textiles, that coefficient, is that the time, I can't believe it if that's all, oh no that's eight, don't worry
Unknown speaker (HYTPSUNK) [127] Eight.
(PS3KH) [128] Yeah so it's point eight, so it is, you do have price, price elasticity demand for textiles is price inelastic.
[129] Right, and we may, we may expect that to be the case, you know, clothes don't have, textiles don't have many substitutes.
[130] As a result erm changes in demand are gonna be er fairly unresponsive to changes in price because you've got to use textiles in order to make clothes and everything else that you make textiles with.
[131] Right, okay I think that's probably er [...] about it we'll leave it there, if you come out of Microfit don't forget to tell it to log off the network.
[132] What's er Steve doing with you at the moment in, in lectures, has he started auto regressive models?
Unknown speaker (HYTPSUNK) [133] Yes.
(PS3KH) [134] Right ...