(Project 1a)
1. The nature of the problem:
This project involves finding out the relationship between frequency and density in the
quantitative measurement of vegetation. I chose this topic because it seemed to get at an issue I have wondered about from other angles, i.e. the relationship between the objective subject matter of scientific inquiry and the limitations of the methods most often used to know it.
One such method uses frequency symbols, which represent subjective assessments such as rare, occasional, and common, and result from making a complete list of species. The specific objection to frequency symbols is that their use attempts "to assess on one scale two largely independent variables"--density and cover--which is "an ideal probably impossible to attain." Density, the number of plants per unit area, and cover, the percent of total area covered, both influence the relative conspicuousness of a species to an observer, confounding perception of true frequency, a complex character that cannot be measured by density and cover alone. Not only is this relationship difficult to standardize and to communicate, but it is also subject to semantic confusion, because the highest grade of density and cover is often labeled "dominant," a term easily confused with dominance as "degree of influence exerted over other species of the community." While a species can be dominant in both senses, it is not necessarily so; thus, according to Greig-Smith, a new word is needed for the combination of density and cover so that "dominant" can be reserved for this active sense of influence exerted.
According to P. Greig-Smith, frequency symbols give inconsistent and inaccurate results due to "personal factors," which is to say, subjective differences in criteria that make it difficult to achieve objectivity. Studies have shown that as much as 25% deviation is due to subjective variables (Smith, 1944), and that the same observer can see the same subject matter differently at different times (Simpson, 1940). It stands to reason, then, that different observers will see differently even when looking at the same object at the same time. The consequences of this are twofold: first, it is difficult to put an absolute value on an individual's results, and second, it is difficult to compare results except broadly.
And yet, because it is easier to determine, frequency is often used despite its disadvantages, one of which--the need for a large area of study so as to correlate the results with habitat factors involving small areas--is minimized by the use of small quadrats. Accuracy can be increased further by increasing the number of random samples taken.
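The gain from taking more random samples can be sketched numerically. Treating each quadrat as an independent presence/absence trial, the standard error of a frequency estimate shrinks with the square root of the number of quadrats. The short Python sketch below is my own illustration, not part of the original study:

```python
import math

def frequency_standard_error(p, n):
    """Standard error of an estimated occupancy proportion p
    (frequency expressed as a fraction) based on n independent
    presence/absence quadrats: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# Quadrupling the number of quadrats halves the standard error:
se_25 = frequency_standard_error(0.5, 25)    # about 0.10
se_100 = frequency_standard_error(0.5, 100)  # about 0.05
```

Halving the error thus costs four times the field work, which is why the method's speed per quadrat matters so much.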
As P. Greig-Smith noted, one disadvantage of frequency symbols is that two different properties are assessed on the same scale without being discriminated. On the other hand, frequency does integrate these two important aspects of vegetation. If we could easily assess density and pattern, he says, we would not need frequency; but density is straightforward only where discrete units can be counted, and even then counting is very time-consuming. Likewise, describing pattern is "relatively laborious" compared with simple frequency estimates. If a pattern can be described mathematically, then frequency can be calculated, but this is difficult to do and the results are of little practical value. If the only pattern present is a random one, the relationship between frequency and density is easy to calculate; but again the results are of little use, and, more importantly, distributions are seldom random. Thus, it is often held that the loss of some information by this method is balanced by the speed of description it allows.
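For the one tractable case mentioned here--a purely random pattern--the frequency-density relationship can be written down directly. If individuals are scattered at random (a Poisson pattern), the chance that a quadrat holds at least one individual is 1 - e^(-m), where m is the mean number per quadrat. A minimal Python sketch of this standard Poisson result (the function name is mine):

```python
import math

def expected_frequency(mean_per_quadrat):
    """Expected percentage frequency for a randomly (Poisson)
    distributed species, where mean_per_quadrat is the average
    number of individuals per quadrat."""
    return 100.0 * (1.0 - math.exp(-mean_per_quadrat))

# A mean density of 1 plant per quadrat gives roughly 63% frequency.
```

As the text goes on to note, real distributions are seldom random, which is precisely why observed frequencies depart from this curve.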
The purpose of this experiment, then, is to clarify (in this writer's mind) the relationship between some of these subjective factors which can compromise the accuracy and validity of scientific data, and to understand how methods differ with regard to soundness of observation.
2. The site and methods:
The site chosen for this experiment was the U. W. Arboretum (a natural laboratory that is conveniently close to home). The site was chosen for no better or worse reason than its beauty.
This experiment uses the method of percentage frequency, which records each species according to its presence or absence in a sample unit--in this case, an 8-meter-square quadrat, divided into 1-, 2-, 4-, and 6-meter squares in order to measure local frequency and to compare scales.
The sampling area itself was chosen by tossing a stick into the air and taking its landing position as the top left corner of the uppermost unit.
Frequency was determined by examining each unit sample and recording each species as present or absent in it. The number of samples in which a species occurs, expressed as a percentage of the total, represents an estimate of the chance of the species occurring in any one sample.
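The computation just described is simple enough to state exactly. A hypothetical sketch (the function name and example values are mine, not from the report):

```python
def percentage_frequency(presence):
    """Percentage frequency from presence/absence records: presence is
    a list of booleans, one per sample unit (True = species recorded
    in that unit).  The result also estimates the chance of the
    species occurring in any one sample."""
    return 100.0 * sum(presence) / len(presence)

# A species present in 3 of 4 subunits scores 75.0
```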
In determining density, or the number of direct counts per sample area, it was sometimes difficult to distinguish one individual unit from another due to intermingling of foliage. Therefore, number per sample was determined for each subunit according to root frequency, in order to avoid the difficulty of assessing an exact relationship between cover and frequency.
3. The results:
The results of this experiment (as shown in Figure 1 in the attached Appendix) show that frequency, and sometimes density, is highly variable relative to the scale of observation.
Species a. showed 100% frequency at junction (1A, 3C), 25% frequency at (1A, 5E), and only 11% frequency at (1A, 7G), but was moving back up to 18.75% by (1A, 9I). Inversely, species c. showed only 25% at (1A, 3C), was up to 68.75% by (1A, 5E), and was back down again to 48.5% at (1A, 9I). By comparison, species b. was at 75% frequency at (1A, 3C), up to 87.5% at (1A, 5E), and stayed about level, from 86% to 82.8%, at (1A, 7G) and (1A, 9I) respectively.
Meanwhile, density seems likewise variable, though not as orderly. Density of species a. began at 14 units per sample at (1A, 3C), dropped to 3.5 u.p.s. by (1A, 5E), 2.7 u.p.s. at (1A, 7G), and 3 u.p.s. at (1A, 9I). For species b., density dropped from 5.75 u.p.s. at (1A, 3C) to 3.25 u.p.s. at (1A, 5E), and remained steady at 1.1 u.p.s. through (1A, 7G) and (1A, 9I).
Thus, species a. went from high frequency and high density in the smallest sample measured, to low frequency and low density as larger samples were taken into account. And then, while frequency began to rise again, density stayed low, relative to where it began, into the largest sample perspective taken.
On the other hand, species b. went from high frequency and high density in the small frame of reference, to high frequency and lower density at center. And then, while frequency stayed high throughout, density dropped and stayed low.
Meanwhile, species c., which began with low frequency and very low density, grew to high frequency, even while it stayed at relatively low density.
Essentially, what this all looked like to the scientifically uninitiated eye was that the pretty little yellow flowers were clumped everywhere into groups, two of which touched on the corners of our quadrat, while soft green plants wandered among the white ones, which were fairly evenly distributed.
The results of this experiment indicate that frequency and density are variable relative to the scale from which they are viewed.
This is, of course, what P. Greig-Smith indicated could happen, because frequency depends on both number and pattern of distribution. Density can stay the same while frequency rises and falls, and vice versa, because frequency changes as the pattern of distribution changes. It is important to note the biological implications of the occurrence of non-random distributions, i.e. the meaning which orders the mechanisms. Pattern is a long-ignored variable. In a random distribution, the probability of finding an individual at any given point is the same as at all other points, for one individual does not change the probability of another being nearby. In a uniform distribution, the probability of finding an individual rises at the corners of an imaginary grid, indicating that one individual lowers the chances of another being nearby. And in clumped distributions, one individual raises the probability of another being near. Thus, while the absolute numbers stay the same, a uniform distribution of individuals registers as a high frequency, while a grouped pattern is counted as a low frequency.
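This effect--equal numbers, unequal frequencies--can be demonstrated with a small simulation (entirely my own illustration; the grid size, counts, and random seed are arbitrary assumptions). One hundred points are laid down twice over a 10 x 10 m plot scored in 1 m cells: once scattered at random, once packed into five tight clumps. Density is identical, but the clumped pattern occupies far fewer cells, so its frequency is much lower.

```python
import random

def frequency_on_grid(points, side=10):
    """Percent of the 1 x 1 cells in a side x side plot that contain
    at least one point; points are (x, y) pairs in [0, side)."""
    occupied = {(int(x), int(y)) for x, y in points}
    return 100.0 * len(occupied) / (side * side)

rng = random.Random(1)

# Random pattern: 100 individuals scattered independently.
scattered = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(100)]

# Clumped pattern: the same 100 individuals packed into 5 tight clumps.
centres = [(rng.uniform(1, 9), rng.uniform(1, 9)) for _ in range(5)]
clumped = [(cx + rng.uniform(-0.4, 0.4), cy + rng.uniform(-0.4, 0.4))
           for cx, cy in centres for _ in range(20)]

# Same density, very different percentage frequency.
```

Five clumps less than a meter across can touch at most a handful of cells, so the clumped frequency cannot exceed 20% here, while the scattered pattern typically occupies well over half the cells.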
This shows, I think, how dependent frequency is upon the size of the measuring unit itself. The unit has to be either extensive enough or fine-grained enough to take in the governing pattern. When one changes the scale of the sampling unit, new patterns emerge.
I see now what Greig-Smith meant when he said that this percentage frequency method, unlike absolute measures, is easier to determine than density and cover, but that its meaning is not clear-cut. Because an "increase in size of sampling area will necessarily result in an increase in the chance of a species occurring in any particular sample...frequency value has meaning only in relation to the particular size and shape of sampling area used..." and "has meaning only when coupled with a statement of the method used." All other things are not equal; what changes is the scale of measurement.
Thus, the striking thing about frequency is that we can watch it change with the scale of measurement used. If one goes smaller in one's scale of observation, then what seemed low frequency in the larger square suddenly gets higher. And vice versa: if one goes larger, as in our experiment, then frequency goes lower whether density does or not, up until the point where the sampling unit is large enough to take in other wrinkles in the pattern.
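This scale dependence is easy to reproduce. In the sketch below (a hypothetical illustration; the plant count and seed are arbitrary), the same set of plants on an 8 x 8 m plot is scored with 1 m, 2 m, and 4 m quadrats. Because every occupied small quadrat lies inside an occupied larger one, percentage frequency can only rise as the quadrat grows.

```python
import random

def frequency_at_scale(points, extent, cell):
    """Percent of the cell x cell quadrats tiling an extent x extent
    plot that contain at least one point."""
    per_side = int(extent / cell)
    occupied = {(int(x / cell), int(y / cell)) for x, y in points}
    return 100.0 * len(occupied) / (per_side * per_side)

rng = random.Random(7)
plants = [(rng.uniform(0, 8), rng.uniform(0, 8)) for _ in range(30)]

# The same plants, scored at three quadrat sizes.
f1 = frequency_at_scale(plants, 8, 1)   # 64 quadrats of 1 m
f2 = frequency_at_scale(plants, 8, 2)   # 16 quadrats of 2 m
f4 = frequency_at_scale(plants, 8, 4)   #  4 quadrats of 4 m
```

This matches Greig-Smith's point quoted above: a frequency value has meaning only relative to the size and shape of the sampling unit used.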
"We tend to study (landscapes) at conveniently human scales," Allen and Hoekstra write.
"There are, however, small and large scales at which we can profitably study landscapes ...there is a remarkable unity to the landscape criterion...many of the patterns at the scale of an unaided human experience ...are remarkably universal ...many processes ...recur at scales from the landscape of a leaf surface as seen by a mite, all the way up to remotely sensed images of continents ..."
"Although Greig-Smith and Curtis aim to quantify the community relationships on the
ground, they are the first to say that it is extremely difficult, fraught with ambiguity and arbitrary decisions of measurement. Our definition (of community) appears to solve some of the dilemmas
recognized by other authors, and it does this by categorically refusing to put community on the landscape as simple patches."
4. The meaning:
The lesson, I think, is that in order to see the relationship between the density of a phenomenon and its frequency, we have to look at it on a sliding scale, a measure by which otherwise invisible constraints and opportunities appear. We can move in ever closer, or back up to where the object of our interest shows its whole face, which is not necessarily visible from just any scale. Otherwise we are like a person with her eyes closed trying to perceive an elephant, first by the leg, then the trunk, then a side, then the tail... A sliding-scale method allows us to see the whole elephant simply by opening our eyes. There are points of view we have yet to imagine.
Allen and Hoekstra introduce such a sliding-scale method, pointing out the significance of frequency and the importance of measuring it properly. Frequency, as a key variable in the relationship between high- and low-order phenomena, explains "why complex systems require several levels of organization for their adequate description." "Levels of organization are ordered by the frequency of the return time for the critical behavior of the entity in question. Higher levels have a longer return time, that is, they behave at lower frequency." This relationship makes nature appear hierarchical and regular "if what behaves at a lower frequency is defined as occupying an upper level." "For our purposes," they explain, "frequency and constraints are the most important criteria for orderly levels. Upper levels constrain lower levels by behaving at a lower frequency..." and "even by refusing to act..." like the "impregnable stupidity" that constrains elegant ideas. They get to the heart of the matter when they note that "[A]ny system that does not involve such behavior would be hard to know."
"Constraints...allow systems to be predictable." Which is to say that lower-frequency, higher-level phenomena, like rivers that always come back, are the natural constraints on higher-frequency, lower-level phenomena, including much of what human activity enacts. Therefore, "the name of the game in science is finding those helpful constraints that allow important predictions."
5. Conclusion:
As P. Greig-Smith points out, "It is evident...that care and experience are necessary before results of value can be obtained...and...at its best the method (of frequency symbols) is subject to considerable error." Careful standardization of method would reduce errors introduced by subjective factors, and thereby reduce the difficulties of comparison between scales. And this appears to be the formidable task that Allen and Hoekstra have taken as their own.
As noted early on, I chose this question because the answer to it, as far as I could see, seemed to get at the limits of the method of objectification. However, this project has given me a new appreciation of its reaches as well. The extent to which we actually can measure the world is truly amazing and awe-inspiring. But we have long realized the limits of what we can measure from the outside looking in. Now comes the time when we realize that even the most concrete of subject matters is governed by the most abstract of patterns, and with this comes a whole host of new questions. Our outside-looking-in way of observing the world must ultimately take account of the dynamics involved in the subjective component of our perception, as well as of those forces which cannot possibly be objectified, despite our efforts, precisely because we are inside of them looking out. From here, we cannot see the perimeters of the forces which contain us, but we can feel and understand their influence by paying attention to the low-frequency but higher-order experiences which are the signs of the times. Scale is a magnificent phenomenological tool, and a method that would make our perspective more flexible could only help us. Through the new-paradigm insights which guide this nascent method, many of the objects of our knowledge that we used to consider to be "out there" are shown rather to be "in here"--or at any rate, in between here and there, as we all are. Once able to operate in multidimensional space-time, we could know much more than we do about all that has been beyond the reach of a method that treats all knowable entities as objects with no will of their own.
This is what pattern in nature represents, i.e. a will of its own. We can only benefit by understanding it better. Call it the will of god, if it pleases, but do not disregard it as beyond our ability to know, for we can indeed understand these laws of nature if we so choose. If we hope ever to begin to understand the reaches and the limits of human potential, we must. Our very survival and well-being, that for which consciousness itself has evolved, might well depend upon our grasp of these channels and constraints.
6. References:
7. Appendices: