-
Logging and Log Interpretation - Interpretation of the Induction-Electrical Log in Fresh Mud - By J. L. Dumanoir, Maurice Martin
The induction-electrical log is a combination of a SP curve, a short normal curve, and an induction log; all three logs being recorded simultaneously in a single run. The spacing of the short normal is 16 in. The induction log is designated as 5FF40, which means that the sonde involves five coils and a focusing system, and that the distance between the main coils is 40 in. Examples of induction-electrical logs are shown in Figs. 13 and 15. The right-hand track contains the induction conductivity curve. The scale of this curve is linear with the units of conductivity, i.e., mhos/m or millimhos/m (mmhos/m), increasing from right to left. The same curve can be read in the reverse direction as a resistivity log, with a hyperbolic scale. In the middle track, the solid curve represents the short normal. (An amplified short normal is also shown in Fig. 15.) The dotted curve is the resistivity curve of the induction log, i.e., the reciprocal of the conductivity curve, recorded with the same linear scale as the short normal. The SP is in the left-hand track. In this combination, the SP curve and the 16-in. normal are the essential correlation curves, as in the conventional electrical log. The induction log provides a more accurate and detailed record of formation boundaries than the conventional log, especially in hard rock territories. Furthermore, the induction-electrical log makes possible the determination of fluid content, particularly in thin beds—at least qualitatively and very often quantitatively. In such thin beds, because of the adverse effect of the adjacent formations, the responses of the conventional devices are generally doubtful and/or incomplete. Another advantage of the new combination is that the induction log is little affected by the presence of thin hard streaks interbedded within the permeable sections, whereas such occurrences are the cause of serious distortions of the long normal and lateral curves. The evaluation of saturation from any resistivity log implies the knowledge of the true resistivity Rt of the bed and of the resistivity Rxo of the flushed zone, or at least of the ratio Rxo/Rt. With the induction-electrical log run in fresh mud, the 16-in. normal has a smaller radius of investigation than the induction log, so that the former reads closer to Rxo and the latter closer to Rt. Thus, it has been observed that the ratio of the 16-in. normal to the induction log readings can be used as a basis for the interpretation in terms of saturation, with the help of appropriate charts. The purpose of the present paper is essentially to explain this procedure and to provide the corresponding charts. On the other hand, when a formation is deeply invaded, the response of any log (in particular the induction log) depends upon the radial distribution of formation resistivities behind the wall of the hole, which itself is a function of the arrangement of the fluids saturating the formation pore spaces. This paper, accordingly, will begin with a schematic description of the distribution of fluids and resistivities in the invaded zone. Next, some essential characteristics of the induction log will be reviewed. Finally, the determination of the interpretation charts and their application will be explained. The discussion will be confined to the case of fresh muds, i.e., muds whose resistivity is at least five times as great as that of the formation waters. With the present equipment the induction log is not well adapted for logging in salty mud.
The Laterolog and guard electrode log are at their best under conditions of low resistivity mud. Accordingly, when these tools were introduced a few years ago, the use of salty mud was recommended. Now, with the introduction of induction logging, spe-
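A minimal sketch of the arithmetic behind this kind of interpretation, not the authors' charts: the induction conductivity reading is converted to resistivity by taking its reciprocal, and the short-normal/induction ratio is compared with Rmf/Rw. The Archie-style ratio relation and exponent below, as well as the numbers, are assumptions for illustration only.

def induction_resistivity(conductivity_mmho_per_m):
    """Reciprocal of the induction conductivity curve, in ohm-m."""
    return 1000.0 / conductivity_mmho_per_m

def water_saturation_ratio_method(r16, r_il, rmf, rw, exponent=0.625):
    """Approximate Sw from the short-normal/induction ratio.

    r16 is taken as ~Rxo (shallow reading), r_il as ~Rt (deep reading).
    Sw ~ [(Rxo/Rt)/(Rmf/Rw)]**(5/8) is a common textbook ratio-method form (assumed here).
    """
    return ((r16 / r_il) / (rmf / rw)) ** exponent

# Example with invented readings: 400 mmho/m induction, 6 ohm-m short normal.
r_il = induction_resistivity(400.0)   # 2.5 ohm-m, read as Rt
sw = water_saturation_ratio_method(r16=6.0, r_il=r_il, rmf=1.0, rw=0.1)
print(f"Rt ~ {r_il:.2f} ohm-m, estimated Sw ~ {sw:.2f}")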
Jan 1, 1958
-
Minerals Beneficiation - Flotation of Chalcopyrite by Xanthates and Dixanthogens Under Oxidizing Conditions - By C. R. Ramachandra, C. C. Patel
Flotation of chalcopyrite from a low grade ore was studied by using different xanthates and dixanthogens as collectors and by conditioning the flotation pulp with oxidizing gaseous systems. The improvement in the recovery of chalcopyrite with the oxidizing systems was mainly due to the auto-activation of chalcopyrite, leading to the enhanced adsorption of xanthates and dixanthogens even when the latter were formed by oxidation of the xanthates. With air or oxygen as oxidizing system, the amount of ethyl xanthate required at pH 8.5 was much less than normally employed in the industry, the recovery being 97-99% of chalcopyrite with a concentrate of 24% copper. With isopropyl and isoamyl xanthates, the recovery was improved but the grade was lowered. By conditioning the pulp with air-CO2 or oxygen-CO2 mixture, the recovery was lowered with ethyl xanthate only, mainly due to the depressant action of bicarbonate formed. Flotation with dixanthogens gave cleaner concentrates of chalcopyrite, because iron pyrite did not float with these collectors. Of the dixanthogens tried, diisopropyl dixanthogen gave the highest recovery, 99% chalcopyrite with a concentrate containing 28.5 to 30% copper, when the pulp at pH 8.5 was conditioned with air. Diisopropyl dixanthogen was found to be the best collector of all the xanthates and dixanthogens studied. It was found that the adsorbed ethyl, isopropyl and isoamyl xanthates at chalcopyrite surface, activated by cupric sulfate, and at iron pyrite surface were oxidized to the corresponding dixanthogens to different degrees in presence of air, oxygen, air-carbon dioxide mixture and oxygen-carbon dioxide mixture. The dixanthogen formed at chalcopyrite surface was, however, firmly adsorbed while that at iron pyrite surface was easily desorbed. Since dixanthogens formed have greater hydrophobicity than the corresponding xanthates, as revealed by contact angles,1,2 the former were expected to improve the flotation results. It was, therefore, thought desirable to find out the effect of air, oxygen, air-carbon dioxide mixture and oxygen-carbon dioxide mixture on the selective flotation of chalcopyrite when individual xanthates were used as collectors. The effect of direct use of dixanthogens as collectors on the selective flotation of chalcopyrite was also investigated. For these studies, low grade chalcopyrite ore, received from the Indian Copper Corp., Ltd. (I.C.C.), Ghatsila, Bihar, was employed. At the milling plant of the I.C.C., chalcopyrite is floated at a pH of 8.5, employing ethyl xanthate as collector. EXPERIMENTAL Materials Employed: XANTHATES - Ethyl, isopropyl and isoamyl xanthates of potassium were prepared in a pure state by the methods described earlier.4 A 0.03% aqueous solution of a xanthate was freshly prepared in air-free distilled water for flotation studies. DIXANTHOGENS - Diethyl, diisopropyl and diisoamyl dixanthogens were prepared by the oxidation of xanthates by iodine and purified by extraction with petroleum ether. A 0.03% alcoholic solution of dixanthogen was used as a collector. FROTHER - A 0.2% solution of pine oil in ethyl alcohol was employed. SODIUM CARBONATE - A 0.4 N solution was used. ORE - The composition of the -100 +200-mesh size (Tyler standard) powdered ore, as determined by standard methods of analysis, was: copper 1.57%, iron 10.29%, sulfur 3.15% and acid insolubles 66.10%. Iron present as chalcopyrite (CuFeS2) was 1.38%, while that present as iron pyrite (FeS2) was 1.37%.
Both chalcopyrite and iron pyrite were found to be liberated from the gangue minerals and from each other, as observed under a microscope. Flotation of Chalcopyrite with Xanthates in Presence of Different Gaseous Systems: Fifteen grams of -100 + 200-mesh size ore were shaken well with 100 ml of distilled water. The pH of the pulp was adjusted to 8.5 by the addition of sodium carbonate solution. Then 0.1 ml of xanthate solution (0.005 lb xanthate per ton of the ore) was mixed with the pulp, and the latter together with washings transferred to a
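Recovery and grade figures like "97-99% recovery at 24% Cu" follow from a routine metallurgical balance. The sketch below uses the standard two-product recovery formula; the concentrate and tailing assays are invented numbers, not the authors' data.

def recovery_two_product(feed_assay, conc_assay, tail_assay):
    """Fraction of the metal reporting to the concentrate.

    R = c(f - t) / (f(c - t)), the standard two-product formula.
    """
    f, c, t = feed_assay, conc_assay, tail_assay
    return c * (f - t) / (f * (c - t))

# Example: 1.57% Cu feed (as analyzed in the paper), with an assumed 24% Cu
# concentrate and an assumed 0.03% Cu tailing.
r = recovery_two_product(feed_assay=1.57, conc_assay=24.0, tail_assay=0.03)
print(f"Cu recovery ~ {100 * r:.1f} %")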
Jan 1, 1963
-
Part I – January 1967 - Papers - Interface Compositions, Motion, and Lattice Transformations in Multiphase Diffusion Couples - By J. W. Spretnak, D. A. Chatfield, G. W. Powell, J. R. Eifert
In most cases, the driving force for a lattice transformation is produced by supercooling below the equilibrium transformation temperature. The interface reaction in isothermally annealed, multiphase diffusion couples may involve a lattice transformation which also requires a driving force. Direct experimental evidence has been obtained for the existence of the driving force in the form of a supersaturated phase at the α(fcc)-β(bcc) interface in Cu:Cu-12.5 wt pct Al couples; the supersaturation is equivalent to an excess free energy of approximately 3 cal per mol at 905°C. A tentative interpretation of the dynamic situation at the interface based on the free energy-composition diagram is proposed. THE presently accepted theory of diffusion in multiphase couples1 states that there will be a phase layer in the diffusion zone for every region which has three degrees of freedom and which is crossed by the diffusion path in the equilibrium phase diagram. For binary systems, this restriction excludes all but single-phase fields and, for ternary systems, only one- and two-phase fields are included. In addition, Rhines2 as well as other investigators3-6 have reported that the compositions of the various phases adjacent to the interfaces are, for all practical purposes, the compositions given by the intersections of the diffusion path with the solubility limits of the single-phase fields of the equilibrium phase diagram. Some studies of the rate of thickening of these intermediate diffusion layers indicate that the thickness of the layer changes parabolically with time, or: x - x0 = k√t [1] where x is the position of the interface relative to an origin x0, t is the diffusion time, and k is a temperature-dependent factor. Crank7 shows mathematically that, if the compositions at an interface are independent of time and the motion of the interface is controlled by the diffusion of the elements to and from the interface, then the segments of the concentration penetration curve for a semi-infinite step-function couple will be described by an equation of the form: c = A + B erf[x/(2√(Dt))] [2] Hence, Eq. [1] follows from Eq. [2] if the interface compositions are fixed and if the motion of the interface is diffusion-controlled. Although the concept of local equilibrium being attained at interfaces has assumed a prominent role in the theory of diffusion in multiphase couples, experimental evidence and theoretical discussions which challenge the general validity of this concept have been reported in the literature. Darken8 has stated that strict obedience to the conditions set by the equilibrium phase diagram cannot be expected in any system in which diffusion is occurring because diffusion takes place only in the presence of an activity gradient. Darken also noted that it is usually assumed that equilibrium is attained locally at the interface although the system as a whole is not at equilibrium, the implication being that the transformation at the interface is rapid in comparison with the rate of supply of the elements by diffusion. Kirkaldy3 indicates agreement with Darken in that he believes the concept of local equilibrium is at best an approximation because the motion of the phase boundary requires that there be a free-energy difference and, hence, a departure from the equilibrium composition at the interface. Seebold and Birks9 have stated that diffusion couples cannot be in true equilibrium, but the results obtained are often in good agreement with the phase diagram.
The initial deviation from equilibrium in a diffusion couple will be quite large because alloys of significantly different compositions are usually joined together. Kirkaldy feels that the transition time for the attainment of constant interface compositions (essentially the equilibrium values) will be small, although in some cases finite. Castleman and Siegle10 observed such transition times in multiphase Al-Ni couples, but at low annealing temperatures these times were quite long. Similarly, departures persisting for more than 20 hr have been reported11 at phase interfaces in Au-Ni and Fe-Mo diffusion couples. Braun and Powell's12 measurements of the solubility limits of the intermediate phases in the Au-In system as determined by microprobe analysis of diffusion couples do not agree with the limits reported by Hiscocks and Hume-Rothery13 who used equilibrated samples. Finally, Borovskii and Marchukova14 have stated that the determination of the solubility limits of phase diagrams using high-resolution micro-analyzer measurements at the interfaces of multiphase couples is not an accurate technique because of deviations from the equilibrium compositions at a moving interface; diffusion couples may be used to map out the phase boundaries in the equilibrium diagram, but the final determination of the solubility limits should be made with equilibrated samples. The purpose of this work was to investigate the conditions prevailing at an interface in a multiphase diffusion couple and to compare the interface compositions with those associated with true thermodynamic equilibrium between the two phases. Microanalyzer techniques were used to measure interface compositions in two-phase Cu-Al diffusion couples annealed at 800°, 905°, and 1000°C for various times.
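A short sketch of how the temperature-dependent factor k of Eq. [1] can be estimated from layer-position measurements at several annealing times. The data values and units below are invented for illustration; they are not from the paper.

import math

times_h = [1.0, 4.0, 9.0, 16.0]           # annealing times, hours (assumed)
positions_um = [12.0, 21.5, 31.0, 40.5]   # interface position x, micrometres (assumed)
x0 = 2.0                                  # origin x0, micrometres (assumed)

s = [math.sqrt(t) for t in times_h]
y = [x - x0 for x in positions_um]

# Least-squares slope through the origin of (x - x0) versus sqrt(t):
# k = sum(s*y) / sum(s*s), i.e. the parabolic rate constant of Eq. [1].
k = sum(si * yi for si, yi in zip(s, y)) / sum(si * si for si in s)
print(f"k ~ {k:.2f} um per sqrt(hour)")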
Jan 1, 1969
-
Geophysics - Geophysical Case History of a Commercial Gravel Deposit - By Rollyn P. Jacobson
THE town of Pacific, in Jefferson County, Mo., is 127 miles west of St. Louis. Since the area lies entirely on the flood plain of a cutoff meander of the Meramec River, it was considered a likely environment for accumulation of commercial quantities of sand and gravel. Excellent transportation facilities are afforded by two major railways to St. Louis, and ample water supply for washing and separation is assured by the proximity of the river. As a large washing and separation plant was planned, the property was evaluated in detail to justify the high initial expenditure. An intensive testing program using both geophysical and drilling methods was designed and carried out. The prospect was surveyed topographically and a 200-ft grid staked on which electrical resistivity depth profiles were observed at 130 points. The Wenner 4-electrode configuration and earth resistivity apparatus were used. In all but a few cases, the electrode spacing, A, was increased in increments of 1½ ft to a spread of 30 ft and in increments of 3 ft thereafter. Initial drilling was done with a rig designated as the California Earth Boring Machine, which uses a bucket-shaped bit and produces a hole 3 ft in diam. Because of excessive water conditions and lack of consolidation in the gravel there was considerable loss of hole with this type of equipment. A standard churn drill was employed, therefore, to penetrate to bedrock. Eighteen bucket-drill holes and eight churn-drill holes were drilled at widely scattered locations on the grid. The depth to bedrock and the configuration will not be discussed, as this parameter is not the primary concern. Thickness of overburden overlying the gravel beds or lenses became the important economic criterion of the prospect. The wide variety and gradational character of the geologic conditions prevailing in this area are illustrated by sample sections on Fig. 2. Depth profiles at stations E-3 and J-7 are very similar in shape and numerical range, but as shown by drilling, they are measures of very different geologic sequences. At J-7 the gravel is overlain by 15 ft of overburden, but at E-3 bedrock is overlain by about 5 ft of soil and mantle. Stations L-8 and H-18 are representative of areas where gravel lies within 10 ft of surface. In most profiles of this type it was very difficult to locate the resistivity breaks denoting the overburden-gravel interface. In a number of cases, as shown by stations M-4 and H-18, the anomaly produced by the water table or the moisture line often obscured the anomaly due to gravel or was mistaken for it. In any case, the precise determination of depth to gravel was prevented by the gradual transition from sand to sandy gravel to gravel. In spite of these difficulties, errors involved in the interpretation were not greatly out of order. However, results indicated that the prospect was very nearly marginal from an economic point of view, and to justify expenditures for plant facilities a more precise evaluation was undertaken. The most favorable sections of the property were tested with hand augers. The original grid was followed. In all, 46 hand auger holes were drilled to gravel or refusal and the results made available to the writer for further analysis and interpretation. When the data for this survey were studied, it immediately became apparent that a very definite correlation existed between the numerical value of the apparent resistivity at some constant depth and the thickness of the overburden.
Such a correlation is seldom regarded in interpretation in more than a very qualitative way, except in the various theoretical methods developed by Hummel, Tagg (Ref. 1, pp. 136-139), Roman (Ref. 2, pp. 6-12), Rosenzweig (Ref. 3, pp. 408-417), and Wilcox (Ref. 4, pp. 36-46). Various statistical procedures were used to place this relationship on a quantitative basis. The large amount of drilling information available made such an approach feasible. The thickness of overburden was plotted against the apparent resistivity at a constant depth less than the depth of bedrock for the 65 stations where drilling information was available. A curve of best fit was drawn through these points and the equation of the curve determined. For this relationship the curve was found to be of the form ρ = bD, where ρ is the apparent resistivity, D the thickness of overburden, and b a constant. The equation is of the power type and plots as a straight line on log-log paper. The statistical validity of this equation was analyzed by computation of a parameter called Pearson's correlation coefficient for several different depths of measurements (see Ref. 5, pp. 196-241). In all but those measurements taken at relatively shallow depths, the correlation as given by this general equation was found to have a high order of validity on the basis of statistical theory.
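The statistical step described here amounts to fitting a power-law relation between apparent resistivity and overburden thickness on log-log axes and testing it with Pearson's correlation coefficient. The sketch below fits the general form ρ = bD^m and reports r; the station data are invented, not the survey's.

import math

overburden_ft = [4.0, 6.0, 9.0, 12.0, 15.0, 20.0]        # D, thickness of overburden (assumed)
apparent_res = [310.0, 240.0, 160.0, 120.0, 95.0, 70.0]  # rho at a fixed electrode spacing (assumed)

lx = [math.log10(d) for d in overburden_ft]
ly = [math.log10(r) for r in apparent_res]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n

sxy = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
sxx = sum((x - mx) ** 2 for x in lx)
syy = sum((y - my) ** 2 for y in ly)

m = sxy / sxx                    # slope of the straight line on log-log paper
b = 10 ** (my - m * mx)          # coefficient b of rho = b * D**m
r = sxy / math.sqrt(sxx * syy)   # Pearson's correlation coefficient

print(f"rho ~ {b:.1f} * D**{m:.2f}, Pearson r = {r:.3f}")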
Jan 1, 1956
-
Low-Level Radioactive Waste Disposal Techniques - By E. Douglas Sethness
The uranium industry is booming. In Texas alone, there are about 22 different companies with active exploration programs. Twelve solution mines have been permitted; three surface mines have been authorized; and two mills are currently in operation. However, the industry also has a problem, and that is the disposal of radioactive wastes. Over the past several years, stories concerning nuclear wastes have appeared frequently in the news. One of the most frequently cited cases occurred in Grand Junction, Colorado. In 1966, after ten years of investigations, the U. S. Public Health Service (PHS) discovered that tailings from a uranium mill were being used as fill material and aggregate for local construction purposes. It was estimated that between 150,000 and 200,000 tons of material had been removed and used under streets, driveways, swimming pools, and sewer lines. In addition, tailings had been used under concrete slabs and around foundations of occupiable structures. Further studies prompted the Surgeon General to warn that the risk of leukemia and lung cancer could be doubled at the measured radiation levels. More recently, the L. B. Foster Company discovered that its building site in Washington, West Virginia, was radioactive. While a foundation was being dug, the ground erupted and a ball of fire 30 feet high shot out. Evidently, the dirt was laced with radioactive thorium and zirconium, a potentially explosive mixture contained in a Nigerian sand which had been used by the previous site owners in the manufacture of nuclear fuel rods. Just this month we have read about legal suits to stop exploration for a nuclear waste disposal site in Randall County, Texas. The U. S. Department of Energy is trying to locate a deep underground nuclear waste depository for final burial of over 76 million gallons of high-level wastes. The problem is acute; the wastes are accumulating at a rate of about 300,000 gallons per year. Nor do these numbers include the spent fuel elements from nuclear power plants that are in temporary storage facilities. Fortunately, public awareness of these and other related issues is high. Unfortunately, the differences in the waste products from the nuclear fuel cycle are not always apparent to the general public. There are two distinct types of radioactive wastes: "high-level", which consist of spent fuel or wastes from the reprocessing of spent fuel; and "low-level", which, in general, are by-product wastes. There are numerous non-technical definitions that can be applied to help the layman differentiate between high-level and low-level wastes. For this latter purpose, it is best to think of them in terms of what we can see and feel. In general, high-level wastes are physically hot and can cause acute radiation sickness in a short period of time. Low-level wastes are not hot, but may cause chronic health effects after long exposure. The wastes which we are concerned with in the uranium mining and milling industry are low-level wastes. As recently as ten years ago, there were very few controls or regulations governing tailings disposal methods. At the same time, mine reclamation was not enforced through either state or Federal laws and the long-term viability of abandoned tailings ponds was not assured. The regulatory climate has changed significantly in the last decade, however. The low-level radioactive wastes generated by uranium mining and milling are generally contained in a tailings pond.
Approximately 85-97% of the total radioactivity contained in uranium ore is present in the mill waste that goes to such tailings ponds. The isotope Radium-226 is probably the most potentially harmful radioactive parameter in the ponds. Radium emits gamma radiation and is also an alpha particle emitter. Because gamma radiation is very penetrating, it presents a potential health problem when a source is located external to the body. Gamma radiation will go through the body, causing damage to each cell encountered on the way. Although alpha particles have very little penetration capability, they can cause extensive cell damage. For this reason, alpha particles are a problem after inhalation or ingestion. Radium creates a health hazard by both of these mechanisms. Radium decays to radon gas which can be inhaled and serve as an alpha particle emitter. Additionally, radium is very soluble and readily enters the natural hydrologic cycle if allowed to leach from a tailings pond. With a half-life of 1620 years, radium has plenty of time to be taken into the food chain and end up in our bodies, emitting alpha particles. Because the potential health problems are better understood today than ten years ago, and because the Nuclear Regulatory Commission (NRC) has developed increasingly stringent government regulations, the uranium mining industry applies a high level of technology to the disposal of nuclear wastes. In most cases, low-level radioactive wastes are disposed of at or near the site where they are produced. There are six commercial burial grounds for low-level wastes, but it would not be economical to ship all mine or milling wastes to these sites. The on-site disposal methods most often used are ponding
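For context on the 1620-year half-life quoted for Ra-226, the standard first-order decay arithmetic is shown below; this is an illustration only, with the time values chosen arbitrarily.

import math

HALF_LIFE_RA226_YEARS = 1620.0  # value quoted in the text

def fraction_remaining(t_years, half_life=HALF_LIFE_RA226_YEARS):
    """N/N0 = exp(-ln(2) * t / T_half) for first-order radioactive decay."""
    return math.exp(-math.log(2.0) * t_years / half_life)

for t in (100, 500, 1620, 5000):
    print(f"after {t:>5d} yr: {100 * fraction_remaining(t):6.1f} % of the Ra-226 remains")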
Jan 1, 1979
-
Producing - Equipment, Methods and Materials - The Effect of Liquid Viscosity in Two-Phase Vertical Flow - By K. E. Brown, A. R. Hagedorn
Continuous, two-phase flow tests have been conducted during which four liquids of widely differing viscosities were produced by means of air-lift through 1¼-in. tubing in a 1,500-ft. experimental well. The purpose of these tests was to determine the effect of liquid viscosity on two-phase flowing pressure gradients. The experimental test well was equipped with two gas-lift valves and four Maihak electronic pressure transmitters as well as instruments to accurately measure the liquid production, air injection rate, temperatures, and surface pressures. The tests were conducted for liquid flow rates ranging from 30 to 1,680 B/D at gas-liquid ratios from 0 to 3,270 scf/bbl. From these data, accurate pressure-depth traverses have been constructed for a wide range of test conditions. As a result of these tests, it is concluded that viscous effects are negligible for liquid viscosities less than 12 cp, but must be taken into account when the liquid viscosity is greater than this value. A correlation based on the method proposed by Poettmann and Carpenter and extended by Fancher and Brown has been developed for 1¼-in. tubing, which accounts for the effects of liquid viscosity where these effects are important. INTRODUCTION Numerous attempts have been made to determine the effect of viscosity in two-phase vertical flow. Previous attempts have all utilized laboratory experimental models of relatively short length. One of the initial investigators of viscous effects was Uren1 with later work being done by Moore et al.2,3 and more recently by Ros.4 However, the present investigation represents the first attempt to study the influence of liquid viscosity on the pressure gradients occurring in two-phase vertical flow through a 1¼-in., 1,500 ft vertical tube. The approach of some authors has been to assume that all vertical two-phase flow occurs in a highly turbulent manner with the result that viscous effects are negligible. This has been a logical approach since most practical oil-well flow problems have liquid flow rates and gas-liquid ratios of such magnitudes that both phases will be in turbulent flow. It has also been noted, however, that in cases where this assumption has been made, serious discrepancies occur when the resulting correlation is applied to low production wells or wells producing very viscous crudes. Both conditions suggest that perhaps viscous effects may be the cause of these discrepancies. In the first case, the increased energy losses may be due to increased slippage between the gas and liquid phases as the liquid viscosity increases. This is contrary to what one might expect from Stokes' law of friction, but the same observations were made by Ros4 who attributed this behavior to the velocity distribution in the liquid as affected by the presence of the pipe wall. In the second case, the increased energy losses may be due to increased friction within the liquid itself as a result of the higher viscosities. The problem of determining the liquid viscosity at which viscous effects become significant is a difficult one. Ros4 has indicated that liquid viscosity has no noticeable effect on the pressure gradient so long as it remains less than 6 cSt. Our tests have shown that viscous effects are practically negligible for liquid viscosities less than approximately 12 cp. Actually there is no single viscosity at which these effects become important.
These effects are not only a function of the viscosities of the liquids and of the gas but are also a function of the velocities of the two phases. The velocities in turn are a function of the in situ gas-liquid ratio and liquid flow rate. Furthermore, the role of fluid viscosities in either slippage or friction losses will depend on the mechanism of flow of the gas and liquid, i.e., whether the flow is annular, as a mist, or as bubbles of gas through the liquid. These mechanisms are also a function of the in situ gas-liquid ratios and the flow rates. It would thus seem that the best one could hope for is to determine a transition region wherein the viscous effects may become significant for gas-liquid ratios and liquid production rates normally encountered in the field. The viscous effects might then be neglected for liquid viscosities less than those in the transition region but would have to be taken into account when higher viscosities are encountered. There are numerous instances where crude oils of high viscosity must be produced. The purpose of this study has been to evaluate the effects of liquid viscosities on two-phase vertical flow by producing four liquids of widely differing viscosities through a 1¼-in. tube by means of air-lift. The approach used in this study was as follows:
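To make the idea of a pressure-depth traverse concrete, here is a minimal homogeneous-flow sketch in SI units: a no-slip mixture density plus a friction term, integrated stepwise down the tubing. This is not the authors' Poettmann-Carpenter/Fancher-Brown correlation; the friction factor, fluid properties, and rates are assumptions chosen only to show the stepwise calculation.

G = 9.81          # m/s^2
D = 0.0318        # tubing inside diameter, m (~1 1/4 in., assumed)
AREA = 3.14159 * D * D / 4.0

def mixture_gradient(p, q_liq, q_gas_sc, rho_liq, f=0.02, t_gas=300.0):
    """dp/dz (Pa/m): hydrostatic plus friction for a no-slip (homogeneous) mixture."""
    rho_gas = p * 0.029 / (8.314 * t_gas)   # ideal-gas density of air at pressure p
    q_gas = q_gas_sc * 101325.0 / p         # in-situ gas rate, isothermal ideal gas
    q_total = q_liq + q_gas
    rho_m = (rho_liq * q_liq + rho_gas * q_gas) / q_total   # no-slip mixture density
    v_m = q_total / AREA
    return rho_m * G + f * rho_m * v_m * v_m / (2.0 * D)

# Integrate from the wellhead down ~457 m (about 1,500 ft) in 1-m steps.
p = 7.0e5                        # wellhead pressure, Pa (assumed)
q_liq = 200 / 86400 / 6.29       # 200 bbl/day of liquid in m^3/s (assumed)
q_gas_sc = q_liq * 100 * 0.178   # ~100 scf/bbl of gas, expressed in m^3/s at standard conditions
for _ in range(457):
    p += mixture_gradient(p, q_liq, q_gas_sc, rho_liq=900.0) * 1.0
print(f"estimated bottom-hole pressure ~ {p / 1e5:.1f} bar")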
Jan 1, 1965
-
Iron and Steel Division - Structure and Transport in Lime-Silica-Alumina Melts (TN) - By John Henderson
FOR some time now the most commonly accepted description of liquid silicate structure has been the "discrete ion" theory, proposed originally by Bockris and Lowe.1 This theory is that when certain metal oxides and silica are melted together, the continuous three dimensional silica lattice is broken down into large anionic groups, such as sheets, chains, and rings, to form a liquid containing these complex anions and simple cations. Each composition is characterized by "an equilibrium mixture of two or more of the discrete ions",1 and increasing metal oxide content causes a decrease in ion size. The implication is, and this implication has received tacit approval from subsequent workers, that these anions are rigid structures and that once formed they are quite stable. The discrete ion theory has been found to fit the results of the great majority of structural studies, but in a few areas it is not entirely satisfactory. For example it does not explain clearly the effect of temperature on melt structure,3 nor does it allow for free oxygen ions over wide composition ranges, the occurrence of which has been postulated to explain sulfur4 and water5 solubility in liquid silicates. In lime-silica-alumina melts the discrete ion theory is even less satisfactory, and in particular the apparent difference in the mechanism of transport of calcium in electrical conduction8 and self-diffusion, and the mechanism of the self-diffusion of oxygen8 are very difficult to explain on this basis. By looking at melt structure in a slightly different way, however, a model emerges that does not pose these problems. It has been suggested5 that at each composition in a liquid silicate there is a distribution of anion sizes; thus the dominant anionic species might be Si3O9, but as well as these anions the melt may contain somewhat larger anionic groups. Decreasing silica content and increasing temperature are said9 to reduce the size of the dominant species. Taking this concept further, it is now suggested that these complexes are not the rigid, stable entities originally envisaged, but rather that they exist on a time-average basis. In this way large groups are continually decaying to smaller groups and small groups reforming to larger groups. The most complete transport data8-10 available are for a melt containing 40 wt pct CaO, 40 wt pct SiO2, and 20 wt pct Al2O3. Recalculating this composition in terms of ion fractions and bearing in mind the relative sizes of the constituent ions, Table I, it seems reasonable to regard this liquid as almost close-packed oxygens, containing the other ions interstitially, in which regions of local order exist. On this basis, all oxygen positions are equivalent and, since an oxygen is always adjacent to other oxygens, its diffusion occurs by successive small movements, in a cooperative manner, in accord with modern liquid theories. Silicon diffusion is much less favorable, firstly because there are fewer positions into which it can move and secondly, because it has the rather rigid restriction that it always tends to be co-ordinated with four oxygens. Silicon self-diffusion is therefore probably best regarded as being effected by the decay and reformation of anionic groups or, in other words, by the redistribution of regions of local order. Calcium self-diffusion should occur more readily than silicon, because its co-ordination requirements are not as stringent, but not as readily as oxygen, because there are fewer positions into which it can move.
There is the further restriction that electrical neutrality must be maintained, hence calcium diffusion should be regarded as the process providing for electrical neutrality in the redistribution of regions of local order. That is, silicon and calcium self-diffusion occur, basically, by the same process. Aluminum self-diffusivity should be somewhere between calcium and silicon because, for reasons discussed elsewhere, part of the aluminum is equivalent to calcium and part equivalent to silicon. Consider now self-diffusion as a rate process. The simplest equation is: D = D0 exp(-E/RT) [1] This equation can be restated in much more explicit forms but neither the accuracy of the available data, nor the present state of knowledge of rate theory as applied to liquids, justifies any degree of sophistication. Nevertheless the terms of Eq. [1] do have significance;12 D0 is related, however loose this relationship may be, to the frequency with which reacting species are in favorable positions to diffuse, and E is an indication of the energy barrier that must be overcome to allow diffusion to proceed. For the 40 wt pct CaO, 40 wt pct SiO2, 20 wt pct Al2O3 melt, the apparent activation energies for self-diffusion of calcium, silicon, and aluminum are not significantly different from 70 kcal per mole of diffusate, in agreement with the postulate that these elements diffuse by the same process. For oxygen self-diffusion E is about 85 kcal per mole, again in agreement with the idea that oxygen is transported,
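Eq. [1] is easy to evaluate with the activation energies quoted in the note (about 70 kcal per mole for Ca, Si, and Al and about 85 kcal per mole for O). The D0 values and the temperature in the sketch below are placeholders, since the note does not give them; the output is therefore in units of whatever D0 is.

import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def diffusivity(d0, e_kcal, temp_k):
    """Arrhenius self-diffusivity from Eq. [1], D = D0 * exp(-E/(R*T))."""
    return d0 * math.exp(-e_kcal / (R * temp_k))

T = 1773.0  # K, roughly 1500 C (assumed)
for species, d0, e in [("Ca", 1.0, 70.0), ("Si", 1.0, 70.0), ("Al", 1.0, 70.0), ("O", 1.0, 85.0)]:
    print(f"{species}: D ~ {diffusivity(d0, e, T):.2e} (in units of D0)")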
Jan 1, 1963
-
Part XII – December 1968 – Papers - Determination of the Absolute Short-Term Current Efficiency of an Aluminum Electrolytic Cell - By E. R. Russell, N. E. Richards
The current efficiency of aluminum cells was derived from the metal produced over a period of time and the theoretical faradaic yield. The difference in the actual amount of aluminum in the cathode at the beginning and end of the period must be determined. The weight of aluminum in the cathode was calculated from the dilution of an added quantity of impurity metal. Use of multiple indicator metals, copper, manganese, and titanium, demonstrated that the weight of aluminum in cells can be determined to within 1 pct with routine but careful chemical analyses. Over intervals of the order of 30 days, current efficiencies reliable to within 1 pct can be obtained. INVESTIGATIONS beginning with those of Pearson and Waddington,1 through the most recent published work of Georgievskii,9-11 illustrate the direct relationship between the composition of the anode gas and the applicability of analysis of anode gases to the control of alumina reduction cells. McMinn12 noted the lack of an independent method for measuring cell production efficiency over the short term. There is no doubt that changes in the current efficiency are immediately reflected in the composition of anode gases. However, the accuracy of faradaic yields calculated from gas analyses depends upon the degree of interaction between primary anode gas and carbon.6 A conventional industrial practice of obtaining long-term current efficiency for production units from mass balances and quantity of electricity is generally insensitive to the impact of planned control of any one or more of the influential reduction cell parameters such as temperature, alumina concentration, and mean interelectrode distance. Consequently, there is a real need in the aluminum industry for a procedure to obtain the absolute cell current efficiency over a short term—10 to 30 days—both for the calibration of values obtained from gas analysis6 and for evaluating the effect of controlling specific parameters in the reduction process. The amount of aluminum produced may be determined by following the dilution of an impurity metal in the cathode pool. Analyses over a period will show a decreasing concentration of the impurity due to the accumulation of aluminum solvent. The increase in aluminum inferred from analyses is the amount produced by the cell during the period. Combining the weights of aluminum in the cathode at the beginning and end of a specific period, the weight of aluminum tapped, and the quantity of electricity passed during the interval will yield the current efficiency. Smart,13 Lange,14 Rempel,15 Beletskii and Mashovets,16 and Winkhaus17 have used dilution techniques to determine aluminum inventory in alumina reduction cells. A technique for determining the weight of aluminum in production cells by addition of small amounts of copper to the aluminum cathode was described by Smart.13 The precision in values of the aluminum reservoir through dilution of copper in the cathode ranged from about 1 to 3 pct depending upon the quantity of copper added in the range 0.2 to 0.01 wt pct, respectively. Because the method appears so direct and apparently simple, one would not anticipate difficulties in application to industrial cells. The objective of this study was to resolve the problems associated with the trace metal dilution technique for determining the amount of aluminum in a cell.
The approach in evaluating trace metal dilution as a basic factor in determining the weight of aluminum in the cell reservoir, and the absolute current efficiency of the Hall-Heroult cell, was to dilute more than one trace metal in the aluminum cathode so that we could discriminate among complications arising from physical mixing, the possibility of separation of intermetallic compounds, loss of the added elements, and chemical detection. EXPERIMENTAL METHODS These experiments are not complex but require standardized procedures. The technique involves addition of the trace metals to the cathode, knowing when these metals are homogeneously distributed in the liquid cathode, timing of the sampling, employing accurate and precise analytical methods, using reliable procedures for monitoring the amount of electricity passed through the cell, and accurate weighing of aluminum removed from the cell during the particular period. More accurate results might be obtained if the increment in concentration of the added indicator metals were of the order of 0.1 to 0.2 wt pct. The method must be applicable to production units and, hence, the contamination of the aluminum minimized. For this reason, the concentration of trace metals in the cathode was kept below 0.07 wt pct and generally at 0.04 wt pct level. Trace quantities of copper, manganese, titanium, and silicon are already present in virgin aluminum and are suitable as additives from electrochemical and analytical points of view. Concentration of silicon is quite dependent upon the characteristics of the raw materials and was not used extensively in this work. Chemical Analyses. All instrumental analyses require calibration against an absolute technique such as a gravimetric, volumetric, or spectrophotometric method which represents the ultimate in sensitivity, precision, and accuracy. On review, the best methods for copper appeared to be optical absorption without
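The bookkeeping behind the method can be sketched as follows: the cathode aluminum inventory at any time is inferred from the rise in indicator concentration produced by a known addition, and the current efficiency then compares (aluminum tapped plus inventory change) with the faradaic yield. All numbers below are invented for a hypothetical 100 kA cell over 30 days, not measurements from the paper.

F = 96485.0   # C/mol
M_AL = 26.98  # g/mol
Z = 3         # electrons per aluminum atom

def inventory_from_addition(added_kg, c_before_wtpct, c_after_wtpct):
    """Cathode aluminum weight from dilution of an added indicator metal:
    W = m_added / (C_after - C_before), concentrations converted to weight fractions."""
    return added_kg / ((c_after_wtpct - c_before_wtpct) / 100.0)

def faradaic_aluminum_kg(current_a, hours):
    """Theoretical aluminum production from Faraday's law."""
    coulombs = current_a * hours * 3600.0
    return coulombs * M_AL / (Z * F) / 1000.0

w_start = inventory_from_addition(2.4, 0.020, 0.060)  # inventory at start of period, kg
w_end = inventory_from_addition(2.5, 0.018, 0.058)    # inventory at end of period, kg
tapped_kg = 21500.0                                   # aluminum tapped during the period (assumed)
produced = tapped_kg + (w_end - w_start)
theoretical = faradaic_aluminum_kg(100000.0, 30 * 24)
print(f"current efficiency ~ {100 * produced / theoretical:.1f} %")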
Jan 1, 1969
-
Part IX – September 1969 – Papers - Separation of Tantalum and Columbium by Liquid-Liquid Extraction - By Willard L. Hunter
Four solvent extraction systems were studied to determine their efficiency for extraction and separation of tantalum and columbium. Aqueous feed solutions of varying HF-HCl concentrations and metal content were contacted with equal volumes of cyclohexanone, 3-methyl-2-butanone, and 2-pentanone, and solutions of varying HF-H2SO4 concentrations were contacted with equal volumes of 2-pentanone. One multistage continuous test was made in a polyethylene pulse column using cyclohexanone as the organic phase. In each system studied, columbium and tantalum purities in excess of 95 pct with respect to each other were obtained in single-stage tests at low acidities in the feed solution. Separation factors ranging from 1700 to 2400 were obtained when using HF-HCl mixtures in the aqueous phase. Best results were obtained when a solution of HF-H2SO4 was used as the aqueous phase and 2-pentanone as the organic phase. A separation factor in excess of 6000 was obtained in one stage with aqueous solution concentrations of 2 N HF and 2 N H2SO4. When acid concentrations were increased to 5 N HF and 10 N H2SO4, 99.9 pct of the tantalum and 98.2 pct of the columbium initially present in the feed solution were transferred to the organic phase. The separation of columbium and tantalum obtainable by means of the solvent extraction systems presented in this paper was found to compare favorably with other systems, including the HF-H2SO4-methyl isobutyl ketone system currently used by most producers for the extraction and separation of these metals. TANTALUM and columbium are always found together in minerals of commercial significance, although the proportion of the two metals in ores varies within broad limits. Columbium is estimated to be 13 times more abundant than tantalum. Five methods generally employed for the separation of these metals are: 1) fractional crystallization (the Marignac process),2 2) solvent separation, 3) fractional distillation of their chlorides, 4) ion exchange, and 5) selective reduction. Of these methods, the one currently used by industry to the greatest extent is that of solvent separation. One of the early technical developments in solvent separation of tantalum from columbium was reported by the Bureau of Mines: the HF-HCl-methyl isobutyl ketone system; data were presented for both laboratory and pilot-plant experimentation.3 Of twenty-eight organic solvents tested for their ability to extract tantalum from an HF-HCl solution of columbium and tantalum, 3-pentanone (diethyl ketone), cyclohexanone, 2-pentanone, and 3-methyl-2-butanone were chosen for further study. Data on the HF-HCl-diethyl ketone system have been published4 and data describing the use of cyclohexanone, 2-pentanone, and 3-methyl-2-butanone as the organic phase are included in this report. RAW MATERIAL The source of tantalum and columbium oxides for this study was "Geomines" tin slag from the Manono Smelter, Cie Geomines, Belges, S.A., Congo. In order to extract the valuable Ta-Cb content, the slags were carbided, chlorinated, and the sublimate from chlorination was hydrolyzed and washed free of chloride with water. The washed material was air-dried and stored in a stoppered container. Throughout the paper, "feed material" refers to this mixture of hydrated oxides which was employed because of its high solubility in aqueous solutions. Typical analysis of the hydrated oxides is shown in Table I. I) HF-HCl-CYCLOHEXANONE SYSTEM Batch Separation. Effect of Acid Concentration.
To determine the effect of varying the acid concentration upon the transfer of tantalum and columbium, a series of tests was made in which approximately 2.5 g of feed material was added to 25 ml solutions of 2, 4, 6, 8, and 10 N HF and 0 through 5 N HCl. Tantalum pentoxide concentration of the solutions was approximately 21 g per liter and columbium pentoxide was 14 g per liter. These starting solutions were shaken with equal volumes of cyclohexanone in 100 ml polyethylene bottles for 30 min. The phases were carefully separated in 125 ml glass separatory funnels. The time of contact of the solutions with the separatory funnels was kept at a minimum to reduce silica contamination. The measured phases were separated into 400 ml polyethylene beakers and the metal contents of each were precipitated by addition of an excess of ammonium hydroxide. Precipitate from each phase was filtered on ashless filter paper, ignited at 800° to 1000°C for 45 min, weighed, and analyzed by X-ray fluorescence.5 Data tabulated in Table II and illustrated in Fig. 1 show that maximum separation of tantalum from columbium for each HF concentration was obtained with no HCl present. The purest tantalum product was obtained with some HCl present. The highest separation factor was obtained at 2 N HF and
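A separation factor like the "in excess of 6000" quoted above comes from the usual distribution-coefficient arithmetic. The sketch below assumes equal phase volumes, so distribution coefficients reduce to ratios of extracted fractions; the fractions used are invented and only illustrate the calculation.

def distribution_coefficient(fraction_in_organic, fraction_in_aqueous):
    """D = (metal in organic phase)/(metal in aqueous phase) for equal phase volumes."""
    return fraction_in_organic / fraction_in_aqueous

def separation_factor(d_ta, d_cb):
    """beta = D_Ta / D_Cb."""
    return d_ta / d_cb

# Example: assume 99.9 pct of the tantalum and 14 pct of the columbium
# report to the organic phase in a single equal-volume contact.
d_ta = distribution_coefficient(0.999, 0.001)
d_cb = distribution_coefficient(0.14, 0.86)
print(f"D_Ta = {d_ta:.0f}, D_Cb = {d_cb:.3f}, beta = {separation_factor(d_ta, d_cb):.0f}")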
Jan 1, 1970
-
Geochemistry - Applied Geochemistry in Exploration for Selected Mineral Occurrences in the Philippines - By W. E. Hale, G. J. S. Govett
An orientation survey was conducted over a known disseminated copper deposit and a Au-Cu vein deposit and employed geological, geophysical, and geochemical methods. Geochemical techniques proved the most effective and economical means of locating and defining areas underlain by these types of mineralization. Six additional, widely separated occurrences were also investigated and mineralization was confirmed by determination of cold-extractable copper in stream sediments. The length of anomalous drainage varies from several to 16 km and depends on the type and tenor of mineralization. Despite generally marked relief and extreme physical and chemical weathering, cold-extractable copper anomalies in soils appear to reflect closely variations in the ore mineral content of immediately underlying rocks. Threshold values vary with local geological conditions and type and grade of mineralization. Significant thresholds in stream sediments are about 65 ppm and 10 ppm for cold-extractable copper in disseminated and vein copper occurrences respectively. Similar differences are encountered in soils. This necessitates the use of a scale of threshold values at progressive stages in each exploration program. Disseminated copper occurrences are presently the major source of mineral production in the Philippines. Much potentially productive ground remains to be investigated and applied geochemistry appears to offer the best available approach. The Institute of Applied Geology (I.A.G.), a joint project sponsored by the University of the Philippines, the Philippine Bureau of Mines and the United Nations Development Program (Special Fund), undertook during 1965-66 a number of field projects in Northern Luzon Island (Fig. 1). These field operations were of a two-fold nature and involved both orientation and exploration. During approximately six weeks, an orientation survey, involving geological mapping, geophysical surveying and geochemical prospecting, was conducted over a partly developed, disseminated copper deposit at Santo Nino (Fig. 1). This study demonstrated the economic and technical feasibility of geochemical prospecting, relative to the other methods tested, for the location and delineation of a typical disseminated copper occurrence characteristic of those that represent the major source of metal in the Philippines.1 Geochemical techniques, similar to those tested at Santo Nino, were then applied in a second orientation study of a vein occurrence at Suyoc (Fig. 1). The results of these two orientation studies suggest that, in reconnaissance work in this region, stream sediment analyses, combined with the determination of pH in the same streams, can be used to indicate areas underlain by rock containing anomalous concentrations of copper, whether it occurs in the large tonnage, low grade, disseminated deposits or in the comparatively small tonnage, higher grade, gold vein occurrences. Furthermore, the orientation surveys demonstrate that soil sampling and analysis serve effectively to outline more closely the mineralized zones within the generally anomalous areas identified from the stream reconnaissance. To further test the methods selected from the two earlier trials, six additional, widely separated prospects were investigated. The geological and geochem-
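Applying the quoted stream-sediment thresholds (about 65 ppm cold-extractable copper for disseminated targets and 10 ppm for vein targets) is a simple classification step; the sketch below shows it with invented sample values, purely as an illustration of how a scale of thresholds would be used.

THRESHOLD_PPM = {"disseminated": 65.0, "vein": 10.0}

def classify(cx_cu_ppm, deposit_type):
    """Flag a stream-sediment sample as anomalous or background for a given target type."""
    return "anomalous" if cx_cu_ppm >= THRESHOLD_PPM[deposit_type] else "background"

samples = [("S-1", 120.0), ("S-2", 40.0), ("S-3", 8.0)]  # invented cold-extractable Cu results, ppm
for name, cx_cu in samples:
    print(name, cx_cu, "ppm:",
          "disseminated ->", classify(cx_cu, "disseminated"),
          "| vein ->", classify(cx_cu, "vein"))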
Jan 1, 1970
-
Technical Papers - Mining Practice - Laying Panel Track at the Morenci Open Pit (Mining Tech., July 1947, TP 2189) - By Walter C. Lawson
The primary objective in laying track in panel sections is to reduce the number of track laborers required. This is possible because the work is mechanized. Moreover, because the work is mechanized and each of the operations needs only a few men, and each is self-contained, the work can be carried on at night as well as in daylight; therefore, the method provides a means of preparing tracks on more than one shift. The laying of tracks in open pits and quarries in panel sections is not new, but new methods have been made possible by the introduction of new types of equipment. The purpose of this paper is to describe the methods that are followed in the Morenci open pit. General Mining Operations Full-scale ore production at Morenci is about 50,000 tons daily. The normal yearly output of ore and waste is 32,000,000 tons, of which 30,000,000 tons is handled by rail haulage. This requires the building of approximately 47 miles of loading and other temporary tracks during a 12 months' period (Fig 1). Broken ore and waste is loaded into 90-ton capacity dump cars with 5-cu-yd full-revolving electric shovels. Shovel banks are uniformly 50 ft high. They are blasted with churn drill holes and the broken material from a blast is loaded out with two shovel cuts, the first one being called (locally) the "splatter" cut and the second one the "clean-up" or "face-up" cut. When a shovel is making its clean-up cut, the loading track is about 70 ft from the toe of the bank and, under normal conditions, a bank is blasted against it without removing the track. It is thus evident that each track is used for two shovel cuts before relocation is necessary, inasmuch as a single position serves for both the clean-up cut prior to blasting and for the splatter cut following the blast. The normal advance of a solid bench by a single blast is 40 ft (Fig 2). Preparation of Panel Grade In preparation of panel grades bulldozers and rooters are used for the bulk of the work and an auto-patrol road grader for the final stage. At Morenci, bulldozers have always been used for track-grade preparation, but before the introduction of the other two types of equipment grades were uneven and irregular at best, and always necessitated a big gang of track laborers to hand-block under the ties with rocks after a track was laid into place. High blocking also made poor track because of instability and caused frequent derailments. Under these conditions derailments were generally bad, as re-railing became difficult. Much track was torn up and many ties broken in the process. The inadequacy of the hand-blocked track, together with the inefficiency of hand blocking, constantly pointed to the desirability of reducing the amount of labor required, so various means were tried to improve conditions, such as wood
Jan 1, 1949
-
Geology and Non-Metallics - Aerial Photography as an Aid In Geological Studies - By Gerard Matthes
Only in recent years has any practical headway been made in the application of aerial photography to geological problems, and up to the present time its principal value to the geologist and mining engineer has been in "areal geology," i.e., the study of formations as revealed at the surface. The object of this paper is to set forth briefly the principal advantages that may be derived from aerial photographic data in geological work and to serve in a measure as a guide to those who may be contemplating its use. Geophysical prospecting has for its object the locating of minerals under the surface. Inasmuch as the geologist's chief complaint in life is a dual one—inability to see under the surface and inability to visualize comprehensively what he does see on top of the ground—geophysical prospecting coupled with aerial photography should do much to lessen his burdens. To what extent aerial photography can be made a useful adjunct to geophysical prospecting remains to be proved, but the inference drawn here is that it holds many alluring possibilities. To avoid disappointments, it should be stated at the outset that to the geologist or mining engineer an aerial photograph or mosaic is only a tool, the efficient use of which he must set out to master. Upon his skill and ingenuity in applying it to particular problems will depend largely the value of the results that he will obtain. Under the most favorable conditions an aerial mosaic, or even a few loose photographs, because of the exceptionally comprehensive overview which they afford of field data, such as dip and strike observations at isolated outcrops, local indications of faulting, mineral and fossil finds, greatly facilitate study and correlation, save much tedious foot work, and may repay their cost many times. Even so, types of country may be found where the most skillful use of the best of pictures will produce results so meager as to render the cost of aerial photography entirely unwarranted. Between these two extremes will be found endless variations and possibilities. Hence no set rules can be promulgated. The safest procedure is to treat each case as a separate problem and to carefully weigh the relative advantages and shortcomings of aerial photography before embarking on its use.
Jan 1, 1928
-
Coal - Recent Coal Geology Research - By Aureal T. Cross
THIS paper is a review of the published literature on research in coal geology, principally exclusive of resource studies, which appeared or became available during 1950 and the latter part of 1949. This report is not to be construed as being complete. The papers referred to in the bibliography are those, among many more, which were read either in full or in abstract. Undoubtedly other papers were published which either escaped the author's notice or were not available to him. Those which were seen in abstract only (about one fourth of those listed) were not available in time for the inclusion of more than a notice. An outline of all papers listed in the bibliography has been arranged by subjects and reasonable subdivisions, with some papers cited under more than one subject. Most papers are indexed according to the principal subject of discussion or research, or only as to an unusual or noteworthy section of the entire report. There will likely be some disagreement as to the quality or merit of some of the papers selected, and the specialist may be supercritical of the outline or organization of papers in his field. It may be that attention has occasionally been drawn to papers reporting old information or conclusions of questionable value. Conferences and Meetings One of the best indications of the growing interest in coal geology problems in the United States is the increasing number of times this field has been the focus of attention at conferences and meetings. Notable among these are the joint meeting of the Society of Economic Geologists and the Geological Society of America at El Paso, November 1949, at which the principal thesis was concerned with low rank carbonaceous fuel deposits, especially of western United States. Among the papers given which are already available were those presented by Barghoorn, Parry, Roe, and Parks. At the annual meeting of the Botanical Society of America in New York, December 1949, a joint meeting of the Paleobotanical and Microbiological Sections was held for which a symposium on Microbiology in Relation to the Geologic Accumulation of Organic Complexes was organized. Publication of the six papers presented by Ralph G. H. Siu, Elso S. Barghoorn, Irving Breger, Claude E. ZoBell, James M. Schopf, and A. C. Thaysen is anticipated. At the regular meetings of the Paleobotanical Section at the same time, several other papers of interest reported on coal ball studies, partial coalification of petrified wood, and floras. In Chicago, April 1950, a symposium on Applied Paleobotany was held by the Society of Economic Paleontologists and Mineralogists in conjunction with the American Association of Petroleum Geologists. The five papers presented at this meeting dealt with the use of Paleozoic plant microfossils for stratigraphic work, J. M. Schopf; Devonian-Mississippian fossils of the black shales, Aureal T. Cross; Mesozoic plants of stratigraphic value, Th. Just; plant microfossils of the Tertiary, L. R. Wilson; and studies of the Brandon lignite, Elso S. Barghoorn. Early publication of these in the Journal of Paleontology is expected. The Nova Scotia Research Foundation and the Nova Scotia Dept. of Mines sponsored an excellent 3-day conference in June 1950, which dealt with several aspects of coal geology. Papers on coal classification, P. A. Hacquebard; structure and sedimentation problems in Nova Scotia, T. B. Haites; new techniques of thermal analysis, W. L.
Whitehead; geochemical investigations of Nova Scotia coals, Irving Breger; and the role of fossil plant spores in coal correlation and the stratigraphy of the coal-bearing strata of the Appalachian Region, Aureal T. Cross, were given. Some discussions of these papers by those in attendance were recorded, and the entire proceedings are being prepared for publication. In September 1950, an unusual 3-day field conference was held by the Ohio and West Virginia Geological Surveys under the sponsorship of the Coal Geology Committee. This study of the stratigraphy, sedimentation, and nomenclature of the Upper Pennsylvanian and Permian coal-bearing strata of southeastern Ohio, southwestern Pennsylvania, and northern West Virginia was augmented by two discussions on associated rocks (clays and shales) and stratigraphic nomenclature at Wheeling and Morgantown, West Va. An extensive guidebook was prepared, and transcriptions of the Morgantown meeting were made. As a follow-up of the September field conference, a round-table discussion was held on this general topic at a special open meeting of the Coal Research Committee in conjunction with the November meeting of the Geological Society in Washington. Short prepared statements to invite discussion were given on each of several topics by L. M. Cline, Carl O.
Jan 1, 1953
-
Reservoir Engineering Equipment - Transient Pressure Distributions in Fluid Displacement ProgramsBy O. C. Baptist
The Umiat oil field is in Naval Petroleum Reserve No. 4 between the Brooks Range and the Arctic Ocean in far-northern Alaska. The Umiat anticline has been tested by 11 wells, six of which produced oil; however, the productive capacity and recoverable reserves of the field are subject to considerable speculation because of unusual reservoir conditions and because several wells appear to have been seriously damaged during drilling and completion. Oil is produced at depths of 275 to 1,100 ft; the depth to the bottom of the permanently frozen zone varies from about 800 to 1,100 ft, so that most of the oil reserves are in the permafrost. Reservoir pressures are estimated to range from 50 to 350 psi, increasing with depth, and the small amount of gas dissolved in the oil is the major source of energy for production. Laboratory tests were made on cores under simulated permafrost conditions to estimate oil recoverable by solution-gas expansion from low saturation pressures. The cores were also tested for clay content and for susceptibility to productivity impairment by swelling clays and increased water content if exposed to fresh water. The results indicate that oil can be produced from reservoir rocks in the permafrost and that substantial amounts of oil can be produced from depletion-drive reservoirs by a pressure drop of as little as 100 psi below the saturation pressure. Freezing of formation water reduces oil productivity much more than the increased oil viscosity does. Failure of wells drilled with water-base mud to produce is attributed to freezing of water in the area immediately surrounding the wellbore. Swelling clays apparently contributed very little to the plugging of the wells. INTRODUCTION Naval Petroleum Reserve No. 4 lies between the Brooks Range and the Arctic Ocean in northern Alaska. The Umiat oil field is located in the southeastern part of the Reserve and is about 180 miles southeast of Point Barrow (the only permanent settlement in the Reserve and the primary supply point for drilling of the wells at Umiat). Eleven wells were drilled for the U. S. Department of the Navy, Office of Naval Petroleum and Oil Shale Reserves, between 1944 and 1953 to test the oil and gas possibilities of the Umiat anticline. Six of these wells produced oil in varying quantities, and the best one pumped about 400 B/D. Estimates of recoverable oil range from 30 to 100 million bbl. The main oil-producing zones are two marine sandstone beds in the Grandstand formation of Cretaceous age; these are referred to as the upper and lower sands. Good oil shows were found throughout the sand sections in the first three wells drilled on the structure, but the highest rate of oil production obtained on any of the many tests was about 24 BOPD. These first wells were drilled with conventional rotary methods using water-base mud; later wells were drilled either with cable tools using brine or with rotary tools using oil or oil-base mud. These experiments were successful, as is shown by comparing the oil production from Well No. 2 with that from No. 5. These two wells are only 200 ft apart and are located at about the same elevation on the structure. Well No. 2, drilled with a rotary rig using water-base mud, was abandoned as a dry hole after all formation tests were negative. Well No. 5, drilled with cable tools and reamed with a rotary using oil, pumped 400 BOPD, which was the maximum capacity of the pump and less than the capacity of the well.
These field results indicated that the producing sands were extremely "water sensitive," and it was assumed that the cause of this sensitivity was the presence of swelling clays in the sands. Because of the very unusual reservoir conditions and the difficulties encountered in completing oil wells in the permafrost, the Navy asked the U. S. Bureau of Mines to make laboratory studies under simulated permafrost conditions to assist in estimating the production potential of the field and the recoverable reserves. These tests were designed to determine the cause of the plugging of wells in the permafrost and to test oil recovery from frozen sand by solution-gas expansion with the oil gas-saturated at very low pressures. EXPERIMENTAL METHODS AND PROCEDURES Samples Analyzed Core samples were analyzed that represent the lower sand in Umiat Well No. 7, the upper sand in No. 3, and both the upper and lower sands in No. 9. These sands should be productive in all of the wells because of their location on the structure. Core samples from
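The recovery mechanism discussed above is solution-gas expansion under depletion drive. As a rough illustration of the material-balance arithmetic behind such an estimate, the sketch below applies the standard Schilthuis material balance (no gas cap, no water influx, rock and water compressibility neglected); the PVT values used are hypothetical placeholders chosen for a low-pressure, low-GOR oil and are not data from this paper.

def solution_gas_drive_recovery(Boi, Bo, Rsi, Rs, Bg, Rp):
    # Np/N from the Schilthuis material balance with no gas cap, no water
    # influx, and rock/water compressibility neglected.
    # Boi, Bo : oil formation volume factors at and below the bubble point (RB/STB)
    # Rsi, Rs : solution gas-oil ratios (scf/STB); Rp : cumulative producing GOR
    # Bg      : gas formation volume factor (RB/scf)
    Bti = Boi                      # two-phase FVF at the bubble point
    Bt = Bo + (Rsi - Rs) * Bg      # two-phase FVF after the pressure drop
    return (Bt - Bti) / (Bt + (Rp - Rsi) * Bg)

# Hypothetical PVT values, illustrative only:
frac = solution_gas_drive_recovery(Boi=1.05, Bo=1.04, Rsi=80.0, Rs=70.0,
                                   Bg=0.015, Rp=110.0)
print("Recovery, fraction of oil in place: %.3f" % frac)   # about 0.09

With these placeholder values a modest depletion below the saturation pressure already yields a recovery of several percent of the oil in place, which is the qualitative point the laboratory tests were designed to check under frozen-core conditions.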
-
Iron and Steel Division - Investigation of Bessemer Converter Smoke ControlBy A. R. Orban, R. B. Engdahl, J. D. Hummell
The initial phase of a research program on smoke abatement from Bessemer converters is described. In work sponsored by the American Iron and Steel Institute, a 300-lb experimental Bessemer converter was assembled to simulate blowing conditions in a commercial vessel. Measurements of smoke and dust were also made in the field on a 30-ton commercial vessel. During normal blows the dust loading from the laboratory converter averaged 0.51 lb per 1000 lb of exhaust gas. This was similar to the exhaust-gas loading of a commercial vessel. The addition of hydrogen to the blast gas of the laboratory converter caused a decided decrease in smoke density. Smoke was also reduced markedly when methane or ammonia was added instead of hydrogen. The research is continuing with a bench-scale investigation of the mechanism of smoke formation in the converter process. DURING the past 2 years, on behalf of the American Iron and Steel Institute, Battelle has been conducting a research program on the control of emissions from pneumatic steelmaking processes. The objective of the research program is to discover a practical method for reducing to an unobjectionable level the emission of smoke and dust from Bessemer converters. PRELIMINARY INVESTIGATION Although conceivably some new collecting technique may be devised which would be economically practicable for cleaning Bessemer gases, no such system based on presently known principles seems feasible because of the extremely large volume of high-temperature gases involved. Hence, the research is being directed toward prevention of smoke formation at the source. A thorough review was first made of former work to determine the present status of the cleaning of converter gases. No published work was found on work done in the United States on collecting smoke or on preventing its formation in the bottom-blown, acid-Bessemer converter. In Europe, however, a number of investigations have been made on the basic-Bessemer converter. Kosmider, Neuhaus, and Kratzenstein1 conducted tests on a 20-ton converter to obtain characteristic data for dust removal and the utilization of waste heat. They concluded that because of the submicron size of the dust, special equipment would be necessary to clean the exhaust gases. Dehne2 conducted a large number of smoke-abatement experiments at Duisburg-Huckingen in a 36-ton Thomas converter discharging into a stack. A number of wet-scrubbing and dry collectors were tried unsuccessfully. A waste-heat boiler and electrostatic collector with the necessary gas precleaners was felt to be the best solution for this particular plant. Meldau and Laufhutte3 determined that the particles in the waste gas of a bottom-blown converter were all below 1 µ in size. Sel'kin and Zadalya4 describe the use of oxygen-water mixtures injected into a molten bath in refining open-hearth steel. They claim that with the use of oxygen-water mixtures the amount of dust formed was reduced to between 33.3 and 20 pct of its previous level, and emission of brown smoke almost ceased. Pepperhoff and Passov5 attempted unsuccessfully to find some correlation between the optical absorption of the smoke, the flame emission, and the composition of the metal in a Thomas converter in order to determine automatically the metallurgical state of the melt. In a recent U. S. Patent (No. 2,831,762) issued to two Austrian inventors, Kemmetmuller and Rinesch, the inventors claim a process for treating the exhaust gases from a converter.
By their method the inventors claim that the exhaust gases from the converter are cooled immediately after leaving the converter to such a degree that oxidation of the metal vapors and metal particles to form Fe2O3 is inhibited in the presence of surplus oxygen. Gledhill, Carnall, and Sargent7 report on cleaning the gases from oxygen lancing of pig iron in the ladle. They claim the Pease-Anthony Venturi scrubber removed 99.5+ pct of the smoke, thereby reducing the concentration to 0.1 to 0.2 grain per cu ft, which resulted in a colorless stack gas after the evaporation of water. Fischer and Wahlster8 developed a small basic converter and compared the metallurgical behavior of the blow with that of a large converter. Later work by Kosmider, Neuhaus, and Hardt9 on the use of steam for reduction of smoke from an oxygen-enriched converter confirmed that the cooling effect of steam is detrimental to production. From a review of all of the published information on the subject, it was concluded that a practical solution to the smoke-elimination problem had not been found. Accordingly, it was deemed desirable to investigate the feasibility of preventing the initial formation of smoke in the converter.
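The dust loadings above are quoted on two different bases: pounds of dust per 1,000 lb of exhaust gas for the converter tests and grains per cubic foot for the scrubber work. A small conversion sketch follows; the exhaust-gas density used is an assumed, roughly air-like value at standard conditions and is not taken from the paper.

GRAINS_PER_LB = 7000.0

def dust_loading_grains_per_cuft(lb_dust_per_1000lb_gas, gas_density_lb_per_cuft):
    # Convert a mass-basis dust loading to grains of dust per cubic foot of gas.
    lb_dust_per_lb_gas = lb_dust_per_1000lb_gas / 1000.0
    return lb_dust_per_lb_gas * gas_density_lb_per_cuft * GRAINS_PER_LB

# 0.51 lb per 1000 lb of exhaust gas (laboratory converter, from the abstract),
# with an assumed gas density of 0.075 lb per cu ft (about that of air at
# standard conditions; the hot converter gas itself would be far less dense).
print(round(dust_loading_grains_per_cuft(0.51, 0.075), 2))   # ~0.27 grain per cu ft

On this assumed basis the laboratory loading of 0.51 lb per 1,000 lb of gas corresponds to roughly 0.27 grain per cu ft of gas at standard conditions, which is of the same order as the 0.1 to 0.2 grain per cu ft that Gledhill, Carnall, and Sargent report downstream of the Venturi scrubber.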
Jan 1, 1961
-
Emergence Of By-Product CokingBy C. S. Finney, John Mitchell
The decline of the beehive coking industry was inevitable, but it had filled the needs and economy of its day. A beehive plant required neither large capital investment to construct nor an elaborate and expensive organization to run. The ovens were built near mines from which large quantities of easily won coking coal of excellent quality could be taken, and handling and preparation costs were thus at a minimum. The beehive process undoubtedly produced fine metallurgical coke, and low yields were considered to be the price that had to be paid for a superior product. Few could have foreseen that the time would come when lack of satisfactory coking coal would force most of the beehive plants in the Connellsville district, for example, to stay idle; and if there were those like Belden who cried out against the enormous waste which was leading to exhaustion of the country's best coking coals, there were many more to whom conservation was almost the negation of what has since become popularly known as the spirit of free enterprise. As for the recovery of such by-products as tar, light oil, and ammonia compounds, throughout much of the beehive era there was little economic incentive to move away from a tried and trusted carbonization method simply to produce materials for which no great market existed anyway. With the twentieth century came changes that were to bring an end to the predominance of beehive coking. Large new steel-producing corporations were formed whose operations were integrated to include not only the making and marketing of iron or steel but also the mining of coal and ore from their own properties, the quarrying of their own limestone and dolomite, and the production of coke at or near their blast furnaces. As the steel industry expanded, the geographic center of production moved westward. By 1893 it had moved from east-central to western Pennsylvania, and by 1923 it was located to the north and center of Ohio. This western movement led, of course, to the utilization of the poorer quality coking coals of Illinois, Indiana, and Ohio. These coals could not be carbonized to produce an acceptable metallurgical coke in the beehive oven, but could be so treated in the by-product oven. By World War I the technological and economic limitations of the beehive oven as a coke producer were being widely recognized. After the war the number of beehive ovens in existence dropped steadily to a low of 10,816 in 1938, in which year the industry produced only some 800,000 tons of coke out of a total US production of 32.5 million tons. The demands of the second World War led to the rehabilitation of many ovens which had not been used for years, and in 1941, for the first time since 1929, beehive ovens produced more than 10 pct of the country's total coke output. Production fell off again after 1945, but the war in Korea made it necessary once more to utilize all available carbonizing capacity, so that by 1951 there were 20,458 ovens with an annual coke capacity of 13.9 million tons in existence. Since that time the iron and steel industry has expanded and modernized its by-product coking facilities, and by the end of 1958 only 64 pct of the 8682 beehive ovens still left were capable of being operated. Because beehive ovens are cheap and easy to build and can be closed down and started up with no great damage to brickwork or refractory, it is likely that they will always have a place, albeit a minor one, in the coking industry.
The future role of the beehive oven would seem to be precisely that predicted forty years ago by R. S. McBride of the US Geological Survey. Writing with considerable prescience, McBride declared: "A by-product coke-oven plant requires an elaborate organization and a large investment per unit of coke produced per day. Operators of such plants cannot afford to close them down and start them up with every minor change in market conditions. It is not altogether a question whether beehive coke or by-product coke can be produced at a lower price at any particular time. Often by-product coke will be produced and sold at less than cost simply in order to maintain an organization and give some measure of financial return upon the large investment, which would otherwise
Jan 1, 1961
-
Institute of Metals Division - The Oxidation of Hastelloy Alloy XBy S. T. Wlodek
The surface and subscale oxidation reactions were followed by means of continuous weight-gain and metallographic techniques over the range 1600° to 2200°F (871° to 1204°C) for up to 400 hr. Full identification of all scale and subscale reaction products was obtained by electron and X-ray diffraction. At or below 1800°F (982°C) a linear rate of reaction (QL = 46.0 kcal per mole) governed the oxidation process, extending for up to 100 hr at 1600°F (871°C). During linear oxidation the surface scale consisted of an amorphous SiO2 film overgrown with Cr2O3 and NiCr2O4. This initial linear process was followed, and above 1800°F completely replaced, by two successive parabolic rate laws (Qp = 60 and 57 kcal per mole). This parabolic reaction involved the formation of a complex scale consisting of Cr2O3 and smaller amounts of NiCr2O4. Parabolic oxidation appeared to coincide with the disruption of the silica film present during linear oxidation and was followed by subscale (internal) oxidation producing cristobalite and NiCr2O4. The balance between the subscale and surface oxidation reactions controls the oxidation of this commercial alloy. The amorphous silica film appears to be responsible for the linear rate, and diffusion through Cr2O3 is the more likely rate-limiting step during parabolic oxidation. THE oxidation of a multicomponent composition is a complex phenomenon not presently amenable to a rigorous classical interpretation. Nevertheless, even a qualitative understanding of the scaling and subscale reactions that occur in a commercial composition can illuminate the reactions that limit its high-temperature stability in an oxidizing environment. This study of the oxidation of Hastelloy Alloy X is the first of a series of studies undertaken with the above approach in mind. Hastelloy X exhibits one of the best combinations of strength and oxidation resistance available in a wrought, solution-strengthened, nickel-base alloy. Although during long-time exposure some precipitation of M6C and M23C6 carbides as well as a complex Laves phase occurs, the amounts are probably small enough to have no appreciable effect on the chemistry of the matrix. Radavich has identified the oxidation products on Hastelloy X oxidized for 5 min to 10 hr at 1115°F as NiO and the NiCr2O4 spinel. Oxidation for 5 to 15 min at 1500°F produced a scale of spinel, NiO, and a rhombohedral phase, probably Cr2O3. Sannier et al.2 have reported continuous weight-gain data for Hastelloy X at 1650° and 2010°F and internal-oxidation measurements after 150 hr at 2010°F. In addition, much of the data on binary Ni-Cr alloys recently reviewed by Kubaschewski and Hopkins3 and by Ignatov and Shamgunova4, as well as studies of binary Ni-Mo alloys5, are also pertinent to the oxidation of this composition. EXPERIMENTAL Continuous weight-gain measurements and metallographic measurements of subscale reactions were the main experimental techniques used in this study. X-ray and electron diffraction, backed up by a limited amount of electron-microprobe analysis, served to characterize the nature of the scale- and subscale-reaction products. Two heats of commercial sheet of the composition given in Table I, identified as A and B, were used in the bulk of this study. Internal-oxidation measurements were made on a third heat of material in the form of a 0.5-in.-diam bar. In order to assure homogeneity, all heats were reannealed 4 hr at 2175°F prior to sample preparation. Weight-Gain Measurement. All specimens (1.5 by 0.4 by 0.03 in.)
were abraded through 600 paper, electropolished, and lightly etched in an alcohol-10 pct HCl solution. An electrolyte of 150 cu cm H2O, 500 cu cm H3PO4 (85 pct conc), and 3 g CrO3 at a current density of 0.9 amp per sq cm, or a solution of 10 pct HaW4 in alcohol used at 4 v and 0.3 amp per sq cm, was used for electropolishing. The resultant surface exhibited a finish of 3 ± 1 µ rms. Continuous weight-gain tests were made at 1600°, 1700°, 1800°, 1900°, 2000°, and 2200°F on Auer-type balances capable of recording a total weight change of 110 mg with an accuracy of ±0.1 mg. All tests were made in air dried to a dew point of -70°F and metered into the 2-in.-diam reaction
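The kinetics summarized in this abstract follow two familiar forms: a linear law, weight gain proportional to time, and a parabolic law, weight gain squared proportional to time, with each rate constant obeying an Arrhenius relation k = A exp(-Q/RT). The sketch below only illustrates those functional forms using the activation energies quoted above; the pre-exponential factors and the linear-to-parabolic transition time are hypothetical placeholders, not values from the paper.

import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)

def arrhenius(A, Q_kcal_per_mol, T_kelvin):
    # Rate constant k = A * exp(-Q / RT).
    return A * math.exp(-Q_kcal_per_mol / (R_KCAL * T_kelvin))

def weight_gain(t_hr, T_kelvin, t_switch_hr=100.0,
                A_lin=1.0e6, Q_lin=46.0, A_par=1.0e9, Q_par=60.0):
    # Weight gain (arbitrary units): linear kinetics up to t_switch_hr,
    # then parabolic kinetics continued from the gain already accumulated.
    # A_lin, A_par, and t_switch_hr are hypothetical; Q_lin and Q_par are
    # the activation energies quoted in the abstract.
    k_l = arrhenius(A_lin, Q_lin, T_kelvin)
    k_p = arrhenius(A_par, Q_par, T_kelvin)
    if t_hr <= t_switch_hr:
        return k_l * t_hr
    dm_switch = k_l * t_switch_hr
    return math.sqrt(dm_switch ** 2 + k_p * (t_hr - t_switch_hr))

# 1600 F is about 1144 K, where the abstract reports linear behavior
# persisting for up to about 100 hr before parabolic kinetics take over.
for t in (10, 100, 400):
    print(t, round(weight_gain(t, 1144.0), 4))

In a fit to real weight-gain records the pre-exponential factors and the transition time would of course come from the measurements themselves; only the shape of the model is taken from the abstract.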
Jan 1, 1964
-
Coal - Anchorage Performance in Rock BoltingBy D. S. Choi, R. Stefanko
There are a number of complex factors that influence the effectiveness of anchorage to maintain tension in rock bolts. However, a plastic analysis of the anchorage site employing certain simplifying assumptions with application of the Mohr-Coulomb criterion appears to explain the observed phenomena. Such an analysis has been made and a correlation sought with field and laboratory tests. Field tests were made in an anthracite mine in eastern Pennsylvania and included pull tests and long-term tests of a variety of anchorage devices in two basic lengths, 30 and 42 in., in two widely differing seams. Performance is reviewed for wedge, expansion shell, and resin anchorage. Laboratory tests duplicated many of the field conditions but in addition compared the performance of shells with normal and reversed serrations. This performance was compared with the predicted results from the plastic analysis. One of the major problems in conducting long-term underground tests is the selection of suitable instrumentation. All installed bolts were fitted with spherical and hardened washers to insure the best possible torque wrench readings. In addition, commercially available load cells were used. Finally, the performance of a specially developed strain-gage-equipped ring cell is reviewed. Rock bolting as a method of support continues to increase, with applications in many other industries in addition to mining. Nevertheless, with nearly 55,000,000 roof bolts installed in coal mines alone last year, this remains the single greatest use. While bolts have frequently supported ground where conventional timbering could not, there are relatively few design criteria, and trial-and-error procedures prevail. Furthermore, there has been a lag in the development of suitable instrumentation that is simple to install and read out, sensitive, durable, reliable, safe, and economical for evaluating the effectiveness of a bolt over long periods of time. Therefore, the pull test continues to be the most popular method of evaluating the applicability of a certain type of roof bolt under specific installation conditions. At The Pennsylvania State University in the Dept. of Mining, research has been conducted for a number of years to measure bleed-off in carefully controlled laboratory experiments as well as in underground investigations. Unfortunately, most of the instrumentation developed has been primarily suitable only for research purposes, not possessing all of the aforementioned characteristics desirable for routine underground use. Other groups also have met with restricted success. Therefore, while relatively crude, the torque wrench continues to be the most widely used load-measuring device. While both field and laboratory tests continue to be conducted, analytical studies are attempted to discover the more important design parameters in order that more efficient anchorage might be devised. Bolts are being used for a greater variety of purposes in mining. Suspending wire-sideframe belt conveyors from roof bolts is a common application. The suspension of a monorail transportation system presents yet another. One such system has just been installed in a recently reopened anthracite mine and is presently being evaluated under production conditions. Preliminary studies revealed that a considerable cost reduction was possible by suspending the monorail on bolts anchored in the top.
The monorail was to be installed under two widely differing conditions: a competent sandstone above the Buck Mountain seam and a softer shale top above the Skidmore. The type of anchorage device, length of bolt, and long-term performance, consistent with economy and safety, had to be established for the installation once the decision was made to suspend the system on rock bolts. This paper describes some of the testing procedures leading to a final selection. Theoretical Analysis of Expansion Shell Anchorage A detailed look at an expansion shell assembly (Fig. 1) might shed some light on the factors involved in the design of a suitable shell. When a bolt is rotated, the tapered plug is forced downward, expanding the leaves laterally to grip the sides of the hole. Two friction surfaces are present: (1) the interface of the plug and leaf and (2) the interface between leaf and rock. The relationships of these friction planes, the geometry of the expansion shell, and the properties of the rock are important in the design of an expansion shell. Therefore, an analysis was made assuming the rock to behave as a rigid plastic material with its yield governed by the Mohr-Coulomb criterion. Furthermore, the effect of friction between the leaf and rock produced by serrations was analyzed.
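The paper's own treatment is a rigid-plastic, Mohr-Coulomb analysis of the rock at the anchorage site. As a much simpler point of reference, the sketch below works out only the rigid-wedge friction balance for the plug-and-leaf geometry just described: an axial force on the tapered plug produces a radial grip on the rock, and leaf-rock friction converts that grip into axial holding capacity. The taper angle and friction coefficients are hypothetical values, not ones reported in the paper.

import math

def shell_anchorage_ratio(taper_deg, mu_plug_leaf, mu_leaf_rock):
    # Ratio of the axial holding force developed at the leaf-rock interface
    # to the axial force applied to the tapered plug, from a rigid-wedge
    # force balance (not the paper's Mohr-Coulomb plastic analysis).
    a = math.radians(taper_deg)
    # Radial force per unit axial force transmitted through the tapered
    # plug-leaf interface, including friction on that interface.
    radial_per_axial = (math.cos(a) - mu_plug_leaf * math.sin(a)) / \
                       (math.sin(a) + mu_plug_leaf * math.cos(a))
    return mu_leaf_rock * radial_per_axial

# Hypothetical values: 7.5-deg plug taper, mu = 0.15 at the plug-leaf
# interface, mu = 0.5 for serrated leaves bearing on rock.
print(round(shell_anchorage_ratio(7.5, 0.15, 0.5), 2))   # about 1.7

A ratio greater than one means the shell can resist more axial pull than the force that set it, which is why the leaf-rock interface and the strength of the rock itself, the part the Mohr-Coulomb analysis addresses, normally govern anchorage rather than the plug wedge.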
Jan 1, 1971
-
Institute of Metals Division - Concentration Dependence of Diffusion Coefficients in Metallic Solid SolutionBy D. E. Thomas, C. E. Birchenall
ALTHOUGH Boltzmann gave a mathematical solution for the diffusion equation (for planar diffusion in infinite or semi-infinite systems only) in 1894 allowing for variation of the diffusion coefficient with a change in concentration, it was not until 1933 that this solution was applied to an experimentally investigated metallic system. The calculation was carried out by Matano on the data obtained by Grube and Jedele3 for the Cu-Ni system. Since that time concentration dependence of the diffusion coefficient has been demonstrated for many pairs of metals. However, the nature of this dependence has never been fully elucidated. Many investigators have suspected that these variations could be related to the thermodynamic properties of the solutions, one of the earliest explicit statements being contained in a discussion of irreversible transport processes by Onsager in 1931. Development along these lines has been greatly retarded by the lack of reliable data on the variation of diffusivity with concentration, the paucity of thermodynamic data for the same systems at the same temperatures and compositions, and an incomplete understanding of the relation of the thermodynamic properties of the activated state for diffusion to the bulk thermodynamic properties. The last factor has been discussed by Fisher, Hollomon, and Turnbull.5 In many instances where data exist, it is difficult to know which are acceptable. This problem probably applies more strongly to diffusion data than to activity measurements. For instance, four sets of observers have reported self-diffusion coefficients for copper. The average spread between extreme results is a factor of about four, though the individual sets of data are self-consistent to about 20 pct. Thus one or more factors are out of control, at least in these experiments, making estimates of internal error unreliable. The most reliable diffusion data in most systems have resulted from the use of welded couples with a plane interface from which layers for analysis are machined parallel to the interface after diffusion. The layers are analyzed, and the result is a graphical relation between distance and concentration, usually called the penetration curve. Given the same set of analytical data and distances and following the same procedure in computation, different observers will generally produce diffusion coefficients which vary appreciably, especially at the extremes of the concentration range. Experiments must be carefully designed so that the precision is good enough to answer a particular question unequivocally. In the first calculation of the dependence of the diffusion coefficient on concentration in the metallic solid solution Cu-Ni, Matano found that the coefficient was insensitive to concentration from 0 to 70 pct Cu, after which it rose more and more steeply to some undetermined value as pure copper was approached. The same behavior was reported for Au-Ni, Au-Pd, and Au-Pt. The data used were those of Grube and Jedele, which were very good at the time but are not considered particularly good by present standards. Furthermore, the method of calculation makes the ends of the diffusion coefficient-concentration curve unreliable. For better reliability, the high-copper end of the curve has been checked by incremental couples, where the concentration spread is 67.7 to 100 atomic pct Cu.
The implication of the curves calculated by Matano was that diffusion is very concentration sensitive in one dilute range of this completely isomorphous system and hardly at all in the other. Matano's result is confirmed. Later Wells and Mehl10 published data on diffusion in Fe-Ni at 1300°C, which represent a thorough test of the shape of the concentration dependence curve. They ran couples with the following ranges of nickel concentration: 0-25 pct, 1.9-20.1 pct, 0-20.1 pct, 20.1-41.8 pct, 0-99.4 pct, and 79.3-99.4 pct. Although the trend of the data indicates an S-shaped concentration dependence, their curve was drawn to the pattern set by Matano. Their original data have been recalculated for the 0-99.4 and 79.3-99.4 pct couples. Wells and Mehl's points and two independent recalculations from the raw data are plotted in Fig. 1. What appears to be the best curve is drawn through them. This curve shows little sensitivity to composition in both dilute ranges, with a strong dependence at intermediate compositions. Similar experiments on the Cu-Pd system are reported here at temperatures where solubility is unlimited. These lead to the same type of concentration dependence for the diffusion coefficients as was found upon recalculation of the data for the Fe-Ni system. Experimental Procedure Cu-Pd: The concentration dependence of the diffusion coefficient may be determined by the use of
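The Boltzmann solution referred to above underlies the Matano construction: from a single penetration curve c(x) at a fixed diffusion time t, the concentration-dependent coefficient at a chosen concentration is D = -(1/2t) (dx/dc) multiplied by the integral of x dc taken from the terminal concentration up to that concentration, with x measured from the Matano interface (the plane about which the x dc areas on the two sides balance). The sketch below applies this to a synthetic constant-D error-function profile as a self-check; it illustrates the method only and does not reproduce any data in the paper.

import math
import numpy as np

def _integral_x_dc(x, c, upto):
    # Trapezoidal value of the integral of x dc from c[0] to c[upto].
    return float(np.sum(0.5 * (x[1:upto + 1] + x[:upto]) * (c[1:upto + 1] - c[:upto])))

def matano_diffusivity(x, c, t):
    # Boltzmann-Matano analysis of a penetration curve.
    # x : positions (cm), c : concentrations (monotonically increasing), t : time (s).
    # Returns concentrations and D(c) at the interior points of the profile.
    x = np.asarray(x, float)
    c = np.asarray(c, float)
    # Matano interface: x_M = (integral of x dc over the whole curve) / (c_end - c_start).
    x_matano = _integral_x_dc(x, c, len(x) - 1) / (c[-1] - c[0])
    xs = x - x_matano
    conc, diff = [], []
    for i in range(1, len(c) - 1):
        dxdc = (xs[i + 1] - xs[i - 1]) / (c[i + 1] - c[i - 1])
        area = _integral_x_dc(xs, c, i)
        diff.append(-(1.0 / (2.0 * t)) * dxdc * area)
        conc.append(c[i])
    return np.array(conc), np.array(diff)

# Self-check on a constant-D error-function profile (D = 1e-9 sq cm per sec,
# t = 100 hr); the recovered D should be close to 1e-9 at mid-composition.
D_true, t = 1.0e-9, 3.6e5
x = np.linspace(-0.05, 0.05, 201)
c = np.array([0.5 * (1.0 + math.erf(xi / (2.0 * math.sqrt(D_true * t)))) for xi in x])
conc, diff = matano_diffusivity(x, c, t)
print(diff[len(diff) // 2])

As the text notes, the numerical derivative and the near-cancellation of the integral make the computed D least reliable at the two ends of the concentration range, which is exactly why the incremental-couple checks described above are valuable.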
Jan 1, 1953
-
Institute of Metals Division - The Fine Structure and Habit Planes of Martensite in an Fe-33 Wt Pct Ni Single CrystalBy G. Krauss, W. Pitsch
The fine structure of the bcc martensite formed in an Fe-33 wt pct Ni single crystal of austenite is shown by transmission electron microscopy to consist of combinations of transformation twins, stacking faults, deformation twins, and regular arrays of parallel screw dislocations. These structures constitute evidence for the multiple lattice-invariant deformations which, operating during the formation of martensite, could produce the real habit-plane scatter measured by a two-surface analysis of the plates formed in the single crystal of this investigation and reported in the literature for other Fe-Ni martensites. CRYSTALLOGRAPHIC theories1,2 of martensitic transformation show that the habit plane of martensite in a parent lattice is dependent in part upon an inhomogeneous distortion or lattice-invariant deformation which takes place on a fine scale within a martensite plate during its formation. Several recent theoretical papers3,4 have addressed themselves to an analysis of a wide variety of conceivable lattice-invariant deformations and the habit planes which they produce, while experimental investigations have been concerned with either the measurement of habit planes or the description and identification of the martensitic fine structure which reflects the nature of the lattice-invariant deformation operating during transformation. In Fe-Ni alloys with subzero Ms temperatures, the group of alloys with which this paper concerns itself, habit planes have often been found to scatter by an amount greater than might be expected from possible experimental errors,5-7 and fine twinning has been identified as a major constituent of the fine structure of martensite.8-11 It has been suggested3,4 that more than one type of invariant shear occurs during martensitic transformation. This possibility has been experimentally supported12,13 by the observation of both dislocation configurations and twinning in a single martensite plate. The purpose of this paper is to report additional evidence for multiple lattice-invariant deformations in martensite and so to account for the real scatter in the habit planes of the martensite plates formed in Fe-Ni alloys. EXPERIMENTAL PROCEDURE The Fe-Ni single crystal was produced by pulling a high-purity iron and nickel charge through a single-crystal vacuum furnace in an alumina crucible. The crystal was double-melted to promote homogeneity and to increase its size by further additions on the second pass. In its final form the crystal was 4 cm in diam and 5 cm long. The nickel and carbon contents were analyzed at 32.9 and 0.006 wt pct, respectively. The austenite of this alloy first transformed to martensite by bursts at about -120°C, and, to preserve as much of the austenite as possible, all transformation was performed just below -120°C. Some observations were made on transformed samples which had been heated for 2 min at 340°C. It is assumed that the features of the martensite of these samples, Figs. 1 and 4, are the same as those of the as-quenched martensite. Orientation of the crystal by X-ray diffraction established [0.735 0.609 0.316] as the axis of the crystal, an orientation that was checked to within 2 deg by neutron diffraction. Further checks by electron diffraction of samples cut normal to the axis confirmed this orientation within the larger limits of error inherent in electron diffraction of thin foils. The X-ray orientation was the one used for the two-surface analysis of the martensite habit planes.
A two-surface analysis was performed on the quadrant of the single crystal which had been oriented by both X-ray and neutron-diffraction techniques. Photomicrographs at X50 were made on two surfaces along an edge 2 cm long. Fiducial marks and the fact that many of the plates were almost completely surrounded by retained austenite made good matching of individual plates on two surfaces possible. The habit-plane trace on a surface was taken as the best line parallel to the long axis of a plate. A measure of the accuracy afforded by this criterion was provided by a family of very large plates which appeared at intervals along the entire 2 cm length of the edge. The plates all had habit-plane traces within 2 deg of one another. Many of the plates did not show midribs and, therefore, the use of midribs7 to represent habit-plane traces was not feasible in this investigation. The over-all experimental accuracy is estimated to be better than ±2 deg. Samples for transmission examination in a Siemens Elmiskop I at 100 kv were prepared by cutting 2-mm-thick discs from the single crystal, removing about 0.5 mm by chemical polishing,14 trans-
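In a two-surface analysis of the kind described above, each measured trace direction lies in the habit plane, so once both traces are expressed in austenite (crystal) coordinates the plane normal is simply their normalized cross product. The sketch below illustrates this with made-up trace directions chosen to give a {3 10 15}-type result, the family often reported for Fe-Ni martensites; the numbers are not measurements from this paper.

import numpy as np

def habit_plane_normal(trace_1, trace_2):
    # Habit-plane normal from two non-parallel trace directions measured on
    # two intersecting surfaces, both already expressed in crystal (austenite)
    # coordinates. Each trace lies in the habit plane, so the cross product
    # of the two gives the plane normal.
    n = np.cross(np.asarray(trace_1, float), np.asarray(trace_2, float))
    return n / np.linalg.norm(n)

# Hypothetical trace directions (crystal coordinates); both are chosen to lie
# in a (3 10 15) plane, so the recovered normal is parallel to [3 10 15].
trace_on_surface_1 = [10.0, -3.0, 0.0]
trace_on_surface_2 = [15.0, 0.0, -3.0]
print(habit_plane_normal(trace_on_surface_1, trace_on_surface_2))
# -> approximately [0.164, 0.547, 0.821], i.e. parallel to [3 10 15]

In practice the measured traces carry the roughly 2-deg uncertainty quoted above, and converting them from specimen to crystal coordinates uses the X-ray orientation of the crystal, so the scatter of the resulting normals reflects both measurement error and any real habit-plane variation.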
Jan 1, 1965