URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://www.cmentarzwawerski.pl/WERmill/6153/calculate_horse_power_of_a_ball_mill.html
[ "#### How To Calculate The Primary Drive For A Ball Mill\n\nHorse Horse Power Calculations To Rotate A Ball The ball mill motor power requirement calculated above as 1400 hp is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution the following shows how the size or select the matching mill required to,Horse Horse Power Calculations To Rotate A Ball Mill\n\n#### Horse Power Calculations To Rotate A Ball Mill\n\nThe basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity DTPH, operating % solids or pulp density, feed size as F80 and maximum 'chunk size', product size as P80 and maximum and finally the type of circuit open/closed ...\n\n#### Gorilla Mill\n\nBall Mill Design/Power Calculation 2018/04/08 · A motor with around 1400 Horse Power is calculated needed for the designed task. Now we much select a Ball Mill that will draw this power.The ball mill motor power requirement calculated above as 1400 HP is the power that must be applied at the mill drive in order to grind the tonnage of feed ...\n\n#### SAGMILLING.COM .:. Mill Critical Speed Determination\n\nThe mill critical speed will be calculated based on the diameter (above) less twice this shell liner width. Mill Actual RPM: Enter the measured mill rotation in revolutions per minute. Result #1: This mill would need to spin at RPM to be at critical speed.\n\n#### Ball Nose Finishing Mills Speed & Feed Calculator - DAPRA\n\nSpeed And Feed Calculators Ball Mill Finish Calculator Part Spacing Calculator G And M Code Characters Standard End Mill Sizes Standard Drill Sizes Drill And Counterbore Sizes. Contact. End Mill Speed & Feed Calculator. Tool Dia. In. Radial (Side) Depth of Cut. This will adjust the feedrate if less than the tool rad. In. Num of Flutes\n\n#### How Do Calculate Moter H P Require For Ball Mill\n\nDec 12, 2016· The ball mill motor power requirement calculated above as 1400 HP is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution. The ...\n\n#### Ball Nose Milling Strategy Guide - In The Loupe\n\nGorilla Mill Baboon. Based on the original 5 flute Gorilla Mill, the \"Baboon\" for high temp alloys features geometric enhancements that make it uniquely suited for difficult-to-machine materials including: inconel, waspaloy, hastelloy, rene, stellite, 17-4 SS, 15-5 SS, 13-8 SS and titanium.\n\n#### horsepower calculations to rotate a ball mill\n\nThree types of tests are available for mill power determinations. In most cases one of two bench scale tests is adequate. First, a Jar Mill grindability test requires a 5 lb. (2 kg) sample and produces a direct measured specific energy (net Hp-hr/t) to grind from the design feed size to the required product size.\n\n#### to calculate ball mill drive hp\n\nhorse power calculations to rotate a ball mill. to calculate ball mill drive hp . ball mill performance calculation. to calculate ball mill drive hp gold ore crusher 2012910 : ball mill drive motor choices for presentation at the the time to accelerate the mill to the design speed is given by the following calcul. ball mill load calculations.\n\n#### How To Calculate Critical Speed Of A Ball Mill\n\n20191013 ball mill morrell method samac e model calculations for ball mill power draw by steve morrell horse power calculations to rotate a ball mill calculate horse power of a ball mill get a free quote technical notes 8 grinding get price using the smc test to predict read more. More Details Ball Mill Motor Hp …\n\n#### Milling Horsepower Calculator - APT Machine Tools ...\n\nCalculate Horse Power Of A Ball Mill. Where Mc is the total mass of the charge in the mill and Tf is the torque required to overcome friction. Figure 8.5 Effect of mill filling on power draft for ball mills. Get Quote. Calculating Circulating Load Crusher.\n\n#### Horse Power Calculations To Rotate A Ball Mill\n\nBall Mill Finish Calculator. The Ball Mill Finish Calculator can be used when an end mill with a full radius (a ball mill) is used on a contoured surface. The tool radius on each side of the cut will leave stock referred to as a scallop. The finish of the part will be determined by the height of the scallop, amd the scallop will be determined ...\n\n#### Ball mills -\n\nHorse Horse Power Calculations To Rotate A Ball The ball mill motor power requirement calculated above as 1400 hp is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution the following shows how the size or select the matching mill required toHorse Horse Power Calculations To Rotate A Ball Mill...We are a professional mining machinery ...\n\n#### calculate horse power of a ball mill - Wild Tapas\n\nto calculate ball mill drive hp. To calculate ball mill drive hp ball mill designpower calculation the ball mill motor power requirement calculated above as hp is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution the following shows how the size or select the matching mill required to draw this power is\n\n#### Surface Finish Calc - Villa Machine Associates, Inc.\n\nCalculate the horsepower required for a milling operation based on the feed rate and depth of cut, which are used to determine the material removal rate (or metal removal rate). Also required is the unit power, which is a material property describing the amount of power required to cut that material. The horsepower at both the spindle and the ...\n\n#### Milling Formula Calculator - Carbide Depot\n\nMilling Step-over Distance Calculator In many milling operations, the cutting tool must step over and make several adjacent cuts to complete machining a feature. As a result, a small cusp of material, called a scallop, will remain between these cuts on any surrounding walls or on the machined surface if a ball end mill is used.\n\n#### Milling Step-over Distance Calculator\n\nJun 26, 2017· Ball Nose Milling Without a Tilt Angle. Ball nose end mills are ideal for machining 3-dimensional contour shapes typically found in the mold and die industry, the manufacturing of turbine blades, and fulfilling general part radius requirements.To properly employ a ball nose end mill (with no tilt angle) and gain the optimal tool life and part finish, follow the 2-step process below (see Figure 1).\n\n#### TECHNICAL NOTES 8 GRINDING R. P. King\n\ncalculate horse power of a ball mill,,Ball Mill,horse power calculations to rotate a ball mill,ball mill motor power, Get Price motor power calculation for ball and tube mill YouTube,YouTube Oct 15, 2013,1:26 Shanghai Changlei grinding mills horse power calculations to rotate a ball mill by,1:26 famous crusher grinding .get price.\n\n#### calculation of motor kw hp of ball mill up to 1200 kw of ...\n\nBALL MILL POWER - Page 1 of 2. 12.11.2013· ball mill power . dear experts. we have a ball mill in combi mode for cement grinding . roller press before the ball mill. the supplier has given 28% grinding media filling, with shaft power 2270 kw only. ball mill size is 4.4*11 m. grinding media size is max 30mm . it is a monochamber mill designed to grind the roller press product. if we calculate ...\n\n#### horse power calculations to rotate a ball mill in kenya\n\nCalculate The Primary Drive For Ball Mill. To Calculate Ball Mill Drive Hp. The FL ball mill is designed for mill drive lining types and end product in the mill The ball charge mill consists of grinding media in various sizes to ensure Max power is calculated at Ch1 30 of total power …\n\n#### Average Horsepower Of A Ball Mill\n\nthe mill is used primarily to lift the load (medium and charge). Additional power is required to keep the mill rotating. 8.1.3 Power drawn by ball, semi-autogenous and autogenous mills A simplified picture of the mill load is shown in Figure 8.3 Ad this can be used to establish the essential features of a model for mill power.\n\n#### End Mill Speed and Feed Calculator - Martin Chick & Associates\n\nto calculate ball mill drive hp. To calculate ball mill drive hp ball mill designpower calculation the ball mill motor power requirement calculated above as hp is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution the following shows how the size or select the matching mill required to draw this power is\n\n#### Ball Mill Design/Power Calculation\n\nhow do calculate moter h p require for ball millhow do calculate moter h p require for ball mill. 21.03.2009 Multiply MRR times unit hp. (For aluminum, milling,\n\n#### Ball Mill Design/Power Calculation\n\ncalculate horse power of a ball mill. UBC MINE331 Lecture Notes - SAGMILLING . Nov 11, 2015 ... There are three power-based methodologies for sizing SAG mills that are ... Figure 2: Ball mill for determining Bond ball mill work index ..... ically quote motor sizes in US horsepower, so look for a motor of around 10 700 hp.\n\n#### power draw calculator of ball mill - Caesar\n\nThe mill used for this comparison is a 4eter diameter by 13meter long ball mill with a 5000 hp table 2 typical motor calculate horse power of a ball mill small btener precio how do calculate moter hrequire for ball mill he ball mill is key equipment in grinding.\n\n• grinding machin planet power ppg 115\n• how to calculate vertical roller mill table speed formula\n• power driven hammmer mill\n• start tamiya worldcrushersstart up power for ball mill\n• mill fineness test of coal mill in thermal power plant\n• coal mill construction and working in thermal power plant 210mw in india ppt\n• maintenance of coal mill in thermal power plant\n• separator vertical mill formula of power\n• mill pulverized for power station 23269\n• how to calculate p80 ball mill\n• Cement Mill Power Calculation 30587\n• la milpa grinder power mill\n• Specific Power Of Vertical Cement Mill\n• ball mill power calculation pdf\n• power consumption for coal grinding in Indonesia\n• how to calculate pulverisers mills and crushers power requir\n• coal grinding power plant\n• Randell Mills Blacklight Power\n• low power consumption best selling laboraroty ball mills\n• Power Feed Kit Hitorque Mini Mill\n• coal mill pulverizer vertical ball mill for power plant\n• vertical coal mill manufacturer for power plant\n• how to calculate cbm of continuous ball mill 3200 12500 juno\n• power gristmill grinding burrs 5 5 dia\n• ### Grinding Equipment\n\nThe PC Hammer Mill, MTW, MTM Medium Speed Trapezium Mill, Ball Mill and the LM, LUM Vertical Mill cover all requirements of crude, fine and ultrafine powder production in the industrial milling field. Production of free combination from 0 to 2500 meshes can be realized. No matter which industry you are in, chemistry, energy, construction material or metallurgy field, SBM will always meet all your demands." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8408477,"math_prob":0.96732515,"size":407,"snap":"2021-21-2021-25","text_gpt3_token_len":82,"char_repetition_ratio":0.1439206,"word_repetition_ratio":0.08695652,"special_character_ratio":0.1891892,"punctuation_ratio":0.013333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96775913,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T00:47:09Z\",\"WARC-Record-ID\":\"<urn:uuid:5d535a58-dfda-436f-89f8-6a11c9517e9f>\",\"Content-Length\":\"27302\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:189669a8-c3ff-480f-846d-d0999fa48552>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbd98cb5-068d-459e-8f7e-bf8bd104ed8f>\",\"WARC-IP-Address\":\"104.21.55.94\",\"WARC-Target-URI\":\"https://www.cmentarzwawerski.pl/WERmill/6153/calculate_horse_power_of_a_ball_mill.html\",\"WARC-Payload-Digest\":\"sha1:IA4ZW6ZSTS7MKXCLA2467LLONMQMJ536\",\"WARC-Block-Digest\":\"sha1:RNH3PIXRMA7X2C7346NM2MYH6UMC7JH4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487621699.22_warc_CC-MAIN-20210616001810-20210616031810-00626.warc.gz\"}"}
https://ch.mathworks.com/matlabcentral/cody/problems/1043-generate-a-string-like-abbcccddddeeeee/solutions/1814163
[ "Cody\n\n# Problem 1043. Generate a string like abbcccddddeeeee\n\nSolution 1814163\n\nSubmitted on 13 May 2019 by Michael Eldredge\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nn = 1; y_correct = 'a'; assert(isequal(your_fcn_name(n),y_correct))\n\n2   Pass\nr = 2; y_correct = 'abb'; assert(isequal(your_fcn_name(r),y_correct))\n\n3   Pass\nn = 5; y_correct = 'abbcccddddeeeee'; assert(isequal(your_fcn_name(n),y_correct))\n\n4   Pass\nty =18 s_correct ='abbcccddddeeeeeffffffggggggghhhhhhhhiiiiiiiiijjjjjjjjjjkkkkkkkkkkkllllllllllllmmmmmmmmmmmmmnnnnnnnnnnnnnnoooooooooooooooppppppppppppppppqqqqqqqqqqqqqqqqqrrrrrrrrrrrrrrrrrr' assert(strcmp(your_fcn_name(ty),s_correct))\n\nty = 18 s_correct = 'abbcccddddeeeeeffffffggggggghhhhhhhhiiiiiiiiijjjjjjjjjjkkkkkkkkkkkllllllllllllmmmmmmmmmmmmmnnnnnnnnnnnnnnoooooooooooooooppppppppppppppppqqqqqqqqqqqqqqqqqrrrrrrrrrrrrrrrrrr'" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6619143,"math_prob":0.7205998,"size":823,"snap":"2019-51-2020-05","text_gpt3_token_len":323,"char_repetition_ratio":0.30647132,"word_repetition_ratio":0.0,"special_character_ratio":0.22114216,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9737643,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T09:59:40Z\",\"WARC-Record-ID\":\"<urn:uuid:e666b67c-e9dc-47a0-849e-91623b556d82>\",\"Content-Length\":\"74236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb776f71-0ea7-4012-862c-b7c6fc3f329f>\",\"WARC-Concurrent-To\":\"<urn:uuid:860fc4f4-2f5a-4807-bdcd-9cd995c510be>\",\"WARC-IP-Address\":\"104.96.217.125\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/1043-generate-a-string-like-abbcccddddeeeee/solutions/1814163\",\"WARC-Payload-Digest\":\"sha1:GJEBAMJY5YXNVDNRQZM3FITPSXI2PF4Y\",\"WARC-Block-Digest\":\"sha1:PONDT73QBO433ECBEAFZS2OI232UCFT6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250598217.23_warc_CC-MAIN-20200120081337-20200120105337-00053.warc.gz\"}"}
https://www.differenttypes.net/types-of-mathematics/
[ "# 26 Different Types of Mathematics\n\nMathematics, to put it simply, is the study of numbers. But it is not merely numbers. Mathematics also involves structure, space, and change. Mathematics can be studied as its own discipline or can be applied to other field of studies.\n\nApplied mathematics is those which are used in other sciences such as engineering, physics, chemistry, medicine, even social sciences, etc. Pure mathematics on the other hand is the theoretical study of the subject, and practical applications are discovered through its study.\n\n## 26 Types of Math\n\nThe word mathematics was coined by the Pythagoreans in the 6th century from the Greek word μάθημα (mathema), which means “subject of instruction.” There are many different types of mathematics based on their focus of study. Here are some of them:\n\n### 1. Algebra", null, "Algebra is a broad division of mathematics. Algebra uses variable (letters) and other mathematical symbols to represent numbers in equations. It is basically completing and balancing the parts on the two sides of the equation.\n\nIt can be considered as the unifying type of all the fields in mathematics. Algebra’s concept first appeared in an Arabic book which has a title that roughly translates to ‘the science of restoring of what is missing and equating like with like.’ The word came from Arabic which means completion of missing parts.\n\n### 2. Geometry", null, "The word geometry comes from the Greek words ‘gē’ meaning ‘Earth’ and ‘metria’ meaning ‘measure’. It is the mathematics concerned with questions of shape, size, positions, and properties of space.\n\nIt also studies the relationship and properties of set of points. It involves the lines, angles, shapes, and spaces formed.\n\n### 3. Trigonometry", null, "Trigonometry comes from the Greek words ‘trigōnon’ which means ‘triangle’ and ‘metria’ which means ‘measure’. As its name suggests, it is the study the sides and angles, and their relationship in triangles.\n\nSome real life applications of trigonometry are navigation, astronomy, oceanography, and architecture.\n\n### 4. Calculus", null, "Calculus is an advanced branch of mathematics concerned in finding and properties of derivatives and integrals of functions. It is the study of rates of change and deals with finding lengths, areas, and volumes.\n\nCalculus is used by engineers, economists, scientists such as space scientists, etc.\n\n### 5. Linear Algebra", null, "Linear algebra is a branch of mathematics and a subfield of algebra. It studies lines, planes, and subspaces. It is concerned with vector spaces and linear mappings between those spaces.\n\nThis branch of mathematics is used in chemistry, cryptography, geometry, linear programming, sociology, the Fibonacci numbers, etc.\n\n### 6. Combinatorics", null, "The name combinatorics might sound complicated, but combinatorics is just different methods of counting. The word was derived from the word ‘combination’, therefore it is used to combine objects following rules of arranging those objects.\n\nThere are two combinatorics categories: enumeration and graph theory. Permutation, an arrangement where order matters, is often used in both of the categories.\n\n### 7. Differential Equations", null, "As the name suggest, differential equations are not really a branch of mathematics, rather a type of equation. It is any equation that contains either ordinary derivatives or partial derivatives.\n\nThe equations define the relationship between the function, which represents physical quantities, and the derivatives, which represents the rates of change.\n\n### 8. Real Analysis", null, "Real analysis is also called the theory of functions of a real variable. It is concerned with the axioms dealing with real numbers and real-valued functions of a real-variable.\n\nIt is pure mathematics, and is good for people who like plane geometry and proving.\n\n### 9. Complex Analysis", null, "Complex analysis is also called the theory of functions of a complex variable. It deals with complex numbers and their derivatives, manipulation, and other properties. Complex analysis is applied in electrical engineering, when launching satellite, etc.\n\n### 10. Abstract Algebra", null, "Sometimes called modern algebra, abstract algebra is an advanced field in algebra concerning the extension of algebraic concepts such as real number systems, complex numbers, matrices, and vector spaces.\n\nOne application of abstract algebra is cryptography; elliptic curve cryptography involves a lot of algebraic number theory and the likes.\n\n### 11. Topology", null, "Topology is a type of geometry developed in the 19th century. Its name’s Greek origin, which is ‘topos’, means place. Unlike the other types of geometry, it is not concerned with the exact dimensions, shapes, and sizes of a region.\n\nIt studies the physical space a surface unaffected by distortion contiguity, order, and position. Topology is applied in the study of the structure of the universe and in designing robots.\n\n### 12. Number Theory", null, "Number theory, or higher arithmetic, is the study of positive integers, their relationships, and properties. It is sometimes referred to as “The Queen of Mathematics” because of its foundational function in the subject.\n\n[crp]\n\n### 13. Logic", null, "Logic is the discipline in mathematics that studies formal languages, formal reasoning, the nature of mathematical proof, probability of mathematical statements, computability, and other aspects of the foundations of mathematics.\n\nIt aims to eliminate any confusion that can be caused by the vagueness of the natural language.\n\n### 14. Probability", null, "Probability is the branch of mathematics calculating the chances of some things to take place based on the number of the possible cases to the whole number of cases possible. Numbers from 0-1 are used to express the chances of something to occur.\n\n0 means it can never happen and 1 means it will always happen. Real-life applications are in gambling, lottery, sports analysis, games, weather forecasting, etc. Even the chance of an earthquake or a volcano erupting are given a probability.\n\nRead Also: Different Types of Magma\n\n### 15. Statistics", null, "Statistics are the collection, analysis, measurement, interpretation, presentation and summarization of data. Statistics is used in many fields such as business analytics, demography, epidemiology, population ecology, etc.\n\n### 16. Game Theory", null, "Game theory is a branch of mathematics which also involves psychology, economics, contract theory, and sociology. It analyses strategies for dealing with competitive strategies where the outcome also depends on other actions of other partaker in the activity.\n\nIt is applied in business, wars, political sciences, biology, philosophy, etc.\n\n### 17. Functional Analysis", null, "Functional analysis is under the field of mathematical analysis. Its foundation is the study of vector spaces that has limit-related structure such as topology, inner product, norm, etc.\n\nIt was developed through the study of functions and the formulation of properties of transformation. Functional analysis is found to be useful for differential and integral equations.\n\n### 18. Algebraic Geometry", null, "Algebraic geometry is a branch of mathematics that uses algebraic expressions to describe geometric properties of structures.\n\n### 19. Differential Geometry", null, "Differential geometry is a field in mathematics that utilizes different mathematical techniques (differential calculus, integral calculus, linear algebra, and multilinear algebra) to study geometric problems.\n\nIt is used in different studies of electromagnetism, econometrics, geometric modeling, digital signal processing in engineering, study of geological structures.\n\n### 20. Dynamical Systems (Chaos Theory)", null, "Dynamical Systems (also referred to as chaos theory) is a mathematical concept where the relationship of a point in space to time is described a fixed set of rules. This concept explains the swinging of a clock pendulum, flow of water in a pipe, number of fish in a lake during springtime, etc.\n\n### 21. Numerical Analysis", null, "Numerical analysis is an area in mathematics which develops, evaluates, and applies algorithms for numerically solving problems that occur throughout the natural sciences, social sciences, medicine, engineering and business.\n\n### 22. Set Theory", null, "Set theory is a discipline in mathematics that is concerned with the formal properties of a well-defined set of objects as units (regardless of the nature of each element) and using set as a means of expression of other branch of math.\n\nEvery object in the set has something similar or follows a rule, and they are called the elements.\n\n### 23. Category Theory", null, "Category theory is a formalism that is used for representing and manipulating concepts and symbolic representations of domains. Here, the collection of objects and of arrows formalizes mathematical structure.\n\n### 24. Model Theory", null, "Model theory in mathematics is the study of different structures from a logical standpoint. It involves interpretation of formal and natural languages and the kinds of classifications they can make.\n\n### 25. Mathematical Physics", null, "Mathematics as mentioned earlier is used in many different other fields. Physics is just one of them. Mathematical physics refers to the mathematical methods applied for different studies and development in physics.\n\n### 26. Discrete Mathematics", null, "Unlike the many other ones mentioned above, discrete mathematics is not a branch, but a description of the study of mathematical structures that are discrete rather than continuous.\n\nDiscrete objects, in simple languages, are the countable objects such as integers. Therefore, discrete mathematics does not include calculus and analysis.\n\n### 10 thoughts on “26 Different Types of Mathematics”\n\n2. I have a question. A teacher of mine discussed a class she took in college converting language into mathematics. She mentioned the name of the class, but I suffer from TBI and forget details. I remember her saying all of language is mathematics and not just the operative words. I would like to read up on that.\n\n4. i’m going for set theory\n\n5. If you think about it, in language there are nouns and adjectives to describe things and algebra, topology, set theory, describe things. There are also verbs and adjectives to describe actions and that’s more like calculus and analysis. So, yeah, language is lime mathematics except np ore general, less precise . My BA degree was in English but after graduating I went back to school a few years later and studied undergraduate and graduate level mathematics but then life got in the way and I never finished graduate school. But I love both mathematics and language and have always thought about of them as quite similar. I spent most of my working life as a computer programmer and now that I’m retired I still spent at least part of every day messing about with math.\n\n6. In Section 6, you use the word “in” instead of “it”:\n\ntherefore in is used to combine objects following rules of arranging those objects." ]
[ null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Algebra.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Geometry-300x200.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Trigonometry-300x192.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Calculus.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Linear-Algebra.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Combinatorics-300x133.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Differential-Equations.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Real-Analysis-300x150.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Complex-Analysis-300x225.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Abstract-Algebra-300x175.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Topology-233x300.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Number-Theory-300x246.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Logic-300x210.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Probability.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Statistics.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Game-Theory-300x214.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Functional-Analysis-300x225.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Algebraic-Geometry-300x300.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Differential-Geometry.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Dynamical-Systems-Chaos-Theory.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Numerical-Analysis-265x300.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Set-Theory.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Category-theory-300x200.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Model-Theory-300x286.png", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Mathematical-Physics.jpg", null, "https://www.differenttypes.net/wp-content/uploads/2016/09/Discrete-Mathematics-300x198.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9422695,"math_prob":0.91164696,"size":11258,"snap":"2023-40-2023-50","text_gpt3_token_len":2205,"char_repetition_ratio":0.14146082,"word_repetition_ratio":0.01777523,"special_character_ratio":0.1895541,"punctuation_ratio":0.13000494,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99510676,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T06:26:06Z\",\"WARC-Record-ID\":\"<urn:uuid:01f5fb07-09c2-4cfe-9eff-414c5b0b7c17>\",\"Content-Length\":\"128522\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:557392f0-5f52-4e27-af63-e9a9b3bb709a>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e766af7-629d-4a08-8c18-dc418e5ded91>\",\"WARC-IP-Address\":\"172.67.153.97\",\"WARC-Target-URI\":\"https://www.differenttypes.net/types-of-mathematics/\",\"WARC-Payload-Digest\":\"sha1:IOCRWN4R7NOXQRAUAUJC3SD7JGK2KS3I\",\"WARC-Block-Digest\":\"sha1:MXOJA3GM252IMREIXVFKXCELPLX62HZ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510498.88_warc_CC-MAIN-20230929054611-20230929084611-00058.warc.gz\"}"}
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Delftse_Foundations_of_Computation/03%3A_Sets_Functions_and_Relations/3.06%3A_Counting_Past_Infinity/3.6.02%3A_Counting_to_infinity
[ "Skip to main content\n\n# 3.6.2: Counting to infinity\n\nSo far, we have only discussed finite sets. N, the set of natural numbers {0, 1, 2, 3, . . . }, is an example of an infinite set. There is no one-to-one correspondence between N and any of the finite sets Nn. Another example of an infinite set is the set of even natural numbers,E = {0,2,4,6,8,...}. There is a natural sense in which the sets N and E have the same number of elements. That is, there is a one-to-one correspondence between them. The function f : N → E defined by f(n) = 2n is bijective. We will say that N and E have the same cardinality, even though that cardinality is not a finite number. Note that E is a proper subset of N. That is, N has a proper subset that has the same cardinality as N.\n\nWe will see that not all infinite sets have the same cardinality. When it comes to infinite sets, intuition is not always a good guide. Most people seem to be torn between two conflicting ideas. On the one hand, they think, it seems that a proper subset of a set should have fewer elements than the set itself. On the other hand, it seems that any two infinite sets should have the same number of elements. Neither of these is true, at least if we define having the same number of elements in terms of one-to-one correspondence.\n\nA set A is said to be countably infinite if there is a one-to-one correspondence between N and A. A set is said to be countable if it is either finite or countably infinite. An infinite set that is not countably infinite is said to be uncountable. If X is an uncountable set, then there is no one-to-one correspondence between N and X.\n\nThe idea of ‘countable infinity’ is that even though a countably infinite set cannot be counted in a finite time, we can imagine counting all the elements of A, one-by-one, in an infinite process. A bijective function f : N → A provides such an infinite listing:( f (0), f (1), f (2), f (3), . . . ). Since f is onto, this infinite list includes all the elements ofA. In fact, making such a list effectively shows that A is countably infinite, since the list amounts to a bijective function from N to A. For an uncountable set, it is impossible to make a list, even an infinite list, that contains all the elements of the set.\n\nBefore you start believing in uncountable sets, you should ask for an example. In Chapter 3, we worked with the infinite sets Z (the integers), Q (the rationals), R (the reals), and R ∖ Q (the irrationals). Intuitively, these are all ‘bigger’ than N, but as we have already mentioned, intuition is a poor guide when it comes to infinite sets. Are any of Z, Q, R, and R ∖ Q in fact uncountable?\n\nIt turns out that both Z and Q are only countably infinite. The proof that Z is count- able is left as an exercise; we will show here that the set of non-negative rational numbers is countable. (The fact that Q itself is countable follows easily from this.) The reason is that it’s possible to make an infinite list containing all the non-negative rational numbers. Start the list with all the non-negative rational numbers n/m such that n + m = 1. There is only one such number, namely 0/1. Next come numbers with n + m = 2. They are0/2 and 1/1, but we leave out 0/2 since it’s just another way of writing 0/1, which is already in the list. Now, we add the numbers with n + m = 3, namely 0/3, 1/2, and2/1. Again, we leave out 0/3, since it’s equal to a number already in the list. Next come numbers with n + m = 4. Leaving out 0/4 and 2/2 since they are already in the list, we add 1/3 and 3/1 to the list. We continue in this way, adding numbers with n + m = 5, then numbers with n + m = 6, and so on. The list looks like\n\n$$\\left(\\frac{0}{1}, \\frac{1}{1}, \\frac{1}{2}, \\frac{2}{1}, \\frac{1}{3}, \\frac{3}{1}, \\frac{1}{4}, \\frac{2}{3}, \\frac{4}{2}, \\frac{4}{5}, \\frac{1}{5}, \\frac{5}{1}, \\frac{1}{6}, \\frac{2}{5}, \\ldots\\right)$$\n\nThis process can be continued indefinitely, and every non-negative rational number will eventually show up in the list. So we get a complete, infinite list of non-negative rational numbers. This shows that the set of non-negative rational numbers is in fact countable.\n\n• Was this article helpful?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9454786,"math_prob":0.9896897,"size":4142,"snap":"2021-21-2021-25","text_gpt3_token_len":1114,"char_repetition_ratio":0.15877235,"word_repetition_ratio":0.05613577,"special_character_ratio":0.27788508,"punctuation_ratio":0.14330874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99745774,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-17T23:34:00Z\",\"WARC-Record-ID\":\"<urn:uuid:fd7df124-2214-4707-a756-92b06ae4d2c8>\",\"Content-Length\":\"92112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a62b011-ac3b-43e4-be4c-ee3b2a8ed256>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae3097ef-b204-4193-ae58-2e4e2b9fbe22>\",\"WARC-IP-Address\":\"13.249.43.86\",\"WARC-Target-URI\":\"https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Delftse_Foundations_of_Computation/03%3A_Sets_Functions_and_Relations/3.06%3A_Counting_Past_Infinity/3.6.02%3A_Counting_to_infinity\",\"WARC-Payload-Digest\":\"sha1:44QKDTYHIT5COLOGXRLXCTRO65IRQBWC\",\"WARC-Block-Digest\":\"sha1:S64LGQLCWIT4ENEPEH6V6W3WQW4CXSVV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487634576.73_warc_CC-MAIN-20210617222646-20210618012646-00203.warc.gz\"}"}
https://www.jiskha.com/similar?question=Solve+using+the+elimination+method.+Show+your+work.+If+the+system+has+no+solution+or+an+infinite+number+of+solutions%2C+state+this.+2x+%2B+10y+%3D+58+-14x+%2B+8y+%3D+140&page=140
[ "Solve using the elimination method. Show your work. If the system has no solution or an infinite number of solutions, state this. 2x + 10y = 58 -14x + 8y = 140\n\n92,096 questions, page 140\n1. Chemistry (concentration)\n\nH2SO4(aq) + 2LiOH(aq) → Li2SO4(aq) + 2H2O(l) 3.2 moles of LiOH enter the reaction and are fully consumed. The molar mass of Li2SO4 is 110 g/mol. The final aqueous solution has a volume of 4.0 L. What is the final concentration of Li2SO4 in g/L? 44\n\nasked by Summer on April 11, 2018\n2. CHEM-KSP/KA PROBLEMS\n\nreally don't quite understand how to set up a caluculation to figure out whether a precipitate will form and i'm struggling with the second half too! When 25.75 mL of 0.00826 M lead II nitrate is mixed with 75.10 mL of 0.0183 M sodium chloride solution,\n\nasked by Sigurd on April 8, 2008\n3. physics\n\n2 forces pull horizontally on a heavy box. force b is directed 20 west of north and pulls twice as hard as force a. the resultant of these two pulls is 400N directly northward. using vector components to show two possible solutions to this problem: a. what\n\nasked by courtney on August 21, 2015\n4. chem lab\n\nwhen an aqueous solution suspected of containing BA2+ or Pb2+ or both is mixed with sulfuric acid a precipitate forms. In another test, when the original solution is mixed with an aqueous solution of sodium sulfide no precipitate forms. What do these two\n\nasked by natash on March 25, 2008\n5. Chemistry\n\nIn a student's experiment, when 40 mL of 6% (w/w) hydrogen peroxide solution was reacted with 10 mL 0.5 M FeCl3 solution, a ΔrH of -3,382 J was calculated. Calculate ΔH for this reaction in kJ mol-1. Give your answer to 2 decimal places, do not include\n\nasked by Anon on April 19, 2016\n6. Chemistry\n\nA buffer solution is prepared by adding 14.37g of NaC2H3O2 and 16.00g of acetic acid to enough water to make 500mL of solution. What is the initial concentration of C2H3O2 in the solution? Ka(HC2H3O2)=1.8e-5 HC2H3O2 = H3O+ + c2h3O2 the molar mass of\n\nasked by Hannah on April 14, 2012\n7. science(Chemistry)\n\nCalculate the concentration of the NaOH solution. If 30,0cm3 of a standard oxalic acid[(COOH)2] solution of concentration 0,5 mol.dm-3,is used to neutralize 25,0 cm3 of a sodium hydroxide (NaOH) solution. The balanced chemical equation for the reation is\n\nasked by Ntombikayise on July 14, 2011\n8. Computer Science\n\nI need to write a program to solve for the following: Suppose that the file USPRES.txt contains the names fo the frist 43 United States presidents and the file USSENATE.TXT contains the names of all former and present US Senators. Write a program with\n\nasked by Wally on March 4, 2011\n9. Chemistry\n\nCalculate the molar concentration for each of the following. a)The concentration of a brine solution containing 225 g for pure sodium chloride dissolved in 4.00 L of solution. b)The volume that must be added to 4.2 mol of magnesium chloride to make a 0.045\n\nasked by Lyn on November 22, 2015\n10. Estimating DecimalsProducts and Quotients\n\nDetermine whether each product or quotient is resonable.If not resonable,find a resonable result. 1. 62.77(29.8)=187.0546 2.16.132/2.96=54.5 The numbers are correct except for the decimal places. The first one looks like 60*30 or 1800, so determine where\n\nasked by Bowier4 on October 9, 2006\n11. Math\n\nEvan is painting a stool that has a cylindrical seat and four rectangular prism legs. The seat has a diameter of 8 inches and a height of 2 inches. Each leg is 2 inches by 2 inches by ten inches. How much area will Evan have to paint? Use three point one\n\nasked by Matthew on January 20, 2012\n12. Precalculus\n\na farmer has available 1032 feet of fencing and wishes to enclose a rectangular area. If x represents the width of the rectangle for what value of x is the area the largest A) 256.5 feet b) 258 feet c) 256 feet d) 257 feet please show work! i need help\n\nasked by Help now fast due tommorow!!!!!!!!!!!!!!!!!!!!!!!! on March 29, 2010\n13. Precalculus\n\na farmer has available 1032 feet of fencing and wishes to enclose a rectangular area. If x represents the width of the rectangle for what value of x is the area the largest A) 256.5 feet b) 258 feet c) 256 feet d) 257 feet please show work! i need help\n\nasked by Help now fast due tommorow on March 29, 2010\n14. Precalculus help fast\n\na farmer has available 1032 feet of fencing and wishes to enclose a rectangular area. If x represents the width of the rectangle for what value of x is the area the largest A) 256.5 feet b) 258 feet c) 256 feet d) 257 feet please show work! i need help\n\nasked by Arielt on March 29, 2010\n15. computer programming\n\n: Create a program like KBC game show where program ask user 3 different question one after another and the user should answer them. If the answer are correct then pop up the message as “you got the right answer” and must ask user does he/she wants to\n\nasked by ravi on January 21, 2012\n16. Algebra\n\nThe owner of a hair salon charges \\$20 more per haircut than the assistant. Yesterday the assistant gave 12 haircuts. The owner gave 6 haircuts. The total earnings from haircuts were \\$750. Hw much does the owner charge for a haircut? Solve by writing and\n\nasked by Cierra on February 24, 2013\n17. Alg 1\n\nThe owner of a hair salon charges \\$20 more per haircut than the assistant. Yesterday the assistant gave 12 haircuts. The owner gave 6 haircuts. The total earnings from the haircuts were \\$750. How much does the owner charge for a haircut? Solve by writing\n\nasked by Emma on January 30, 2012\n18. math\n\nPlease help, 1) How do you solve 2017^2017 without a calculator? 2) What is the last digit of the solution to this problem: 2017^(2017^2017)? Thank you!\n\nasked by Onet on March 10, 2017\n19. math\n\nIn a concert hall, 60% of the seats were taken. 520 seats were empty. What is the total number of seats in the concert hall? Show your work\n\nasked by mav on October 1, 2018\n20. math\n\nThe denominator of a fraction is 4 more than the numerator. If both the denominator and the numerator are increased by 1, the resulting fraction equals 1/2. What is the original fraction? **Need to show all work** Thanks for the help.\n\nasked by Dylan on March 26, 2011\n21. Geometry\n\nHow much more pizza is in a 12-in. diameter pizza than an 8-in. diameter pizza? Round your answer to the nearest hundredth. Be sure to show all work and label your answer!\n\nasked by Maria on April 7, 2017\n22. Algebra\n\nThe height of the flagpole is three fourths the height of the school. The difference in their heights is 4.5 m. What's the height of the school? Choices for school's height: 18, 20, 24. Please show work\n\nasked by Anonymous on August 23, 2012\n23. algebra\n\nIn 2020 a​ city's budget is projected to be ​\\$24 million. This is approximately 12/7 of the 2000 budget. What was the​ city's budget in​ 2000? please show all work\n\nasked by merna on September 13, 2016\n24. Algebra I\n\nThe cost of 4 scarves and 6 hats is \\$52. The cost of two hats is \\$1 more than the cost of one scarf. What are the costs of one scarf and one hat? Please show me how to set this up and work it.\n\nasked by Jane on February 21, 2013\n25. Algebra\n\nThe height of the flagpole is three fourths of the height of the school. The difference in their heights is 4.5 m. What's the height of the school? Choices for school's height: 18, 20, 24. Please show work\n\nasked by Anonymous on August 23, 2012\n26. Math\n\nSolve for x 2^x+1 - 2^x = 112 Put this in exponent form: (3square root of x) ( square root of 3 cubed) Put into equation form: Horizontal stretch of 2, up 3, left 6, reflection on x-axis. Please help me with these three so I can work on the rest using\n\nasked by Nancy on April 18, 2013\n27. Math- fraction equations\n\nHow would I solve this equation? d-1 = -1 1 ____ ___ 4 2 (d-1) divided by 4 = -1 and 1/2 Thanks, Ok, I'll set up your problem this way: (d-1)/4 = -1 1/2 -1 1/2 is the same as -3/2. Therefore: (d-1)/4 = -3/2 Multiply each fraction by a common denominator,\n\nasked by Sarah on March 6, 2007\n28. gr 11 Math\n\nSolve for x 2^x+1 - 2^x = 112 Put this in exponent form: (3square root of x) ( square root of 3 cubed) Put into equation form: Horizontal stretch of 2, up 3, left 6, reflection on x-axis. Please help me with these three so I can work on the rest using\n\nasked by Nancy on April 18, 2013\n29. passing on species\n\nI have to write an essay. I need some suggests as to where to start or some ideas as to what to write about. The topic is: One main biological objective for any living organism is to continue its species by reproducing and passing along genetic info.\n\nasked by Liz on June 27, 2009\n30. Co-op\n\nI have to writeabout the work setting and work environment of a pharmacy technician... I am allowed to copy and paste from a website as long as you reference it so I did... but all in all this is what I wrote.: Work environment: Pharmacy technicians work\n\nasked by Katelyn on January 6, 2013\n31. UIC\n\nGiven a diprotic acid, H2A, with two ionization constants of Ka1 = 3.57× 10–4 and Ka2 = 3.14× 10–12, calculate the pH and molar concentrations of H2A, HA–, and A2– for each of the solutions below. (a) a 0.182 M solution of H2A (b) a 0.182 M\n\nasked by Anonymous on October 18, 2015\n32. Chemistry\n\nthe lead (II) nitrate in 25.49 ml of a 0.1338M solution reacts with all of the aluminum sulfate in 25.00 ml solution. What is the molar concentration of the aluminum sulfate in the original aluminum sulfate solution? For a question like this how do I find\n\nasked by CL on February 23, 2017\n33. english letter\n\nHello everybody! Today a I have to write a complain letter to \"my\"boss.I should write about my difficulties,and about what's bothering me. In the letter I should propose a solution as well. The situation is: Me(Julie,47 years old)has worked as a secretary\n\nasked by Lulu on April 14, 2010\n\nloga(x-5)-loga(x+9)=loga(x-7)-loga(x+9) all the a's are lower please show work\n\nasked by anonymous on January 9, 2013\n35. physics\n\nplease show me a simple formula to solve this find the final equilibirum temperature when 10.0 g of milk at 10.0degC is added to 1.60 * 10^2 g of coffee with a temperature of 90.0degC. assume the specific heats of coffee and milk are the same as for water\n\nasked by Bella on February 22, 2009\n36. chemistry\n\nconsider a crystallization of sulfanilamide in which 10 mL of hot 95% ethyl alcohol is added to 0.10 g of impure sulfanimide. after the solid has dissolved, the solution is cooled to room temp. and then placed into an ice-water bath. no crystals for, even\n\nasked by BOB on October 7, 2008\n37. Math\n\nI have a math question I am stuck on. It is on the applications of linear systems. This is the question: A club raised 5000\\$ for a trip, they invested part of the money into a savings account with 4% interest, they put the rest of the money into a\n\nasked by Vanessa on December 10, 2018\n38. chemistry\n\nA solution of iron(III) thiocyanate was produced by dissolving 7.98 milligrams solid iron(III) thiocyanate in enough water to produce a 100.00 mL solution. What is the concentration, in molarity, of the solution?\n\nasked by Anonymous on November 6, 2013\n39. Chemistry\n\nA saturated solution of Mg(OH)2 is prepared having a large excess of Mg(OH)2. Sn(NO3)2 is added to the solution. Ksp = 1.8 10-11 for Mg(OH)2 and Ksp = 5.1 10-26 for Sn(OH)2. (a) What [Sn2+] is required to start the precipitation of Sn(OH)2? (b) What [Sn2+]\n\nasked by Gregory on March 14, 2013\n40. Chemistry\n\nConsider the titration of 50 mL of 0.250 M HCl with 0.1250 M NaOH. Calculate the pH of the resulting solution after the following volumes of Na OH have been added. a) 0.00 mL b) 50.00 mL c) 99.90 mL d) 100.00 mL e) 100.1 mL The question is, I get, for\n\nasked by George on April 27, 2013\n41. Chemistry\n\nPowdered AgOH is slowly added to 2.0 L of a 0.020 M solution of NaOH until no more AgOH will dissolve. If Ksp = 1.4 x 10 -8 for AgOH: a. What is [Ag +] in the saturated solution? b. What mass of AgOH had to be added to the solution to produce this\n\nasked by joseph lungu on April 19, 2018\n42. chemistry\n\nihave two solutions. in the first solution , 1.0 moles of sodium chloride is disslved to make 1.0 liters of solution . in the second one, 1.0 moles of sodium chlorine is added to 1.0 liters of water.is the molarity of each solution the same?\n\nasked by wendy on February 13, 2012\n43. biology a\n\nRead the scenario. A cell contains 90% water and 10% salt. The external solution contains 75% water and 25% salt. Which statement is correct? A>> The external solution is hypotonic, so the cell will shrink B>> The external solution is hupotonic, so the\n\nasked by Rylee on February 12, 2018\n44. Physics\n\nTo supply the plumbing system of a New York office building, water needs to be pumped to a tank on the roof, where its height will provide a \"head\" of pressure for all the floors. The vertical height between the basement pump and the level of the water in\n\nasked by Steven on June 15, 2012\n45. Physics\n\nThe method by which the Mars exploration rovers, Spirit and Opportunity, landed on the surface of Mars in January of 2004 was quite elaborate. The rovers began their descent through the thin Martian atmosphere on a parachute until they reached an altitude\n\nasked by Frances on September 7, 2007\n46. Foreign languages\n\nCan you please check these sentences, Writeacher? 1) We...... (sit) in the sun for ... when she (tell) us to get inside. 2)We had been sitting in the sun for about half an hour when she told us to get inside. 3) Peter confessed to eating all the cake.\n\nasked by John on July 2, 2012\n47. Chemistry\n\nA 2.80-g sample of haematite ore containing Fe3+ ions was dissolved in a concentrated acid and the solution was diluted to 250 mL. A 25.0 mL aliquot was reduced with Sn2+ to form a solution of Fe2+ ions. This solution of Fe2+ ions required26.4mL of a\n\nasked by Luke on August 23, 2016\n48. Physics\n\nWhen one gallon of gasoline is burned in a car engine, 1.20 108 J of internal energy is released. Suppose that 0.99 108 J of this energy flows directly into the surroundings (engine block and exhaust system) in the form of heat. If 5.0 105 J of work is\n\n49. physics\n\nIn another solar system, a 100,000 kg spaceship and a 200,000 kg spaceship are connected by a motionless 90 m long tunnel. The rockets start their engines at the same time, and each produces 50,000 N of thrust in opposing directions. What is the system's\n\nasked by Anonymous on December 11, 2012\n50. chemistry\n\nA stock solution was prepared by dissolving exactly 0.4000 g of pure ASA (180.16 g/mol) in 10.00 mL of NaOH and heating the solution to a gentle boil. After cooling to room temperature, the solution was poured into a 250-mL volumetric flask and diluted to\n\nasked by Tasha on January 28, 2010\n51. Chemistry\n\nA solution is made by mixing 12.0 g of NaOH and 75.0 ml of 0.200 M HNO3 a) write a balanced equation for the reaction that occurs between the solutes. b) Caclulate the concentration of each ion remaining in solution. c) Is the resultant solution acidic or\n\nasked by Sara on September 18, 2007\n52. CHEMISTRY\n\nA solution is made by mixing 12.0 g of NaOH and 75.0 ml of 0.200 M HNO3 a) write a balanced equation for the reaction that occurs between the solutes. b) Caclulate the concentration of each ion remaining in solution. c) Is the resultant solution acidic or\n\nasked by Sara on September 18, 2007\n53. chemistry\n\nIn a volumetric analysis experiment, a solution of sodium oxalate (Na2C2O4) n acidic solution is titrated with a solution of potassium permanganate (KMnO4) according to the following balanced chemical equation: 2KMnO4(aq)+8H2SO4(aq)+5Na2C2O4(aq)->\n\nasked by Monteshia on November 17, 2014\n54. Chemistry\n\n1)A buffer solution that is 0.10 M sodium acetate and 0.20 M acetic acid is prepared. Calculate the initial pH of this solution. The Ka for CH3COOH is 1.8 x 10-5 M. As usual, report pH to 2 decimal places. *A buffered solution resists a change in pH.*\n\nasked by JUNDY on March 10, 2015\n55. SS\n\nWhat is the difference in the mission system used by the spanish and the empresario system? A. The Spanish sent out missionaries sponsored by the state and the empresario system had empresarios that settled on their own land. ** B. Both the Spanish and\n\nasked by Check my answer on October 17, 2017\n56. College Chemistry\n\nPhosphoric Acid is usually obtained at an 85% phosphoric acid solution, by mass. If this solution is 15 M, what is the density of the solution? Express your answer in g/ml\n\nasked by Tabatha on September 14, 2010\n57. algebra\n\n1. is (0,3) a solution to the equation y=x+3 (1 point) yes** no 2. is (1,4) a solution to the equation y=-2x (1 point) yes no** 3. (4,0),(3,-1),(6,3),(2,-4) Which are solution to y=x-4?Choose all correct answers. (2 points) (6,3) (4,0)** (3,-1) (2,-4)** Im\n\nasked by blah on August 28, 2014\n58. algebra\n\nAn industrial chemist will mix two acid solutions. One is 37% solution and the other is a 27% solution. How many liters of each should be mixed to get 50 liters of a 36% acid solution?\n\nasked by domo on May 23, 2014\n\nA 15.0mL sample of an H2SO4 solution is titrated with 24.0 mL of a 0.245 Molarity NaOH solution. What is the molarity of the H2SO4 solution?\n\nasked by heather on April 21, 2016\n60. chemistry\n\nwhat is the molarity of the solution obtained by mixing 25.0 ml of a 3.00 M methyl alcohol solution with 225.0 ml of a 0.100 M methyl alcohol solution?\n\nasked by mitch on December 13, 2012\n61. Chemistry (concentration)\n\nA 200 mL NaCl solution with a concentration of 4.0 g/L is mixed with a 600 mL solution containing 8% NaCl (m/v). What is the final concentration of salt in the solution in g/L?\n\nasked by Summer on March 24, 2018\n62. Chemistr\n\nTo 10ml of a 0.10 acetic acid solution a 0.10M NaOH solution is added what is ph of the solution when 9.5ml of NaOH have been added\n\nasked by Juma on January 8, 2018\n63. Math\n\nAt the first tri-city meeting, there are 8 people from town A, 7 people from town B, and 5 people from town C. If a council consisting of 5 people is randomly selected, find the probability that 3 are from town A and 2 are from town B. How would I show my\n\nasked by Punkie on December 6, 2009\n64. Computer Science\n\n0 down vote favorite I have to print: Ain't no sunshine when she's gone It's not warm when she's away. Ain't no sunshine when she's gone And she's always gone too long Anytime she goes away. How would I used methods to print these no loops? Would it be\n\nasked by Jake on September 26, 2015\n65. chemistry\n\nhow many grams of sodium carbonate are needed to prepare 0.250 L of an 0.100 M aqueous solution of sodium ions? well normally you go moles=grams/molar mass than M= moles/L but when looking for grams? what do you do........multiple .250 by .100 than divide\n\nasked by STACY on December 14, 2010\n\nSuppose you are in the market for a new home and are interested in a new housing community under construction in a different city. a) The sales representative informs you that there are two floor plans still available, and that there are a total of 70\n\nasked by KiKi on March 22, 2010\n67. MGT HELP ASAP\n\n1. Read the following scenario: Dan is an employee at a regional mid-size computer company that has recently been sold to a larger, national computer manufacturing corporation. Dan is in charge of computer production orders, and because of this recent\n\nasked by troyer0269 on November 23, 2008\n68. science\n\nA radio wave travels through space with a frequency of 2 x 104 Hz. If the speed of the radio wave is 3 x 108 m/s, what is the wavelength of this wave? 6 x 1012 m 1.5 x 104 m 1 x 104 m 6.7 x 10-4 m Can someone show me how to solve this?\n\nasked by dianni on January 23, 2019\n69. chemistry Molarity\n\nYou have 353 mL of a 1.25 M potassium Chloride solution, but you need to make a 0.50 M potassium Chloride solution. How many milliliters of water must you add to the original 1.25 M solution to make the 0.50 M potassium Chloride solution\n\nasked by Kevin on December 11, 2010\n70. chemistry\n\nWhich solution would show the GREATEST change in pH on the addition of 10.0 mL of 1.0 M NaOH to 1.0 L of each of the following solutions? a. 0.50 M CH3COOH + 0.50 M NaCH3COO. b. 0.10 M CH3COOH + 0.10 M NaCH3COO. c. 0.50 M CH3COOH. d. 0.10 M CH3COOH. e.\n\nasked by wite2khin on May 13, 2010\n71. Java\n\n1 public class testOperators 2 { 3 public static void main(String[] args) 4 { 5 int x; 6 int y = 12 7 double z = 13.0; 8 x = 14 9 System.out.println(“x + y + z = “ + (x + y +z)); 10 x += y; 11 y--; 12 z++; 13 z *= x; 14 System.out.println(“x + y + z\n\nasked by SCCC on January 31, 2009\n72. Health Care Fnancial Accounting\n\nthe local school system asks to submit a proposal to do pre-employment physicalsfor 60 bus drivers. What financial or accounting informaton is needed to submit the proposal? what shall I charge the school system?\n\nasked by Jerry on August 19, 2010\n73. english\n\nCan someone please tell me how this paragraph would be rated? Literary technique paragraph using the second paragraph of the Jan 2011 regent. In passage 2, the author Marge Piercy uses similie to portray the hard work people he loves best do. Similes\n\nasked by Naviva on June 16, 2011\n74. CHEMISTRY\n\nat 1500 celcius the system 2NON2+O2 was allowed to come to equilibrium. the equilibrium concentrations were NO=0.00035M N2=0.040M O2=0.040M What is the value of Kc for the system at this temperature? The answer they give is 1.3 x10^4 Im getting 2.2 x10^-1\n\nasked by HELPP ME on October 26, 2011\n75. physics\n\nIn the figure the coefficient of static friction between mass (MA) and the table is 0.40, whereas the coefficient of kinetic friction is 0.28 ? part a) What minimum value of (MA) will keep the system from starting to move? part b) What value of(MA) will\n\nasked by ally on May 7, 2012\n76. pre ap chem\n\ndetermine the partial pressure of hydrogen if it's collected over water at 23 degrees celsius and the system has a total pressure of 724 mmHg then determine the mole fraction of water vapor in the system\n\nasked by g on March 6, 2013\n\nIn the figure the coefficient of static friction between mass (MA) and the table is 0.40, whereas the coefficient of kinetic friction is 0.28 ? part a) What minimum value of (MA) will keep the system from starting to move? part b) What value of(MA) will\n\nasked by ally on May 8, 2012\n78. physics\n\nIn the figure the coefficient of static friction between mass (MA) and the table is 0.40, whereas the coefficient of kinetic friction is 0.28 ? part a) What minimum value of (MA) will keep the system from starting to move? part b) What value of(MA) will\n\nasked by ally on May 8, 2012\n\nIn the figure the coefficient of static friction between mass (MA) and the table is 0.40, whereas the coefficient of kinetic friction is 0.28 ? part a) What minimum value of (MA) will keep the system from starting to move? part b) What value of(MA) will\n\nasked by ally on May 8, 2012\n80. Physics\n\nConsider the system of capacitors shown in the figure below (C1 = 5.00 µF, C2 = 7.00 µF). Find the equivalent capacitance of the system. Find the charge on each capacitor. Find the potential difference across each capacitor. Find the total energy stored\n\nasked by chuck on September 22, 2010\n81. Kaplan University\n\nCreate a mathematical model for determining the total costs involved in driving from Atlanta to New York City. Be sure to think critically about all possible costs included in the trip and include them in your model. Assume that you have \\$2,000 available\n\nasked by Anonymous on June 16, 2010\n82. Geography\n\n5. Although politically united with England since 1907, Scotland has retained all of the following except its own a. system of laws b. religion c. parlimentary system d. system of education I think this can either be A or C 6. Most of the Nordic region has\n\nasked by mysterychicken on September 16, 2009\n83. Chemistry\n\nNeed help with this lab: (sorry so long) Hypothesis: The amount of product will be regulated by the limiting reactant. Procedure: First, I took two test tubes from the Glassware shelf. I added 6 mL of the copper sulfate solution to one, and 6 mL of the\n\nasked by anonymous on November 14, 2009\n84. programming\n\nSuppose that you can select the pivot of the quick sort algorithm with three methods: you can select the first item, last item, or middle item as the pivot. Indicate the method(s) that gives the most efficient result to sort the following arrays. Note that\n\nasked by juju on May 6, 2014\n85. Algebra\n\nHow do I answer this question/problem? I can't input the following scientific notations in a calculator and would like to know how to solve it. Thank you tutors. The four other recognized dwarf planets in the solar system and their estimated masses in\n\nasked by anon on December 2, 2015\n86. Algebra\n\nAn object is dropped from a height of 1000 cm above the ground after t seconds is given by the formula below- d = 10tsquare + 1000 How far above the ground is the object, when it has fallen for 6 seconds? Please show your work.\n\nasked by Shipra on December 6, 2016\n\n1.) values of f(t) are given in the following table: t 0 2 4 6 8 10 f(t) 137 112 88 68 49 34 Estimate (f prime) f^' (2) and f^' (8) Please show work so I can understand. I'm really stuck. if the table comes out messed up, the values are: (0, 137) (2, 112)\n\nasked by JJ on February 22, 2012\n88. algebra\n\nUse the five steps for problem solving to answer the following question. Please show all of your work. The average of two quiz scores is 81. If one quiz score is six more than the other quiz score, what are the two quiz scores?\n\nasked by franklin on October 29, 2010\n89. math\n\nA pelican flies 2 and 1/4 feet above the surface of the ocean. A dolphin swims 10 and 1/2 feet below the surface. How far apart are the dolphin and the pelican? Show your work. I got 8 and 1/4. I think it's right but I just want to get another's persons\n\nasked by Wossum on September 3, 2016\n90. Math\n\nRoss uses nylon string as a border around a square picture. The string is 60 inches long. What is the area of the picture in square inches? Show your work.\n\nasked by Amy on February 15, 2017\n91. Physic II\n\nA person tries to heat up her bath water by adding 5.0 L of water at 80 degrees celcious to 60 L of water at 30 degrees celcious. What is the final temperature of the water? Show work please !\n\nasked by hulune2 on April 7, 2014\n92. English\n\n1. Read the sentence from paragraph 1 below. It looms menacingly over the road from the roof of a farmhouse, a flying reptile with a seven-foot wingspan. Which of the following correctly describes this sentence? A. Simple sentence B. Complex sentence C.\n\nasked by Cassie on March 29, 2013\n93. math\n\nEvan is painting a stool that has a cylindrical seat and four rectangular prism legs. The seat has a diameter of twelve inches and a height of 2 inches. Each leg is 3 inches by 3 inches by fourteen inches. How much area will Evan have to paint? Use three\n\nasked by drae on December 3, 2011\n94. World History\n\nI needed to match the item name to what best describes it. I only have these five left but I'm not sure if I got them right. Can someone check my answers? Thanks. Home Rule: self-government Geopolitical region: area that shares similar political and\n\nasked by World History on August 23, 2019\n95. Chemistry\n\nA metallurgist decomposes an unknown ore to the following metals. What is the chemical formula of the ore? (Show all the chemical equation and solution steps) ore = x metal products from ore decomposition: Al: 162.0 grams Sb: 730.8 grams Ag: 1,294 grams\n\nasked by jermain 32 on February 2, 2012\n96. Trigonometry\n\nA person on a ship sailing due south at the rate of 15 miles an hour observes a lighthouse due west at 3p.m. At 5p.m. the lighthouse is 52degrees west of north. How far from the lighthouse was the ship at a)3p.m.? b)5p.m.? c)4p.m.? Please Show The Solution\n\nasked by Amber on June 23, 2012\n97. Statistics\n\nA building contractor claims that they can renovate a 200 sq foot kitchen and dining room in 40 work hours plus minus 5 (the mean and standard deviation respectively). The work includes plumbing, electrical installation, cabinets, flooring, painting and\n\nasked by LINDA on March 13, 2007\n98. MATH\n\nTime and money are always significant. Look at this scenario. You can either do your own yard work or you can pay someone to do it for you. You find that it takes you about 50 minutes to clip, mow, and otherwise clean up your yard each time you do it. You\n\nasked by URGENT DUE TODAY on September 9, 2010\n99. Math\n\nTime and money are always significant. Look at this scenario. You can either do your own yard work or you can pay someone to do it for you. You find that it takes you about 50 minutes to clip, mow, and otherwise clean up your yard each time you do it. You\n\nasked by Tania on September 7, 2010\n100. astronomy/conversion\n\nThere are 365.25 days in a year, 24 hours in a day, 60 minutes in an hour, and 60 seconds in a minute. Convert years into seconds. Show your work and keep 5 significant figures. Ok, so finding it is easy (multiply 24, 60, 60 and 365.25) but how does the\n\nasked by carl on January 17, 2013" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8995222,"math_prob":0.87152237,"size":34278,"snap":"2019-43-2019-47","text_gpt3_token_len":11053,"char_repetition_ratio":0.12408823,"word_repetition_ratio":0.12626193,"special_character_ratio":0.40261975,"punctuation_ratio":0.10150376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9687896,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T21:09:46Z\",\"WARC-Record-ID\":\"<urn:uuid:e1bcf135-847c-4235-80d3-9b9b73adcfaa>\",\"Content-Length\":\"325254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c57630a-40cf-435c-831f-1d60ef2fc99c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cddabf58-b917-4762-ace7-2ce0fca81efa>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/similar?question=Solve+using+the+elimination+method.+Show+your+work.+If+the+system+has+no+solution+or+an+infinite+number+of+solutions%2C+state+this.+2x+%2B+10y+%3D+58+-14x+%2B+8y+%3D+140&page=140\",\"WARC-Payload-Digest\":\"sha1:QZ7CZWLRA2EIT2E6KG3GGML7QTPONZ7F\",\"WARC-Block-Digest\":\"sha1:2HHQKOTETCL2J5HH4SS6WFJBOYTLNTRP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986697760.44_warc_CC-MAIN-20191019191828-20191019215328-00346.warc.gz\"}"}
https://learn.careers360.com/jobs/question-directions-given-two-statements-verify-the-conclusions-and-mark-the-answer-as-given-belowstatementsi-11185/
[ "# Directions : Given two statements, verify the conclusions and mark the answer as given below. Statements: I.          No bars are coins                                  II.          All coins are books Conclusions: I.          All coins are books                   II.          Some books are not bars   Option 1)   if only conclusion I follow.   Option 2)    If only conclusion II follows.   Option 3)    If either conclusion I or II follows.   Option 4)   If neither of the two conclusions follows.     Option 5)   If both conclusions follow", null, "" ]
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAYEAAACqCAYAAABLRfNlAAAYaUlEQVR4Xu2dd8wWxRbGB3vvMRrh2nvvXQn4hzFqILZgo0bQCGgsXAIqXIpg79LUiBCJIO0KqAi2GBvYe7C3GCH2Ehs3Z7j78X77ve+7U87slH32H+/l3Tlz5vecnWdmdtF2K1euXClwgQAIgAAIVJJAO5hAJXXHoEEABEBAEoAJoBBAAARAoMIEYAIVFh9DBwEQAAGYAGoABEAABCpMACZQYfExdBAAARCACaAGQAAEQKDCBGACFRYfQwcBEAABmABqAARAwIhAu3btjNrZNMJfa7KhV78tTICfKSKCQFIEGk32PibkZsbjI58UhIYJpKAixgACTATqTbKxTK753GPJm0k64zAwAWN0aAgC8RNIeeJMeWyclQcT4KSJWCAQAYHaybFKq+WqjruoJGECRYTwOwgkQAATYGsRwWM1D5hAAg84hgAC9QhgolOri6pzggmo1QnuAoEoCFR9QrMVqYr8YAK2VYP2IBAAgWzyqtIZv2vsVWEKE3BdSYgPAg4JVGWicoiwMHTqjGEChSWAG0AgPAKpT0zhERciVeYwgRCrDTmBQAMCqU5EMQmemgYwgZiqD7lWlkBqE08KQqaiCUwghWrEGJIlkMpEk6xA9N/o/f+/SC/Wl/IwgZSrE2OLlkDsE0u04C0Sj1UzmICF6GgKAi4I0GQS66rSBY/YYsamH0wgtgpDvskSiHUlmawgFgOLSUuYgIXQaAoCXARiWz1yjTv1ODHoChNIvQoxvuAJxDBRBA8x4ARD3xXABAIuHqSWNoHQJ4e06Zc/ulDNHiZQfi2gRxCQnxXi5W/1CiFE3WEC1atDjNgzgRAnAs9IKtV9aPrDBCpVfhisTwI4/vFJP6y+Q6oFmEBYtYFsEiUQ2uovUczRDSuEukjCBDJXdVUBOLt1RbYacUN40KtBOs5R+q6PKE0gP+m7nqTL7i/OUkbW9Qj4fsChShwEfNZJFCYQ2iQcWj5xlHn1suR8sDl3u64XTdVTmmfEnPWik1GwJlBb9KEXbUy56hQH7jUnYPNA15vwOZ8BLGLMdXXd0qZuTHMLygRSmExTGINpMaHdKgImD7LPuoEphFW5JvVjM4IgTCCkz6VsYObbpjouTkapxdJ9gEOskRBzSq1OisajW0dF8Zr97tUEqlJsVRmnTSGm0FbnwY2hJmLIMYW6aTQGnXqy4eDFBKpaXFUdt02BxtJW9YGNsQZizDmWuinKU7WuiuIEsxNAMa0+M6b/xfmyz6YI0NaOgMqDmkLtpzAGO6X9tFapL5vMStsJuB6IDQRfbcHEF3nefpvpmOLEmeKYeCuCN5rrecK5CaBgmhcE+PA+MGVHKzKAlHd7riensrUMuT+XrJ2agMvEQxbMJDewMqHmt00jzapk7FUaq99qM/v0WCVnZyaASU0Ff+t7wEyfma8WzQwg5dV/I96o3XIq0QVndhPAysCuGMDPjl9Zres9jC4e0LLGw9FP1cfPwbAohgvGrCbgIsEiKKn+DpbhKgsDaKwN6tZ93XIzZjMB7sTcowy/BzANTyMYQLEmqNtiRrZ3cDJmMQHOhGzhpNYebMNSNK8H9KmvD7i4rVtOvtYmwJmMW2zxRgfjMLSDAejpgLrV46V7NxdfKxPgSkJ38FW8H6z9qg4DMOOPujXjptqKg6+xCXB0rjpQ3LeKAJj7q4Ra9tBBTwfw0uOlczcHW5iADnHP93II7nkIUXYPA7CXDbVrz7BRBFu2RiZg26k7HOlHBvvyNYYJ2DNH3dozDMYEIKY7MVUjQwNVUvb3wQDsGWYRULd8LPORbNhq7QRsOnI3/GpGhhbl6J5xBm8e3uDIwxEm4IZjVFHxMLmXCwbghjFqNyyuyjsBCOdGOJuo0MSGXnFbmEAxI5M7ULcm1IrbmHJVMgHT4MVp4w5bAtDGlmD99jAAN1zxfiA8rjABt5o4jw4TcIMYJuCGK0wgPK6FJoBJxq1oHNGhEQfF1jGIKV1V/G8D8NNsvtsqq78q9GMyF8AEEqgME+ETGLazIWAX4Axtq8CoWzecdbk2NQHdYG6GhKgqBKCVCiW1e7ALUOPEcRfqloNi212szg4WJsCvgZeIeJj4sMME+FgWRULdFhHS/12XKUxAn3GQLXSFD3IQASQFAyhfBNQuP3Mdpg1NQCcI/xAQ0YQANDOh1nYrTX+is52277XaEVC3/PrrMIUJ8PP3FlFHeG9JBt4xdgLlC4S65WeuwxQmwM/fW0Qd4b0lGXDHMAB/4qB2+dmrMoUJ8LP3FlFVdG8JBt4xTMCfQKhdfvaqTOuagGpj/rQR0ZYAtDMnCBMwZ2fb0kXd/vXXX2LttdcW6667rlhjjTXkPw8//HBxxx13iF122UWm/M8//4ibb75ZTJw4UXz88cdi8803F6eccooYNWqU2HrrrVuG9cgj/xWjRo0Wr7/+uox59NFHy3sOPPBAec8TTzwh+vbrKz5c9mFLmz59+ohvv/1WzJw5U6y55ppi9uzZYuTIkeLTTz+V///QQw8Vt912m9hxxx1t8dVtr8oUJuAEv7+gqsL7yzDcnmEC/rRxUbeZCXz++eeiffv24tdffxWXXnqp+Oyzz8SCBQvkYAcMGCAeffRRcdddd4nDDjtMfPHFF+Kqq64S77//vnjttdfEWmutJabPmC769O4jzYIM4vfffxf33nuvuOGGG6Qp7LTTTm1MYMSIEbKPRYsWifXXX19QDvvtt5/si4yIYgwZMkQ899xz4oUXXnACXpUpTMAJfn9BVYX3l2G4PYOdX224+edNgEb32OOPiQH9B8hJniZmmsBpst97771bBv/333+LvfbaS1xxxRWCVvO77babNI8LL7ywFSD6jSbzKVOmtDKB+++/X1x33XXi2WefFVtssYVss3DhQtG/f3/x3nvvtcT47bffxNdffy1zcHGp8ozeBFS2fBngo446Svzyyy/SvbOrtn32Z5tuuqk4+eSTxa233io22mgj+cdlb+VMi0JVeNP4qbbj3gXQJ6a0cpwwYYI8Zthqq63E6aefLkaPHi023HBDifG7776Tq85Zs2aJ5cuXi/Yd2ose3XuIwYMHyxUoXdttt52YNm2aOPbYY8UxxxwjV7NLliyRxxt0ff/992KHHXaQ/6Tr7bffFpdffrl45ZVX5FEHTTB0bHHCCScELx137eZN4Oeffxb9+vUT2267rbj++uvF1KlTxbXXXiveeuutNmxIl3fffVce15AGdKxDGtZetMo/66yzpHbZcdD4ceOlcZABdOjQoeX2H374QRoLzUG9e/eWx0kbb7yxc01UmCZjAs22fLUPBz1c9JCRGHTVWy1888034rzzzhP777+/LBYfWznT6lAR3TR2yu24TYBWkTNmzJAmcMQRR8gjiMsuu0zQKpNWhX/++af8czp3Hjt2rDwXfvXVV8VFF10kJ/tx48bVNQGqRTpGuOCCC+qawO677y4GDhwoJzsa0/Tp00WvXr1k/9mqNFQduWs3e7Y32WQTyYJMYNdddxVz5syRq/u7775bPPzww3ICz1/020MPPSSPiegYh/TKX2QStIMgs6UYZPKkJ5k7mcAee+zRqgmt+mlhOX/+fLkj6NSpkxgzZow44IADnEmiwjQ5EyCatVu+jC49lOTE9HKIBCKRG5kA/fmkSZPkCozE9bGVM60KFdFNY6fcjtMEaGVIq0eqMzpnzi6ahOjogFaCNMHQYmTZsmVinXXWabmHdg00sZAh0ISe3wn07NlT7h5oAqIda+1OgCYqikWTzTbbbNMSk44+6EUovYwM+eKu3fwCj/7/woWPix49esojoKVLl4grrxwk3nnnnTZYrr76anlkdM8998gVOy0Ma18UU4NFixeJs7udLX+jeYJ2BXTKQKcGZB4vvfRSy0lCvgPaWdAug+YhMugNNtjAiTQqTJMzgfyWj8jS6otWANmLnj333FN88MEH0hDq7QS++uor0a1bN9GxY0cxfPhw4WsrZ1IVKqKbxE29DacJzJk7RwwcMFB88sknDbHRSp4mbPpSJX/R0Q9NKBdffHEbExgzdoyY9uA02famm25qcxxELy7JBC655BLRuXNnefQRy8Vdu/WebWJBK28yUtpx0XEZvZytXY3Tyn6fffYRgwYNEt27dxcHH3ywOP/88+UOq/bq27ev3CHQS+L810GkHx0JktnT9fLLL8t/0hdB2UW/09EgHd3ldw1cmqkwTcYEGm35COa8+fPEA5MfkCt7ukjQU089VW7f8ltGMgwqAnoA6Q1/tkrzsZUzKQQV0U3ipt6G0wQmT54sV4LNvvqgSWLfffcVQ4cObYOWfqNd6zXXXNPGBOgMm3YK9PtTTz0lV/y17wToRSUdQdExx4svvignlyFDh4gzTj8jeAm5a7eeCTzzzDPipJNOkjstWhgS4ylTp4hJEyeJI488UqxYsUIe29EOjVby9O6FJvjTTjtNHuV06dJF/tmdd94pX/6++eab8sujvAn89NNP0jzIKCjefffdJ+cT+lyUDOePP/6QOpGetFigz05dXCpMkzGB7J1AfstHK6EzzzxTnsNlL9voHjqPmzt3bpudAG3t6MF5/vnnG7pzWVs5k6JQEd0kbuptOE3gySefFOeee6748ssvG2Kjl7f0kpfMIn8dd9xxcidKX6Pkj4No0qCdwu233y5r+sEHH2xlArWx6OsTMgN6P0AvMenTxJAv7trNf/RBkze9exk2fFiLKWYv8MePHy8n480220wuEOmsfsstt2zBtWDBfPGf/4xo+XsC9E6R3uXQ+wK66v09gTfeeENqRTvDjsd3lLs+0psWlOutt5446KCDZD+0GHB1qTBNzgQymNmWj7bE5Pj0QGareioOcm9ycfrLIeTCmYlQezoPXLp0qZg3b54M52srZ1IYKqKbxE29DacJ0Dn99ttvL2bOmik6d+rcgo5W6XRMc+ONN4rFixfLVeJHH30kJ4TsovNhOq6kCWTnnXduaAJUw/ThAr0kppfJ1Ce1pTNpOhKqvU488US5giUzCPlC7fKro8I0SROo3fKRQz/99NMtR0EZZjoSOuSQQ+QDlDeBH3/8UT6AtMqiT+t8beVMSkJFdJO4qbfhNAFiRX8zlI4MJk6cII4/vqNc/dGZMp0B01dDdNE7JzIAOtunr1XoU0U6g6aFC/0ZXY12AvQbfQBB9bti+QppAvQik15E08vMrl27ymMLOjKiY08yHTKNkC/ULr86KkyTMQF6yUtXfstHn+HRw0fb69pr9uxZYuTIUfLcNm8CdN8tt9wiJ386O6SYPrZyJiWhIrpJ3NTbcJsAHTNQDdGnnnTMQGf355xzjhg2bFjLjpQ+OKCVPJ0T0xdFHf7VQfTu1Vu+kMy+5GlmAqQJHV3Qoif7ewK06KE+6Oshqlt68Tl48L9Fly5dg5cQtcsvkQrT6E2AH1vcEVVEj3uEbrLnNgE3WaYdFbXLr68KU5gAP3evEVVE95pgoJ3DBPwLg9rl10CFKUyAn7vXiCqie00w0M5hAv6FQe3ya6DCFCbAz91rRBXRvSYYaOcwAf/CoHb5NVBhChPg5+4toorg3pILvGOYgH+BUL/8GqgwxX9ZjJ+7t4gqgntLLvCOYQL+BUL98mugwhQmwM/dW0QVwb0lF3jHMAH/AqF++TVQYQoT4OfuLaKK4N6SC7xjmIB/gVC//BqoMG1oApSOSgD+tBHRhAC0MqG2ug1MwI4fR2vUMAfF1jFUmMIE+Ll7iagitpfEIuk04weOfgQDdzfcVbg2NQHsBtwI4yKqitgu+k0lJkzAr5KoXzf8VbjCBNywLzWqitBcCZXZF1fOKnFgAiqU3N2Tal25I6YWWYUrTECNZdB3qQjNOQCO/jhicI4p2/XSP+lf/oarXAIh1kO5BNz0psK10ARwJORGHK6oKiJz9VUbx+ZFqq+cizhgN1BEyM3vodaDm9GWF1WVK0ygPE2c9KQqtJPONb8gszEOV/nnjY12Ab6ZljHWkPoAbzdqqHJVMgHsBtyIZBtVVWTbforaq+Shck9RP65/x07ANeH68WOoDT9k7HpV5apsAjACO0G4W6sKzN1vo3jNVvmh5dpsDNn7gFhyLktfV/2Asyuy6n/PCybgTgOnkUN9ePJ5hZpnkRHElrfTYnMYHJzdwNXhqmUC2A24EUw3qo7AurE57s92BRQrti9tatmGzplDK58xwNcdfR222iYAI3AnnErk2glW5X7cw0MgNjPjGbXbKDoTldtM0ouuw9bIBGAEfopGR1g/GbY+hwz9a6CiIyHUubsqiqGW3Y3ebWRdtjABt3qwRtcVl7VzhWCN8gs97/zQcCSkILblLbHVhOVwS22uy9bYBLBKKlXXoL9dV1nx6xZmuXRb9xb7y22f7FT6jqkWVMYT2j26fK1MAEZQjvy6opaT1apedHJTMYsyc2/WF3YD7pTQqRl3WaQZ2YSttQnoTgRponc3KhNR3WXTfMWs2m/IY8rGgN2Aqpp698Wgvd6IwrrbhC+LCcAI3BSCiaBuMuGPGvrY6uUXes78KvFGBD9envWimTBmMwEYAa/AJmLyZoBoMAK+GkA987FsFMmUMasJZEZA/8R31Waix3RubjbCeFrBBPi0Mp2g+DJIP5IpY3YTaHSmmr4E9iM0FdG+Z0TQWV1BJ716AS89XiZ32zB2ZgI4HtKT0kZEvZ5wtw6BRrpALzWK4KTGyfYuG85OTQDHQ8XS4vinmJHvO2AEZgrYTExmPVazlS1n5yaA46HGhWkrXjVLvvxRN9MJGtbXA1zKq1Nb1qWZAHYFq4sCq//yHhCunmAE6iRtJyX1nnAnB+tSTaB2V0D/u2pfEGHyj/ehLXrYin6Pd+R6mYODHi/buzl4ezGBqpkBJn/bUg+jfdEDV/R7GKNwl0XVx++OrNsjN68mkLoZYPIv+7Fw31/RRFf0u/sM/fRQ1XH7oa337+wqyjEIE8ibQcxHRdnEH/MYioqmyr+rTHZVMv8qjTWkulepQ9V8gzKB2qRjmkxjylW1MHBfYwKqD6DqfbGyTn18oerCzT1YE2hkCCGssGsn/RDyCbVYU85L9UFMcaWc4phiqVXVutMZTxQmkB9QmZNwvi9M+jrlle69ug9jChNnCmOIvSJ1605lvFGaQJEpNBp47Sep9Sb3eu2q9hmrStHgnlUETB7IGCfSGHNOsUZN6k2FQxImoDJQ3AMCLgiYPpgxTKwx5OhC0xBjmtaZylhgAiqUcA8INCFg84CGONGGmFOVC9CmvlS4wQRUKOEeEHBkAllY31+Y+e4fBdaYAEwA1QECERDgfFBdf4zgOn4EckWTImddNRo0dgLRlAMSDZ2Aywe26EOG7AOGovuIIT52CL2SVuXnsp5qCcAE4qgHZBkJgbIe3EhwIE1DAmXWEUzAUCQ0A4GG2+t27bDaRnkYEyjTAOSOYyX2hsZioSEIwAhQA5wEyjYAmACneogFAjkCPh5oiBAvAV/1gp1AvDWDzCMg4OvBjgANUqwh4LNOYAIoRRBwTMDnA+54aAjPQMB3fcAEGERECBAoIuD7QS/KD7/7IRBCXcAE/GiPXitIAP86hgqK3mDIIdUCTAB1CQIlEwhh9VfykNFdIOf/9YSACaA8QcADARiBB+gBdBmi7jCBAAoDKVSTQIgTQjWVcD/qkI5/8qOFCbjXHz2AQEMCIU8OkI2HQOhmDxPg0RlRQMCKQOgThdXgKtw4Bl1hAhUuUAw9LALYFYSlh002MWkJE7BRGm1BwAGBGFaPDoadTMjY9IMJJFN6GEhKBGJaSabE3WYssWoGE7BRHW1BwDGBWCcWx1iCCh+7RjCBoMoJyYBAfQKxTzQp6pqKJjCBFKsTY0qWQCoTT8wCpaYBTCDmakTulSWQ2kQUg5CpMocJxFB9yBEEGhBIdWIKSfDUGcMEQqo25AIChgRSn6gMsVg1qwpTmIBVmaAxCIRFIJu4KCv858P1takiP5iAfp2gBQhEQaCKE5qJMFXnBBMwqRq0AYHICFR9osvLBR6ricAEInuYkS4I2BKo6gRY1XEX1QtMoIgQfgeBhAnUToypvUdIeWycJQkT4KSJWCAQOYFYJ8583qkZmsuyggm4pIvYIBA5gXqTazYkH18fNcrHRy6RS9uSPkwgFSUxDhAomUAzg3CVCiZ7frIwAX6miAgCIAAC0RCACUQjFRIFARAAAX4CMAF+pogIAiAAAtEQgAlEIxUSBQEQAAF+AjABfqaICAIgAALREIAJRCMVEgUBEAABfgIwAX6miAgCIAAC0RCACUQjFRIFARAAAX4CMAF+pogIAiAAAtEQgAlEIxUSBQEQAAF+AjABfqaICAIgAALREIAJRCMVEgUBEAABfgIwAX6miAgCIAAC0RD4H/k2MOS7txVEAAAAAElFTkSuQmCC", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86100835,"math_prob":0.49503565,"size":405,"snap":"2020-45-2020-50","text_gpt3_token_len":91,"char_repetition_ratio":0.22194514,"word_repetition_ratio":0.0,"special_character_ratio":0.2074074,"punctuation_ratio":0.17857143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9668844,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T07:55:03Z\",\"WARC-Record-ID\":\"<urn:uuid:0edfc513-d5b9-43a3-8459-ccf77a1ccffe>\",\"Content-Length\":\"778921\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f58ebe8b-986f-436e-b0c9-fa0004e8ccd5>\",\"WARC-Concurrent-To\":\"<urn:uuid:63351aac-f257-48c3-ab24-7c802253d743>\",\"WARC-IP-Address\":\"13.232.234.184\",\"WARC-Target-URI\":\"https://learn.careers360.com/jobs/question-directions-given-two-statements-verify-the-conclusions-and-mark-the-answer-as-given-belowstatementsi-11185/\",\"WARC-Payload-Digest\":\"sha1:J5H3BQYZL3UEVMSKFD3MB36462BOHED2\",\"WARC-Block-Digest\":\"sha1:7OIMAH5DVC65E2CIBADVTICGQWWJMHOU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107888402.81_warc_CC-MAIN-20201025070924-20201025100924-00314.warc.gz\"}"}
http://www.radiotec.ru/article/10654
[ "", null, "Publishing house Radiotekhnika \"Publishing house Radiotekhnika\":scientific and technical literature.Books and journals of publishing houses: IPRZHR, RS-PRESS, SCIENCE-PRESS Тел.: +7 (495) 625-9241\n::Journals\n::Books\n\nArticle:\n\n# Parametric bio impedance models for indentification of the live systems functional condition\n\nKeywords:\n\nKaboos Derhim Ali Kassim, I.A. Klyuchikov, O.V. Shatalova, Ya Zar Doe\n\nThe modeling method presented in this work allows one to make a parametric model of bio impedance. This parametric model of the basic element of the tissue impedance is based on the three-element passive RC two-terminal network. The model is based on the general regression. Its’ approximation functions are defined by the calculation of the real and imaginary parts of the biological tissue of the three-element passive RC two-terminal network. The model differs in that it is synthesized on the basis of the general form of regression, function approximation is determined by computing the real and imaginary parts of the total electrical resistance of a three-element RC - two-terminal network. Parameters of the regression model of the general form match those of the three-element RC - two-terminal, and the initial value of the model parameters determined by the specially created tables derived from experimental studies and reflect the minimization of the error modeling for real and imaginary parts RC - two-terminal network. The model takes into account the dissipative properties of the graph by choosing Cole corresponding set of frequency sub-bands, in which the construction of mrdeli and classified by means of parametric biomaterial triads Algorithm to obtain a parametric model uses the entire frequency range of the selected segment. It calculates the tetrad informative features in the selected frequency segment and checks the adequacy of models. When a satisfactory value for a transition to a new calculation of the tetrad in the new frequency range. If the adequacy of the model is not satisfactory, then the data can be either ignored or included in the model maker. Also, it may be decided to change the criterion of adequacy.\nReferences:\n\n© Издательство «РАДИОТЕХНИКА», 2004-2017            Тел.: (495) 625-9241                   Designed by [SWAP]Studio" ]
[ null, "http://www.radiotec.ru/static/images/h1.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89800125,"math_prob":0.91085863,"size":1833,"snap":"2020-10-2020-16","text_gpt3_token_len":355,"char_repetition_ratio":0.15090214,"word_repetition_ratio":0.048780486,"special_character_ratio":0.17894162,"punctuation_ratio":0.074074075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95204705,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T21:16:30Z\",\"WARC-Record-ID\":\"<urn:uuid:26458e68-d309-4cf3-b836-61a3c8613808>\",\"Content-Length\":\"30511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8126f1f-077a-46ea-a9f6-37806e031987>\",\"WARC-Concurrent-To\":\"<urn:uuid:114d4bac-2fe7-4bb8-b8a5-0f909afbecc5>\",\"WARC-IP-Address\":\"81.177.3.94\",\"WARC-Target-URI\":\"http://www.radiotec.ru/article/10654\",\"WARC-Payload-Digest\":\"sha1:SCKFHRDVJPAKL437WGB5BJX3HASPKV7N\",\"WARC-Block-Digest\":\"sha1:WTLHTEW6YUYBHBD7AHXYSVWXTD3BHNWT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145981.35_warc_CC-MAIN-20200224193815-20200224223815-00480.warc.gz\"}"}
https://essayparlour.com/academic-help/2020/09/07/order-the-answer-to-in-this-assignment-you-will-implement-a-program-calledcall_info-cpp-that-uses-three-functions-inp/
[ "", null, "+1 (347) 474-1028\n\n## Order the answer to: In this assignment you will implement a program called”call_info.cpp” that uses three functions, inp\n\nAcademic Help net programming Order the answer to: In this assignment you will implement a program called”call_info.cpp” that uses three functions, inp\n\n# Order the answer to: In this assignment you will implement a program called”call_info.cpp” that uses three functions, inp\n\nQuestion In this assignment you will implement a program called”call_info.cpp” that uses three functions, input, output, andprocess. You must use input and output parameters when implementingthis program. The function input will get the input from the user,the function process will perform the necessary calculationsrequired by your algorithm, and the function output will print theresults and any output that needs to be printed. The program “call_info.cpp” will calculate the net cost of acall (net_cost), the tax on a call (call_tax) and the total cost ofthe call (total_cost). The program should accept a cell phonenumber (cell_num), the number of relay stations(relays), and thelength in minutes of the cal (call_length). consider thefollowing 1) The tax rate (in percent) on a call (call_rate) is simplybased on the number of relay stations (relays) used to make thecall (1<= relays <=5 then tax_rate = 1%; 6<= relays<=11 then tax_rate = 3%; 12<= relays <=20 then tax_rate =5%; 21<= relays <=50 then tax_rate = 8%; relays >50 thentax_rate =12%) . 2) The net cost of a call is calculated by the followingformula: net_cost = ( relays / 50.0 * 0.40 * call_length). 3) The tax on a call is calculated by the following formula:call_tax = net_cost * tax_rate / 100. 4). The total cost of a call (rounded to the nearest hundredth)is calculated by the following formula: total_cost = net_cost +call_tax . All tax and cost calculations should be rounded to thenearest hundredths. Use the following format information to printthe variables: Input Example: (Your program should prompt the user forinput) Enter your Cell Phone Number: 9548267184 Enter the number of relay stations: 40 Enter the length of the call in minutes: 56 Output Example: (Your output should look lit this) ***************************************************** Cell Phone Number: 9548267184 ***************************************************** Number of Relay Stations: 40 Length of Call in Minutes: 56 Net Cost of Call: 17.92 Tax of Call: 1.43 Total Cost of Call: 19.35 Ask the user if more calculations are necessary with thefollowing prompt: Would you like to do another calculation foranother employee (enter y or n)? . . . net programming\nReady to try a high quality writing service?", null, "" ]
[ null, "https://www.facebook.com/tr", null, "https://essayparlour.com/academic-help/wp-content/uploads/sites/13/2016/08/accepted-cards.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8232967,"math_prob":0.972898,"size":2224,"snap":"2020-34-2020-40","text_gpt3_token_len":532,"char_repetition_ratio":0.16261262,"word_repetition_ratio":0.011764706,"special_character_ratio":0.30170864,"punctuation_ratio":0.12886597,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99087036,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T15:57:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a10a3181-8968-4042-9e7d-42b4ea2fffcc>\",\"Content-Length\":\"31451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b043bb4-6789-4998-acd2-f55443027c2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:711744d5-3ef6-4f86-a78d-79ae25f48bb4>\",\"WARC-IP-Address\":\"104.18.60.240\",\"WARC-Target-URI\":\"https://essayparlour.com/academic-help/2020/09/07/order-the-answer-to-in-this-assignment-you-will-implement-a-program-calledcall_info-cpp-that-uses-three-functions-inp/\",\"WARC-Payload-Digest\":\"sha1:FFQRRQUQAKJLYQUCJZEHZGXDRSBHFCWY\",\"WARC-Block-Digest\":\"sha1:ZNKVEA2YQR623M44DJ4PACXXHUGMNKHB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400211096.40_warc_CC-MAIN-20200923144247-20200923174247-00603.warc.gz\"}"}
https://dsp.stackexchange.com/questions/64865/understanding-the-difference-between-map-estimation-and-ml-estimation/64868
[ "# Understanding the Difference Between MAP Estimation and ML Estimation\n\nThere are a number of possible criteria to use in making decisions. Can someone elaborate on the difference between ML and MAP for a sequence of BPSK symbols impaired by Gaussian noise ?\n\n• Read sec 1.1 and 1.2 of ee.stanford.edu/~cioffi/doc/book/chap1.pdf One of the best digital communication fundamentals in the whole world. Mar 26, 2020 at 17:41\n• @jithin thanks for the great link! Apr 1, 2020 at 3:56\n\nMaximium A Posteriori (MAP) and Maximum Likelihood (ML) are both approaches for making decisions from some observation or evidence.\n\nMAP takes into account the prior probability of the considered hypotheses. ML does not. This set of probabilities, known as \"a priori\" probabilities or simply \"priors\", is often known imperfectly, but even rough approximations are often better than nothing.\n\nThe approaches are the same in the case where the prior probability is truly uniform. For example in the roll of a single fair die, or deciding a bit in a random binary message.\n\nMAP and ML can be quite different. For an anecdote on how the obvious answer can be exactly wrong, search for \"Abraham Wald and the Missing Bullet Holes\" from the excellent book \"How Not to be Wrong\" by Jordan Ellenberg.\n\nA simpler case of imbalanced priors is found in the old saying,\n\n## When you hear hooves, think 'horses' not 'zebras'\n\nOne can think of MAP decision/estimation as a rigorous version of Occam's razor.\n\nLet's say we observed event\n\n• $$A$$ := Audible hooves\n\ngoal: Decide what animal is thundering towards us.\n\nand we live in world with only 3 ungulate (hooved) species:\n\n• $$Z$$ebras,\n• $$H$$orses,\n• $$C$$hevrotain\n\n(such worlds are common in post-apocalyptic fiction and math problems)\n\nLet's say we know when each of these animals is present, their chance of making $$A$$udible hoof sounds is as follows\n\n• P(A | Z) = .91 ( read as probability of event A, given that Z is true)\n• P(A | H) = .9\n• P(A | C) = .2\n\nThese are known as likelihoods on $$Z,H,C$$ when $$A$$ is true.\n\nThe above is all the info we need to make the ML decision = Zebras. The zebras have a higher likelihood of generating the observation when they are present.\n\nWhat we really want to decide is: which is highest in {P(Z|A),P(H|A),P(C|A)} i.e. what is the chance a particular animal is present, given that we observed $$A$$?\n\nTo decide that, we need to know more about the abundance of the different animals. In the state where you live there are 100 wild chevrotain , 100 zebras, and 800 horses. These are the prior probabilities (scaled by a constant).\n\nWe can order our posterior probabilities (Omitting common factors in the posterior probabilities found via Bayes' Rule) in the following order\n\n$$P(H|A) \\propto P(A|H)P(H) = .9 \\times 800 = 720$$ $$P(Z|A) \\propto P(A|Z)P(Z) = .91 \\times 100 = 91$$ $$P(C|A) \\propto P(A|C)P(C) = .2 \\times 100 = 20$$\n\nSo the MAP decision is overwhelmingly $$H$$orses. In fact, by using the law of total probability, we can say that there is an 86% chance it is horses (= 720/(720+91+20))\n\nOf course, by the time you've done all this math, you've probably been trampled.\nSometimes an easy answer gives you what you need to act.\n\n• Great answer Mark! Thanks!! (including realizing that I've already been trampled by the time I got to the end, good point). Apr 1, 2020 at 3:59\n\nYou have a set of message set $$m_i$$, $$0 \\le i \\le N-1$$. (For example, QPSK will be $$N=4$$). For the transmitted message $$m_i$$, the corresponding symbol vector is $$\\textbf{x}_i$$, and the received symbol vector is $$\\textbf{y} = \\textbf{x} + \\textbf{w}$$, where $$\\textbf{w}$$ is the AWGN at the receiver. The above is a simplified baseband model assuming a simple Line-Of-Sight(LOS) channel without any delay.\n\nAt the receiver, after observing $$\\textbf{y}$$ you arrive at a decision for the transmitted symbols, $$\\tilde{\\textbf{x}}_i$$. The decision you make is such that the probability of error $$P_e = P(\\tilde{m_i} \\ne m_i)$$ is minimum. In other words, the Probability of being correct $$P_c = 1 - P_e$$ has to be maximized. Using a hypothetical rule you have made a decision $$\\tilde{m}_i$$ based on $$\\textbf{y}$$, so $$P_c(\\tilde{m} = m_i;\\textbf{y}) = P(\\tilde{\\textbf{x}} = \\textbf{x}_i;\\textbf{y}) = P(\\textbf{x}_i|\\textbf{y})P(\\textbf{y})$$ Here I have made use of formula $$P(AB) = P(A|B)P(B)$$. Event A is the event that $$x_i$$ was transmitted. Our decision is correct if $$\\tilde{x_i}$$ is interpreted as $$x_i$$. We are trying to maximize the probability of getting this decision correct. In order to maximize the above term among all message set $$0 \\le i \\le N-1$$, the term $$P(\\textbf{y})$$ can be ignored since it does not depend on $$i$$. Hence\n\n$$\\tilde{m_i} = argmax_i\\,\\, P(\\textbf{x}_i|\\textbf{y})$$ You are deciding the $$i$$ which maximizes the above probability after observing $$\\textbf{y}$$. Hence the whole method above is called Maximum A-posteriori Probability decision because you are maximizing a-posteriori probability $$P(\\textbf{x}_i|\\textbf{y})P(\\textbf{y})$$\n\nThe term above can be re-written as $$P(\\textbf{x}_i|\\textbf{y}) = P(\\textbf{y}|\\textbf{x}_i)P(\\textbf{x}_i)/P(\\textbf{y})$$.\n\nIf we assume equal probability for all messages in the set $$\\textbf{m}$$ then we can ignore the term $$P(\\textbf{x}_i)$$ since they are equal for all values of $$i$$. Hence $$\\tilde{m_i} = argmax_i\\,\\, P(\\textbf{y}|\\textbf{x}_i)$$ This is the Maximum Likelihood detection rule. We are maximizing the likelihood probability $$P(\\textbf{y}|\\textbf{x}_i)$$.\n\nA brief, non-mathy explanation:\n\nML assumes that all hypothesis are equally likely. MAP does not make this assumption. MAP is the optimum criterion, but under some conditions ML is optimum too.\n\nWhen using BPSK, if the bits are independent and equally likely, then ML and MAP are equivalent and ML is optimum.\n\nIf the bits are not equally likely, then you should use MAP in order to minimize the probability of error. You can also use ML (to simplify the receiver), but since it's no longer optimum, your error rate will be larger than the minimum." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8706044,"math_prob":0.9988072,"size":6554,"snap":"2023-40-2023-50","text_gpt3_token_len":1833,"char_repetition_ratio":0.1280916,"word_repetition_ratio":0.015369836,"special_character_ratio":0.27433628,"punctuation_ratio":0.102009274,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T03:55:07Z\",\"WARC-Record-ID\":\"<urn:uuid:55f077b1-6853-42d9-9f2e-4887c2f2c3d8>\",\"Content-Length\":\"193643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70145056-5c8d-44d1-a990-d3de7630ed91>\",\"WARC-Concurrent-To\":\"<urn:uuid:9dc2e42d-4f7c-4184-bd44-af1f76404f2e>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/64865/understanding-the-difference-between-map-estimation-and-ml-estimation/64868\",\"WARC-Payload-Digest\":\"sha1:HWZQWYMAFWXRGBTXMA7OOEVRJ5NEPP62\",\"WARC-Block-Digest\":\"sha1:Y3WVSZQJKUW57ORZE25BRHK6BMMHHGQB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100523.4_warc_CC-MAIN-20231204020432-20231204050432-00445.warc.gz\"}"}
https://arachnoid.com/NormalDistribution/index.html
[ "Home | Mathematics | * Consumer Loan Calculator * Applied Mathematics * Calculus * Finance Calculator * Is Mathematics a Science? * Maxima * Sage * Trigonometric Relations * Unit Conversions Area of an Irregular Polygon Binomial Probability Coronavirus Math Equities Myths Graphinity Graphitude Introduction to Statistics LaTeX Editor Mandelbrot Set Peak People Polygon Calculator Polynomial Regression Data Fit Polynomial Regression Data Fit (Java version) Prime Numbers Quadratic Equation Solver Randomness Signal Processing The Mathematics of Pi The Mathematics of Population Increase The Physics Behind Stopping a Car Why PDF?", null, "", null, "", null, "Share This Page\nIntroduction to Statistics\nA practicum for statistical analysis\n\n(double-click any word to see its definition)\n\nNOTE: This article covers methods applied to continuous statistical distributions. For discrete-value statistical analysis, my Binomial Probability article is probably a better choice.", null, "Figure 1: Graph of a normal or Gaussian distribution (σ = unit of standard deviation)\n\nIntroduction\n\nFigure 2: Binary \"random walks\" converge to a Gaussian curve\nWalks: 00000\n\nThis article introduces, and provides online tools for the exploitation of, the normal or Gaussian distribution (\"bell curve\"), a key idea with wide application in modern statistical analysis. Topics covered include a summary of the underlying theory, online calculators for analyzing statistical results and reducing data sets to their statistical properties (mean, variance, standard deviation, standard error), an in-depth mathematical description of the methods underlying the Guassian distribution, and a discussion of algorithm design.\n\nOverview\n\nThe goal of statistical analysis is to be able to make general statements about a modeled system based on limited samples. A surprising number of natural systems and processes can be successfully characterized by analysis of a limited set of measurements using a Gaussian-distribution model (Figures 1 and 2).\n\nFor an example of how often one encounters a Gaussian distribution in nature, Figure 2 models a \"random walk\" consisting of, in a manner of speaking, many flips of a fair coin that guide the steps of a random walker initially located at the central location marked \"μ\". If the random coin flip comes up \"heads\", the walker takes one step to the right, otherwise left. At the end of the walk, the column nearest the walker's final position becomes taller. Even though the process is entirely random, the outcome approximates a Gaussian curve.\n\nThe elements in this class of statistical analysis are:\n\n• An average or \"mean\" value, the sum of the values divided by their number.\n• A \"variance\", which quantifies how much the samples differ from the mean value.\n• A standard deviation, which is the variance in a more useful form.\n• A standard error, an indication of how well the analysis reflects reality.\n\nWith these values in hand, one can predict the properties of the system from which the measurements were acquired. Here's an example — let's say you build widgets that are expected to be 100 cm long but that, when constructed, have some variation in their lengths. You want to be able to predict the number of production rejects based on quality control acceptance limits and a limited set of production measurements. Using statistical analysis methods, you would:\n\n• Acquire a set of measurements of typical manufactured items, as many measurements as practical.\n• Process the data set and acquire mean, variance and standard deviation (square root of variance) values using a data processor like that included in this article.\n• Use the acquired values, the established manufacturing aceptance limits, and a Gaussian curve calculator (also included in this article) to estimate the rate of manufacturing rejects.\n\nThe above is just an example of statistical data analysis. There are many applications for these methods in everyday life — measures of people's height, weight, IQ, and many similar quantities are appropriate to these methods and can provide insight into them.\n\nIt should be noted that this kind of statistical analysis has a degree of uncertainty related to the number of samples or measurements taken. In the analysis method described here, this uncertainty is quantified by the \"standard error\" value, which is computed along with the values described above and which provides a measure of confidence in the analysis.\n\nTheory\n\nAnalysis based on a normal or Gaussian distribution is most appropriate for data sets having an innate normal distribution of its own, that is to say, a centrally weighted grouping of data with decreasing examples far from the average value of the data (the \"mean\"). Figure 1 shows the proportions and percentages one expects to see in a data set for which a Gaussian analysis is appropriate.\n\nCaveat: it cannot be overemphasized that many data sets have properties that make them unsuitable for this treatment, and there are any number of stories of misapplication of the Gaussian distribution where another kind of analysis would better fit the data and circumstances.\n\nA normal (Gaussian) distribution is defined this way:\n\n(1) $\\displaystyle f(x,\\mu,\\sigma) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} e^{-\\frac{(x-\\mu)^2}{2 \\sigma^2}}$\n\nWhere:\n\n• $e$ = base of natural logarithms.\n• $x$ = argument.\n• $\\mu$ = (Greek letter mu) mean or average value.\n• $\\sigma$ = (Greek letter sigma) standard deviation (σ2 = variance).\n\nIf $\\mu$ = 0 and $\\sigma$ = 1 the distribution is called a standard normal distribution or unit normal distribution. This special form is to statistics what a normalized function is to general analysis — a function whose range of values is normalized to the multiplicative identity (i.e. 1) to maximize flexibility. The unit normal distribution has this abbreviated definition:\n\n(2) $\\displaystyle f(x) = \\frac{1}{\\sqrt{2 \\pi}} e^{-\\frac{x^2}{2}}$\n\nComputing an Area", null, "Figure 3: Area bounded by ±1σ\n\nAs it happens, computing a single value on the normal distribution is easily accomplished using one of the above equations, but many statistical problems require that one compute an area with a definite integral. For example, given an analysis of population IQ scores that produces a mean (μ) of 100 and a standard deviation (σ) of 15, one might want to know what percentage of the population is predicted to lie between scores of 85 and 115. To solve such a problem, one would use this form:\n\n(3) $\\displaystyle f(a,b,\\mu,\\sigma) = \\int_a^b \\frac{1}{\\sigma \\sqrt{2 \\pi}} e^{-\\frac{(x-\\mu)^2}{2 \\sigma^2}} dx$\n\nTo apply equation (3) to the above-stated IQ problem, one would provide these arguments:\n\n• a = 85 (integral lower bound)\n• b = 115 (integral upper bound)\n• $\\mu$ = 100 (mean)\n• $\\sigma$ = 15 (standard deviation)\n\nThe result for these arguments is approximately 0.6827, a canonical numerical result with which my readers will likely become familiar over time — it's the area of the unit normal distribution between -1σ and 1σ (Figure 3). Expressed another way, this is the two-tailed outcome for a standard deviation of 1, or the proportion of population values that lie within ±1σ (standard deviation) of the mean.\n\n68-95-99.7 Rule\n\nThis might be a good time to briefly address the so-called 68-95-99.7 rule. In statistical work, it is common to ask what proportion of the measured population's values lies within particular bounds. Refer to Figure 1 above to see the relationship between σ (standard deviation) values and corresponding areas of the normal distribution. For example, it seems that 34.1% of the values lie between 0σ and 1σ — this is called a one-tailed result. For the more commmon two-tailed case shown in Figure 3, in which the area between -σ and +σ is taken, here are some of the classic values:\n\n±σvaluevalue\n168.27%31.73%\n295.45%4.55%\n399.73%0.27%\n499.99%0.01%\n\nTable 1: Normal distribution areas for ±σ arguments\n\nNote about Table 1 that the value column refers to the total area outside the specified ±σ bounds. For example, in the earlier IQ problem, we found that 68.27% of the population have IQs between 85 and 115. This means 31.73% of the population have IQs that are either above or below this range (15.87% below, 15.87% above). (I hasten to add that the IQ problem is hypothetical — IQ scores cannot be reliably fit to a normal distribution in such a simple way.)\n\nAlgorithmic Limitations\n\nLet's turn now to the problem of creating practical numerical results. Unfortunately, in a much-lamented limitation of Calculus, no closed-form integral exists for equation (3) shown above — one must use a numerical method to acquire an approximate result. Because of the importance of this integral to applied statistics, a carefully designed numerical function is provided in many computing environments named the error function, erf(x), with this definition:\n\n(4) $\\displaystyle \\text{erf}(x) = \\frac{2}{\\sqrt{\\pi}} \\int_0^x e^{-t^2} dt$\n\nRemember again about erf(x), notwithstanding that it can be expressed so easily, that it has no closed-form solution and one must use numerical methods to acquire approximate results. Using erf(x), one would express equation (3) this way:\n\n(5) $\\displaystyle f(a,b,\\mu,\\sigma) = \\frac{1}{2} \\, \\text{erf}\\left(-\\frac{\\sqrt{2} (a - \\mu) }{2 \\,\\sigma}\\right) - \\frac{1}{2} \\, \\text{erf}\\left(-\\frac{\\sqrt{2} (b - \\mu)}{2 \\, \\sigma}\\right)$\n\nFor the IQ problem described above, in a computing environment one would acquire a result this way:\n\n(6) y = f(85,125,100,15) (with result 0.6826895)\n\nTo convert the outcome of equation (6) into a practical result, one need only multiply it by the population size. For example one might ask, of a population of a million people, how many have an IQ at or above 135? Assuming that our statistical parameters are legitimate, we can calculate:\n\n(7) y = f(135, 1000000 ,100,15) * 1000000 (with result 9815.32)\n\nNotice about this result that an arbitrary constant was used for the upper bound. Ideally, one wants to compute the area between 135 and +infinity, but in a computing environment, one must choose a finite value to approximate infinity.\n\nCalculator\n\nThis section has a Gaussian curve calculator able to produce results for the equations and methods described above. With appropriate data entries the calculator can answer practical questions like these:\n\nProblemabμσp\nFor the unit normal distribution, what percentage of the samples lies between -1σ and +1σ?-1101100\nIn a population of one million people, given an IQ μ of 100 and σ of 15, how many people have an IQ at or above 135?1351000000100151000000\nIn a manufacturing process, a widget's variable dimensions fall on a normal distribution. Production measurements show that the part's average length is 100 cm and the standard deviation is 2 cm. The manufacturing acceptance bound from the mean is ±2.5 cm. What percentage of the parts are likely to be accepted? (Note that in this problem, $\\overline{r}$ = percentage rejected.)-97.5102.51002100\nTo justify announcing a new discovery like the Higgs Boson, experimental physicists require that their data have a p-value equal to or less than 5σ. What is the numerical value of 5σ when expressed as a p-value? (Click here for a more detailed description of this problem.)51000000011\n\nFeel free to compute solutions for problems of your own. Enter values into the green data windows, then press Enter or press the Compute button:", null, "Figure 4: Graph showing relationship\nbetween $a$, $b$, $r$ and $\\overline{r}$\nGausssian Curve Calculator\n\nCalculate result for $\\displaystyle f(a,b,\\mu,\\sigma,p) = p \\, \\int_a^b \\frac{1}{\\sigma \\sqrt{2 \\pi}} e^{-\\frac{(x-\\mu)^2}{2 \\sigma^2}} dx$\n\n Mean $\\mu$: Standard Deviation $\\sigma$: Lower bound a: Upper Bound b: Population Size p: Result $r$: Result $\\overline{r}$: Number format:   Use default   Use computer exponential   Use scientific\nData Processor\n\nThe above calculator and exposition show how to analyze and interpret the values (mean and standard deviation) derived from data acquisition and processing. This section shows how to acquire and process the required data. (The order of this article's topics is intentional — it's easier to understand how to apply mean and standard deviation values than it is to process the data required to obtain them.)\n\nEquations\n\nWhile processing a data set, one acquires and then exploits the following quantities:\n\nNameMethodComment\nnn = Number of samplesSample size, used in all later steps.\n$\\mu$$\\displaystyle \\mu = \\frac{\\sum{x}}{n}Mean (average) value: the sum of data values divided by sample size n. v$$ \\displaystyle v = \\frac{\\sum{(x-\\mu)^2}}{n}$Variance: Sum of differences squared divided by sample size n.\n$\\sigma$$\\displaystyle \\sigma = \\sqrt{\\frac{\\sum{(x-\\mu)^2}}{n}}Standard deviation: square root of sum of differences squared divided by sample size n. v_u$$ \\displaystyle v_u = \\frac{\\sum{(x-\\mu)^2}}{n-1}$Unbiased1 variance: Sum of differences squared divided by sample size n-1.\n$\\sigma_u$$\\displaystyle \\sigma_u = \\sqrt{\\frac{\\sum{(x-\\mu)^2}}{n-1}}$Unbiased1 standard deviation: square root of sum of differences squared divided by sample size n-1.\nse$\\displaystyle se = \\frac{\\sigma}{\\sqrt{n}}$Standard error: standard deviation divided by square root of sample size n.\n1  The terms \"unbiased sample variance\" and \"unbiased standard deviation\" are more fully explained here.\n\nUser Data Processor\n\nThe above methods and equations are computed by the data processor below. Just enter (or paste: Ctrl+V) data into the green data window, then press \"Compute\" to acquire a result:\n\n Data window: 1 2 3 4 5 6 7 8 Result window: Press \"Compute\" for results Number format:   Use default   Use computer exponential   Use scientific Sample bias:   n-1 (default: \"unbiased\")   n   n+1\n\nDiscussion\n\nThe data processor above has a control to adjust the sample bias in the range {-1,1}. After much reading on this topic, and after seeing a number of different terms for the same things used by different authors, I decided I wasn't going to be able to provide a predefined set of options that would satify everyone. Just remember that the bias control influences the variance and standard deviation in various ways confusingly explained here.\n\nThe data processor's results can be exported intwo ways:\n• The user can click the \"Transfer\" button to transfer mean (μ) and standard deviation (σ) results to the Gaussian curve calculator that appears earlier in the article.\n• The user can pass his mouse cursor across the results window, then press Ctrl+C to copy the result table to the system clipboard.\n\nRemember that large data sets produce smaller standard error values, reflecting the idea that more samples should increase the accuracy of the result. But there's another use for standard error and different sized data sets — one can submit a small data sample to the processor, record the mean and standard deviation values, then increase the size of the data set to see if the mean and standard deviation change significantly. This procedure is meant to discover how many measurements are required to produce reliable results.\n\nAlgorithms\n\nThe algorithms that power this page are located here, a JavaScript source file released under the GPL. The source has these main sections:\n\n• Stats.calc() provides the Gaussian curve calculation functions.\n• Stats.process() provides the data processor functions.\n• Stats.animate() provides the graphics functions used by Figure 2 above.\n\nThe calculator requires an embodiment of the error function, provided here by Stats.erf() and Stats.erfc(). Again, integrating the Gaussian curve must be performed numerically, which leads to a certain amount of algorithmic complexity.\n\nI have created other versions of the gaussian curve calculator:\n• This Python script relies on the extensive Python scientific and technical libraries to provide the Gaussian calculator functions with much less complexity.\n• This Python script produces the same results as the above data processor.\n• This Java calculator project includes a statistics section that provides the same functions, but with the same level of complexity as the JavaScript source for lack of adequate high-level libraries in Java. Unfortunately because of security issues, Java applets are no longer a practical way to deliver technical Web content, but the calculator still works as a desktop application.\n\nThe statistical data processing is comparatively straightforward and should be easily understood by reading the JavaScript source.\n\nReferences\n\n Home | Mathematics | * Consumer Loan Calculator * Applied Mathematics * Calculus * Finance Calculator * Is Mathematics a Science? * Maxima * Sage * Trigonometric Relations * Unit Conversions Area of an Irregular Polygon Binomial Probability Coronavirus Math Equities Myths Graphinity Graphitude Introduction to Statistics LaTeX Editor Mandelbrot Set Peak People Polygon Calculator Polynomial Regression Data Fit Polynomial Regression Data Fit (Java version) Prime Numbers Quadratic Equation Solver Randomness Signal Processing The Mathematics of Pi The Mathematics of Population Increase The Physics Behind Stopping a Car Why PDF?", null, "", null, "", null, "Share This Page" ]
[ null, "https://arachnoid.com/images/leftarrow.png", null, "https://arachnoid.com/images/rightarrow.png", null, "https://arachnoid.com/images/addthis16.gif", null, "https://arachnoid.com/NormalDistribution/graphics/ndgraph.png", null, "https://arachnoid.com/NormalDistribution/graphics/sigma1graph.png", null, "https://arachnoid.com/NormalDistribution/graphics/boundsgraph.png", null, "https://arachnoid.com/images/leftarrow.png", null, "https://arachnoid.com/images/rightarrow.png", null, "https://arachnoid.com/images/addthis16.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88471407,"math_prob":0.987358,"size":15246,"snap":"2023-40-2023-50","text_gpt3_token_len":3422,"char_repetition_ratio":0.12793596,"word_repetition_ratio":0.020164609,"special_character_ratio":0.23678342,"punctuation_ratio":0.096693024,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99928254,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T06:25:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ffcad43b-138b-4f83-bc60-7aa323f0d2be>\",\"Content-Length\":\"41600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f50c8bb-2921-4c0d-9af5-e130847f2a52>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d452b28-4d3a-4a2b-8e60-bcee02539c5d>\",\"WARC-IP-Address\":\"142.11.206.210\",\"WARC-Target-URI\":\"https://arachnoid.com/NormalDistribution/index.html\",\"WARC-Payload-Digest\":\"sha1:GLPJB2LWPUUPAEC7YNQFEB762AG7MWB4\",\"WARC-Block-Digest\":\"sha1:MKDCW6F3ABHBBRP2CMMACDV3YMY6BBBN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100056.38_warc_CC-MAIN-20231129041834-20231129071834-00292.warc.gz\"}"}
http://mathspizza.com/folder_list/22
[ "# Our Downloads\n\n• Folder Name : GGB-IX MATHS\n\n• ### GGB > GGB-IX MATHS\n\n##### 7_Area of Circle\n\n7_Area of Circle\n\n##### 7_Fraction Multiplication\n\n7_Fraction Multiplication\n\n##### 8_Identity (a+b)^2-(a-b)^2=4ab\n\n8_Identity (a+b)^2-(a-b)^2=4ab\n\n##### 8_Identity a^2-b^2-second method\n\n8_Identity a^2-b^2-second method\n\n##### 8_Identity a^2-b^2\n\n8_Identity a^2-b^2\n\n##### 9_CIRCLE-concept of circle\n\n9_CIRCLE-concept of circle\n\n##### 9_COORDINATE GEOMETRY-plotting coordinates\n\n9_COORDINATE GEOMETRY-plotting coordinates\n\n##### 9_Geometry- SAS RULE\n\n9_Geometry- SAS RULE\n\n##### 9_IDENTITY-identity expansion of (ax+b)(cx+d)\n\n9_IDENTITY-identity expansion of (ax+b)(cx+d)\n\n##### 9_IDENTITY-identity square of (x+y+z)-NEW\n\n9_IDENTITY-identity square of (x+y+z)-NEW\n\n##### 9_LINES AND ANGLES-parallel lines and transversal -8 angles\n\n9_LINES AND ANGLES-parallel lines and transversal -8 angles\n\n##### 9_Polynomials-Algebraic identity\n\n9_Polynomials-Algebraic identity\n\n##### 9_STATISTICS-Mean Machine (AM)\n\n9_STATISTICS-Mean Machine (AM)\n\n##### 9_TRIANGLE-Angle Sum Property in a triangle\n\n9_TRIANGLE-Angle Sum Property in a triangle\n\n##### 9_TRIANGLE-exterior angle sum of triangle -audi car\n\n9_TRIANGLE-exterior angle sum of triangle -audi car\n\n##### 9_CIRCLE-concyclic points\n\n9_CIRCLE-concyclic points\n\n##### 9_CIRCLE-cyclic quadrilateral thm\n\n9_CIRCLE-cyclic quadrilateral thm\n\n##### 9_CIRCLE-cyclic quadrilateral thm- Special case\n\n9_CIRCLE-cyclic quadrilateral thm- Special case\n\n##### 9_MENSURATION-area of trapezium\n\n9_MENSURATION-area of trapezium\n\n##### 9_PARALLELOGRAM area of parallelogram\n\n9_PARALLELOGRAM area of parallelogram\n\n##### 9_PARALLELOGRAM-area_of_triangles_on_the_same_base between s\n\n9_PARALLELOGRAM-area_of_triangles_on_the_same_base between s\n\n##### 9_PARALLELOGRAM-parallelogram ppty- opp sides equal, opp ang\n\n9_PARALLELOGRAM-parallelogram ppty- opp sides equal, opp ang\n\n##### 9_POLYNOMIAL-FACTORING\n\n9_POLYNOMIAL-FACTORING\n\n##### 9_POLYNOMIAL-Poly zeros-final version\n\n9_POLYNOMIAL-Poly zeros-final version\n\n##### 9_TRIANGLE_angle sum ppty triangle using congruent triangle\n\n9_TRIANGLE_angle sum ppty triangle using congruent triangle\n\n##### 9_Euclids 5th postulate\n\n9_Euclids 5th postulate\n\n##### 9_TRIANGLE-exterior angle ppty of triangle\n\n9_TRIANGLE-exterior angle ppty of triangle\n\n##### 9_CIRCLE-equal chords equidistant\n\n9_CIRCLE-equal chords equidistant\n\n##### 8_Linear eqn\n\n8_Linear eqn\n\nSubscribe to our Newsletter to get notifications" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51366746,"math_prob":0.7927488,"size":342,"snap":"2019-35-2019-39","text_gpt3_token_len":83,"char_repetition_ratio":0.13905326,"word_repetition_ratio":0.0,"special_character_ratio":0.1871345,"punctuation_ratio":0.022727273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99754655,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T18:39:32Z\",\"WARC-Record-ID\":\"<urn:uuid:891702b4-12e9-4c7a-b124-3bf0973726ba>\",\"Content-Length\":\"94237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0292777b-e84a-47a5-a0aa-a70e8dfe980e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4f29b10-b60f-4d1f-bfa8-5270e2cbfc42>\",\"WARC-IP-Address\":\"103.44.220.135\",\"WARC-Target-URI\":\"http://mathspizza.com/folder_list/22\",\"WARC-Payload-Digest\":\"sha1:S3HYQCAWKB3KP7WWX5OTJWDNKW5DOKR3\",\"WARC-Block-Digest\":\"sha1:HC3E5UD7G3GEX5GPHJJYQ2XZ7CIQPHLG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314904.26_warc_CC-MAIN-20190819180710-20190819202710-00062.warc.gz\"}"}
http://ayotzinapasomostodos.com/lib/correlation_coefficient.htm
[ "correlation coefficient\n\nCorrelation Coefficient\n\nA number that is a measure of the strength and direction of the correlation between two variables. Correlation coefficients are expressed using the variable r, where r is between 1 and –1, inclusive. The closer r is to 1 or –1, the less scattered the points are and the stronger the relationship. Only data points with a scatterplot which is a perfectly straight line can have r = –1 or r = 1. When r < 0 the data have a negative association, and when r > 0 the data have a positive association." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9058281,"math_prob":0.99638444,"size":651,"snap":"2019-43-2019-47","text_gpt3_token_len":150,"char_repetition_ratio":0.114374034,"word_repetition_ratio":0.018867925,"special_character_ratio":0.21505377,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9919218,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T09:39:18Z\",\"WARC-Record-ID\":\"<urn:uuid:94afb899-73bc-47ac-bce7-d5992b83f0c2>\",\"Content-Length\":\"4124\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f1420ed-1f92-46c3-9e6e-ea270113fe0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d61903e-3691-4b26-8ab3-574a7e16382f>\",\"WARC-IP-Address\":\"173.208.172.18\",\"WARC-Target-URI\":\"http://ayotzinapasomostodos.com/lib/correlation_coefficient.htm\",\"WARC-Payload-Digest\":\"sha1:JD4F3CG6DVAOAITS5CCARNSPEPN44PGE\",\"WARC-Block-Digest\":\"sha1:QHYOKIJ36O6FMDJEYDSF2WUTCRNRO345\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692723.54_warc_CC-MAIN-20191019090937-20191019114437-00398.warc.gz\"}"}
https://www.wikibacklink.com/search/yards-to-feet-conversion-chart-pdf
[ "Keyword Analysis & Research: yards to feet conversion chart pdf\n\nKeyword Research: People who searched yards to feet conversion chart pdf also searched\n\nWhat is the formula to convert feet to yards?\n\nConvert feet to yards with this simple formula: yards = feet × 0.333333. Converting a foot length measurement to a yard measurement involves multiplying your length by the conversion ratio to find the result. A foot is equal to 0.333333 yards, so to convert simply multiply by 0.333333. For example: 5' = (5 × 0.333333) = 1.666667 yd.\n\nHow do you convert feet into yards?\n\nTo convert feet to yards quickly and easily, you can divide the measurement in feet by 3 to convert it to yards because each yard contains 3 feet. For example, if you have a measurement of 12 feet, you would divide the number by 3 to convert the measurement to 4 yards.\n\nHow do you convert linear feet into square yards?\n\nTo convert linear feet to square yards, it is necessary to find the square feet first by multiplying the length by the width and then dividing the square feet by 9. To illustrate, a carpet with a linear feet or length of 10 feet and a width of 12 feet is 120 square feet." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8444101,"math_prob":0.98959136,"size":1344,"snap":"2022-05-2022-21","text_gpt3_token_len":403,"char_repetition_ratio":0.1597015,"word_repetition_ratio":0.024691358,"special_character_ratio":0.3690476,"punctuation_ratio":0.12828948,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982821,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T16:41:21Z\",\"WARC-Record-ID\":\"<urn:uuid:7d7bc24f-f5f6-4cb3-97b8-0fe9958c7e2d>\",\"Content-Length\":\"97236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81ce6438-2f1f-4ba4-a60c-ae440e6b8ef6>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ba7bd72-4bd9-4454-84e3-ce76b408285b>\",\"WARC-IP-Address\":\"172.67.152.189\",\"WARC-Target-URI\":\"https://www.wikibacklink.com/search/yards-to-feet-conversion-chart-pdf\",\"WARC-Payload-Digest\":\"sha1:BFVSEZ7C3A76K457GX3UMYXURZOH3ZT5\",\"WARC-Block-Digest\":\"sha1:5PAC7SHYEXZJ2UUJ6EM5TGYU2WNBCWP2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304287.0_warc_CC-MAIN-20220123141754-20220123171754-00224.warc.gz\"}"}
http://230nsc1.phy-astr.gsu.edu/hbase/balpen.html
[ "# Ballistic Pendulum\n\nThe ballistic pendulum is a classic example of a dissipative collision in which conservation of momentum can be used for analysis, but conservation of energy during the collision cannot be invoked because the energy goes into inaccessible forms such as internal energy. After the collision, conservation of energy can be used in the swing of the combined masses upward, since the gravitational potential energy is conservative.", null, "Calculation\nIndex\n\nCollision concepts\n\n HyperPhysics***** Mechanics R Nave\nGo Back\n\n# Ballistic Pendulum", null, "In the back courtyard of the munitions factory hung an old, scarred block of wood. As quality control for the cartridges coming off the assembly line, someone would regularly take a gun to the courtyard and fire a bullet into the block. Measuring the height of the swing revealed the speed of the bullet, but since the block was increasing in mass with the added bullets, the mass of the block had to be checked as well as the mass of the bullet being fired.\n\nIf a bullet of mass m = grams\n\nand a muzzle velocity u = m/s = km/h = mi/h\n\nis fired into a block of mass M = grams\n\nthen the velocity of the block and bullet after the impact would be\n\nv = m/s = km/h = mi/h\n\nand the combination of block and bullet would swing above its original height by an amount\n\nh = m = cm = ft.\n\nComments on calculation: If a value for the velocity of the bullet, u, or either of the masses is entered, the velocity v after the collision and the height of swing is calculated. If either the velocity v after the collision or the height h is entered, then the other values will be calculated presuming the current values of the masses. A value of g = 9.8 m/s2 is assumed in the calculation.\n\n Further discussion\nIndex\n\nCollision concepts\n\n HyperPhysics***** Mechanics R Nave\nGo Back" ]
[ null, "http://230nsc1.phy-astr.gsu.edu/hbase/imgmec/bulletblock.gif", null, "http://230nsc1.phy-astr.gsu.edu/hbase/imgmec/bpen.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9318346,"math_prob":0.9500581,"size":877,"snap":"2020-34-2020-40","text_gpt3_token_len":190,"char_repetition_ratio":0.16609393,"word_repetition_ratio":0.025157232,"special_character_ratio":0.21208666,"punctuation_ratio":0.09039548,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9791352,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T06:07:08Z\",\"WARC-Record-ID\":\"<urn:uuid:4a91e014-5a8e-47bc-9ba5-1e2a3dbd2122>\",\"Content-Length\":\"6156\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8df1c7cf-f0dd-434d-8306-f532b270eb04>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d56d39d-4cd3-4dc0-8a45-a87d0691823c>\",\"WARC-IP-Address\":\"131.96.55.77\",\"WARC-Target-URI\":\"http://230nsc1.phy-astr.gsu.edu/hbase/balpen.html\",\"WARC-Payload-Digest\":\"sha1:VXZCTFGCWHZJ2KFSHVEWNZE2J3KNJPIT\",\"WARC-Block-Digest\":\"sha1:V4XH2VX4TVMPVLKHKB5MJZAV2NCM626A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739177.25_warc_CC-MAIN-20200814040920-20200814070920-00356.warc.gz\"}"}
https://primes.utm.edu/curios/page.php/6071.html
[ "# 6071\n\nThis number is a composite.", null, "6071 is the only known number n such that 0^1 + 1^2 + 2^3 + 3^4 + .... + (n-2)^(n-1) + (n-1)^n is prime (as of 2018). [Antonious]" ]
[ null, "https://primes.utm.edu/gifs/check.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8809484,"math_prob":0.9999068,"size":157,"snap":"2021-43-2021-49","text_gpt3_token_len":68,"char_repetition_ratio":0.09150327,"word_repetition_ratio":0.0,"special_character_ratio":0.522293,"punctuation_ratio":0.14634146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991567,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T18:29:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ba1730d8-65a4-44a0-bb2e-b9540979206f>\",\"Content-Length\":\"9446\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bb02088-fc6e-4060-9172-6195e99359d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2d0aa1f-44ac-418c-b86f-d34149454612>\",\"WARC-IP-Address\":\"208.87.74.54\",\"WARC-Target-URI\":\"https://primes.utm.edu/curios/page.php/6071.html\",\"WARC-Payload-Digest\":\"sha1:XPY6QP6QD7KVCFIXAI4NBWMXSRIEUSHQ\",\"WARC-Block-Digest\":\"sha1:2BQLMLDZSPUFHXNF45JFOSF7HASXH2RE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363520.30_warc_CC-MAIN-20211208175210-20211208205210-00174.warc.gz\"}"}
https://www.fractii.ro/fractions-multiplication-results-explained.php?multiplied_fractions=14/20*-22/42
[ "# Multiply fractions: 14/20 × - 22/42 = ? Multiplication result of the ordinary (simple, common) fractions explained\n\n## The latest fractions multiplied\n\n 14/20 × - 22/42 = ? Jan 19 14:17 UTC (GMT) 100/40 × 100/250 = ? Jan 19 14:17 UTC (GMT) 38/13 × - 55/15 = ? Jan 19 14:17 UTC (GMT) 600 × 2/5 = ? Jan 19 14:17 UTC (GMT) 209/5 × 6/33 = ? Jan 19 14:16 UTC (GMT) - 32/47 × 221/25 = ? Jan 19 14:16 UTC (GMT) 24/31 × 22/28 = ? Jan 19 14:16 UTC (GMT) 65/86 × 17/88 = ? Jan 19 14:16 UTC (GMT) - 19/21 × 13/20 × - 12/27 = ? Jan 19 14:16 UTC (GMT) 24/45 × - 62/26 = ? Jan 19 14:16 UTC (GMT) 39/59 × - 1,845/40 = ? Jan 19 14:16 UTC (GMT) 12/30 × - 30/16 = ? Jan 19 14:16 UTC (GMT) 13/14 × 7/11 = ? Jan 19 14:16 UTC (GMT) see more... ordinary (common) fractions multiplied by users\n\n## How to multiply two fractions?\n\n#### When we multiply ordinary fractions, the end fraction will have:\n\n• as a numerator, the result of multiplying all the numerators of the fractions,\n• as a denominator, the result of multiplying all the denominators of the fractions.\n• a/b × c/d = (a × c) / (b × d)\n• a, b, c, d are integer numbers;\n• if the pairs (a × c) and (b × d) are not coprime (they have common prime factors) the end fraction should be reduced (simplified) to lower terms.\n\n### How to multiply ordinary fractions? Steps.\n\n• Start by reducing fractions to lower terms (simplifying).\n• Reduce math fractions to lower terms, online, with explanations.\n• Factor the numerators and the denominators of the reduced fractions: break them down to their prime factors.\n• Calculate the prime factors of numbers, online calculator\n• Above the fraction bar we write the product of all the prime factors of the fractions' numerators, without doing any calculations.\n• Below the fraction bar we write the product of all the prime factors of the fractions' denominators, without doing any calculations.\n• Cross out all the common prime factors that appear both above and below the fraction bar.\n• Multiply the remaining prime factors above the fraction bar - this will be the numerator of the resulted fraction.\n• Multiply the remaining prime factors below the fraction bar - this will be the denominator of the resulted fraction.\n• There is no need to reduce (simplify) the resulting fraction, since we have already crossed out all the common prime factors.\n• If the resulted fraction is an improper one (without considering the sign, the numerator is larger than the denominator), it could be written as a mixed number, consisting of an integer and a proper fraction of the same sign.\n• Write improper fractions as mixed numbers, online.\n• Multiply ordinary fractions, online, with explanations." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79918295,"math_prob":0.9910975,"size":4569,"snap":"2021-04-2021-17","text_gpt3_token_len":1377,"char_repetition_ratio":0.1754655,"word_repetition_ratio":0.17334779,"special_character_ratio":0.38586125,"punctuation_ratio":0.13294117,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985246,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T14:17:24Z\",\"WARC-Record-ID\":\"<urn:uuid:2ea16a36-7b62-46bb-b32b-a277e49fbd9f>\",\"Content-Length\":\"44831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:909ac2e9-b384-440b-b430-5103d868d30f>\",\"WARC-Concurrent-To\":\"<urn:uuid:80539e73-1fbe-4eb5-a4f4-0b654942a152>\",\"WARC-IP-Address\":\"93.115.53.187\",\"WARC-Target-URI\":\"https://www.fractii.ro/fractions-multiplication-results-explained.php?multiplied_fractions=14/20*-22/42\",\"WARC-Payload-Digest\":\"sha1:OGYUBYKTNVUY64BYWWCRUSWHTP6PW2O2\",\"WARC-Block-Digest\":\"sha1:3OAQF4BPXIJDQGYOE6LU44DVPSBBMXDZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703519395.23_warc_CC-MAIN-20210119135001-20210119165001-00058.warc.gz\"}"}
https://www.mrexcel.com/board/threads/converting-cell-formula-max-to-vba-formula.1097497/
[ "# Converting Cell Formula (MAX) to VBA Formula\n\n#### Rufus Clupea\n\n##### Board Regular\nHi Folks,\n\nJust a quickie (I think/hope)...", null, "=TRUNC(MAX(((MAX(E5-5,(E8-4)))/10),1))\n\nthat I needed to move into code, i.e.:\nSheet1.Range(\"J18\") = TRUNC(Max(((Max(E5 - 5, (E8 - 4))) / 10), 1))\n\nThe VBE/Compiler doesn't like it in this form.\n\n• Putting Sheet1.Range and quotes around E5 & E8\n• Putting WorksheetFunction. in front of the 2 Maxes\n• Using: MyValue = Application.WorksheetFunction.Max()\n\nbut I don't think I'm implementing it correctly. (I tried defining it as a function).\n\nCode:\n``````Public Function MyValue()\nMyValue = Application.WorksheetFunction.Max()[FONT=verdana]\n\nSheet1.Range(\"J18\") = TRUNC(MyValue(((Myvalue(E5 - 5, (E8 - 4))) / 10), 1))\n[/FONT]``````\nI've also tried every combination of the above that I could think of.\n\nI'm stumped.\n\nLast edited:\n\n### Excel Facts\n\nFormula for Yesterday\nName Manager, New Name. Yesterday =TODAY()-1. OK. Then, use =YESTERDAY in any cell. Tomorrow could be =TODAY()+1.\n\n#### Fluff\n\n##### MrExcel MVP, Moderator\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nIf you just want the value, try\nCode:\n``Range(\"J18\").Value = [TRUNC(MAX(((MAX(E5-5,(E8-4)))/10),1))]``\n\n•", null, "Rufus Clupea\n\n#### Rufus Clupea\n\n##### Board Regular\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nWow, this is the first time I've seen that notation!", null, "Of course it worked; I'm just not sure why...\nThe page I first looked it up on says:\n... the square brackets are a replacement for the Range/Parentheses/Quotation Marks construct.\nOK, then it goes on to say:\nIt can be used on either side of the equal sign.\nHuh?", null, "I'm having a little trouble getting my head around/understanding that.\n\nGuess I've got some more reading/hair-pulling ahead...", null, "Thanks for the tip.\n\nNow, how do I ask this without sounding like a jerk... Is there another different solution I should know about for similar situations in the future?", null, "Last edited:\n\n#### Fluff\n\n##### MrExcel MVP, Moderator\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nWith VBA there are always different ways to do something", null, "The [] are a shorthand version of\nCode:\n``Range(\"J18\").Value = Evaluate(\"TRUNC(MAX(((MAX(E5-5,(E8-4)))/10),1))\")``\n\n#### Rufus Clupea\n\n##### Board Regular\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nAha, YES!", null, "Odd that neither of these... techniques(?) are even mentioned in the VBA books I have (or maybe not?) :banghead:\n\nThank you.\n\n#### Fluff\n\n##### MrExcel MVP, Moderator\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nYou're welcome & thanks for the feedback\n\n#### Rufus Clupea\n\n##### Board Regular\nRe: Need Help Converting Cell Formula (MAX) to VBA Formula\n\nThat should have been...\nOdd that neither of these... techniques(?) are even mentioned in the VBA books I have (or maybe not so odd?) :banghead:\nI think I'm collecting quite a bit of feedback for authors/potential authors (having done some writing myself in the distant past...", null, ") Lots has to do with their indices (and things that should be there, but are hidden in obscure places...", null, ")\n\nLast edited:" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.865146,"math_prob":0.8097314,"size":1342,"snap":"2020-24-2020-29","text_gpt3_token_len":421,"char_repetition_ratio":0.09192825,"word_repetition_ratio":0.5728643,"special_character_ratio":0.3353204,"punctuation_ratio":0.18928571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9746638,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T00:38:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a0683567-2ccf-4108-a734-6f8cbb2c95f3>\",\"Content-Length\":\"85116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a578ba5c-7216-457a-bee7-4f7caefeab25>\",\"WARC-Concurrent-To\":\"<urn:uuid:c538e47e-a454-4808-bbc7-d34ebdae67c8>\",\"WARC-IP-Address\":\"216.92.17.166\",\"WARC-Target-URI\":\"https://www.mrexcel.com/board/threads/converting-cell-formula-max-to-vba-formula.1097497/\",\"WARC-Payload-Digest\":\"sha1:Y746IEWEJZR4FNN4XULUSJASJ664DYJ3\",\"WARC-Block-Digest\":\"sha1:OG3E3JXGJY5NM7I65N5LGLTCFKVGQ66M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347391923.3_warc_CC-MAIN-20200526222359-20200527012359-00167.warc.gz\"}"}
https://www.dummies.com/article/academics-the-arts/math/pre-algebra/how-to-find-the-mean-and-median-in-quantitative-data-191316/
[ "##### Statistics Essentials For Dummies", null, "Quantitative data assigns a numerical value to each member of a statistical sample. You can use this information to find the mean and median values.\n\nThe following sample — which is purely fictional — uses five members of Sister Elena’s basketball team. Suppose that the information in the following list has been gathered about each team member’s height and most recent spelling test.", null, "You can use this information to find the mean and median for both sets of data. Both terms refer to ways to calculate the average value in a quantitative data set. An average gives you a general idea of where most individuals in a data set fall so you know what kinds of results are standard. For example, the average height of Sister Elena’s fifth grade class is probably less than the average height of the Los Angeles Lakers. As you will learn here, an average can be misleading in some cases, so knowing when to use the mean versus the median is important.\n\n## How to find the mean\n\nThe mean is the most commonly used average. In fact, when most people use the word average, they’re referring to the mean. Here’s how you find the mean of a set of data:\n1. Add up all the numbers in that set.\n\nFor example, to find the average height of the five team members, first add up all their heights:\n\n55 + 60 + 59 + 58 + 63 = 295\n\n2. Divide this result by the total number of members in that set.\n\nDivide 295 by 5 (that is, by the total number of boys on the team):\n\n295 ÷ 5 = 59\n\nSo the mean height of the boys on Sister Elena’s team is 59 inches.\n\nSimilarly, to find the mean number of words that the boys spelled correctly, first add up the number of words they spelled correctly:\n\n18 + 20 + 14 + 17 + 18 = 87\n\nNow divide this result by 5:\n\n87 ÷ 5 = 17.4\n\nAs you can see, when you divide, you end up with a decimal in your answer. If you round to the nearest whole word, the mean number of words that the five boys spelled correctly is about 17 words.\n\nThe mean can be misleading when you have a strong skew in data — that is, when the data has many low values and a few very high ones, or vice versa.\n\nFor example, suppose that the president of a company tells you, “The average salary in my company is \\$200,000 a year!” But on your first day at work, you find out that the president’s salary is \\$19,010,000 and each of his 99 employees earns \\$10,000. To find the mean, add up the total salaries:\n\n\\$19,010,000 + (99 \\$10,000) = \\$20,000,000\n\nNow divide this number by the total number of people who work there:\n\n\\$20,000,000 ÷ 100 = \\$200,000\n\nSo the president didn’t lie. However, the skew in salaries resulted in a misleading mean.\n\n## How to find the median\n\nWhen data values are skewed (when a few very high or very low numbers differ significantly from the rest of the data), the median can give you a more accurate picture of what’s standard. Here’s how to find the median of a set of data:\n1. Arrange the set from lowest to highest.\n\nTo find the median height of the boys in the above table, arrange their five heights in order from lowest to highest.\n\n55 58 59 60 63\n\n2. Choose the middle number.\n\nThe middle value, 59 inches, is the median average height.\n\nTo find the median number of words that the boys spelled correctly (see the above table), arrange their scores in order from lowest to highest:\n\n14 17 18 18 20\n\nThis time, the middle value is 18, so 18 is the median score.\n\nIf you have an even number of values in the data set, put the numbers in order and find the mean of the two middle numbers in the list. For instance, consider the following:\n\n2 3 5 7 9 11\n\nThe two center numbers are 5 and 7. Add them together to get 12 and then divide by 2 to get their mean. The median in this list is 6.\n\nNow recall the company president who makes \\$19,010,000 a year and his 99 employees who each earn \\$10,000. Here’s how this data looks:\n\n10,000 10,000 10,000 ... 10,000 19,010,000\n\nAs you can see, if you were to write out all 100 salaries, the center numbers would obviously both be 10,000. The median salary is \\$10,000, and this result is much more reflective of what you’d probably earn if you were to work at this company." ]
[ null, "https://catalogimages.wiley.com/images/db/jimages/9781119590309.jpg", null, "https://www.dummies.com/wp-content/uploads/216059.image0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69154817,"math_prob":0.97445387,"size":24131,"snap":"2022-40-2023-06","text_gpt3_token_len":6726,"char_repetition_ratio":0.12807229,"word_repetition_ratio":0.8507384,"special_character_ratio":0.33007336,"punctuation_ratio":0.23734055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98888725,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T12:28:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c8549a98-009d-45cd-b8e6-be71c0a2cadb>\",\"Content-Length\":\"76695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53ccd11b-5062-41b0-b727-c01d02005476>\",\"WARC-Concurrent-To\":\"<urn:uuid:6bc7f002-6015-4560-9c5a-04d4d484a5fb>\",\"WARC-IP-Address\":\"104.18.13.160\",\"WARC-Target-URI\":\"https://www.dummies.com/article/academics-the-arts/math/pre-algebra/how-to-find-the-mean-and-median-in-quantitative-data-191316/\",\"WARC-Payload-Digest\":\"sha1:FBJAN34H7XBTE7CPIASDDZKCRQGSWK5L\",\"WARC-Block-Digest\":\"sha1:H7FBMW5TVNI7YESVHFYF65D6CJPBNHP6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499713.50_warc_CC-MAIN-20230129112153-20230129142153-00039.warc.gz\"}"}
https://docs.moodle.org/38/en/MathType
[ "# MathType\n\n## What is MathType?\n\nMathType is a software application created by Design Science, that allows the creation of mathematical notation for word processors (Microsoft Word, Apple Pages, OpenOffice, Google Docs), web pages, desktop publishing, and presentations (PowerPoint, Keynote), as well as for TeX, LaTeX, and MathML documents.\n\nMathType works with Moodle to create equations for your course. You can use MathType's built-in Moodle: TeX filter translator to add mathematical notation directly into Moodle Assignment, Chat, Feedback, Forum, Glossary, Lesson, Quiz, and Wiki modules. Note that you can also add MathType equations to Moodle modules as GIF images. Refer to MathType Help for guidance with images.\n\n## Add a MathType equation to Moodle\n\nIf you have MathType equations in existing documents or presentations and you'd like to use these in a Moodle course, or if you want to create a new equation to use in Moodle, follow these steps. This will work in the Moodle modules mentioned above. There are more detailed instructions for Quiz and Chat modules later in this article; instructions for other modules are similar, and the steps in this section should be all you need.\n\n1. Open the equation in MathType, or create it if it doesn't already exist.\n2. In MathType's Preferences menu, choose Cut and Copy Preferences.\n3. From the Equation for application or website group, choose the Moodle: TeX filter translator and click OK. You don't need to perform this step for every equation. Once you choose the appropriate translator, future equations will be processed using this translator until you choose a different one. Note: If you're using a Macintosh, procedures will be the same, but the dialog will look slightly different from the one shown here.\n4. Select the equation that you created or opened in step 1. If you want to use only part of the equation in the MathType editing area, select only the part you want.\n5. Copy the equation or equation fragment. If there were no errors during translation, MathType will display a Status Bar message similar to this one:\n\nIn rare cases, you may get an error message to the effect that there is \"No translation available for [name of symbol]\". The most common instance of this error is after using one of the expanding integral templates, which you can create in MathType by depressing the Shift key as you choose any integral from the Integral templates palette. If you have used expanding integral templates, you'll need to replace them with non-expanding integrals before you translate the equations to LaTeX.\n\n6. Click in the Moodle Resource or Activity where you want the equation to appear, then paste the equation. Note: If you're using the HTML Editor, you may prefer to click on the \"Toggle HTML Source\" icon", null, ", and paste the equation into the HTML source, but that's not required.\n\nThe \"built-up\" equation won't appear until you click the \"Save changes\" button, the \"Post to forum\" button, or other appropriate button in Moodle.\n\n### Example: Add equations to Moodle assessments\n\nIn this example and the next, we'll show how to use MathType with two of the most popular Moodle modules -- quiz and chat. With MathType and the Moodle TeX filter, you can add mathematical expressions both to an assessment question and to its answer choices. To show this capability, we'll write a quiz question for a multiple choice quiz. We assume you're familiar with the quiz module, so the steps here will not be complete instructions in using this module.\n\n1. Let's say we want to write a quiz about adding rational expressions, and in one of our questions we want the students to identify the least common denominator (LCD) of two expressions. Write \"Find the LCD: \" in the question editor:\n2. In MathType, write the expressions we want to include in the question. For our example, we'll use these two rational expressions:", null, "and", null, ". Choose MathType's Moodle: TeX filter translator as described above. Copy the expressions from MathType and paste into the question editor:\n3. We'll use these 4 answer choices: x – 2, x + 2, x² – 4, and 1. You can use MathType and the TeX filter for these, or not -- your choice. We'll use MathType for the first 3, and for the last one we'll just type in the number \"1\". Copy each answer choice from MathType and paste it into the respective space for the answer choice:\n4. For the correct answer choice, choose Grade 100% from the dropdown box, and click Save at the bottom of the page.\n5. This is how our question looks in the preview:\n\n### Example: Add equations to Moodle chat\n\nYou can use MathType to add equations to Moodle chats. Here's how:\n\n1. Choose MathType's Moodle: TeX filter translator as described above.\n2. Once you're in the chat session, you can create mathematical expressions in MathType, then copy them and paste into the chat text entry window. The math will not convert to a \"built-up\" equation until you press Enter:\n3. It's not necessary for other chat participants to have MathType in order to be able to see your equations. They will be visible to everyone in the chat session.\n4. If you need to use one of the chat equations again, simply drag it from the chat session and drop it into MathType (or use copy & paste), as described in the next section. Then you can edit it and paste the modified equation back in a new reply. You can see an example of this in the first 11:57 entry in this chat:\n\n## Copy equations from Moodle and paste into MathType\n\nIf you have an equation in your own course, or if you find an equation in another course that you want to use in a document or presentation outside of Moodle, it's possible to do that by using MathType. Not all filters have this capability though, and not all versions of MathType. If you try these steps and it doesn't work, it's possible that the equation was created with a Moodle filter other than the TeX filter.\n\n1. One way to tell if an equation can be copied into MathType is to hover the mouse pointer over an equation. If you get this type of pop-up when you hover the mouse pointer over the equation, the following steps will allow you to use the equation in MathType, just as you would any MathType equation. If you don't see the pop-up though, it doesn't mean the equation won't work. It might mean simply that your browser isn't displaying the pop-up. It's still worth a try to see if these steps will work.\n2. Select the equation by dragging the mouse pointer across it:\n3. Copy the equation. Don't use the right-click (Macintosh: Ctrl+click) contextual menu to choose the \"Copy Image\" or \"Copy Picture\" command, because doing so won't work. If the contextual menu for your browser has a \"Copy\" command, you can use this command if you want.\n4. Open MathType and paste the equation into the MathType editing area. The equation is now a MathType equation that you can edit and use just like any other MathType equation.\n\n## Feedback\n\nIf you have experience using MathType with Moodle please send questions, comments and suggestions directly to Design Science at [email protected] or @MathType on Twitter." ]
[ null, "https://docs.moodle.org/38/en/images_en/9/90/toggle_html_source.gif", null, "https://docs.moodle.org/38/en/images_en/e/e4/Fraction1.gif", null, "https://docs.moodle.org/38/en/images_en/d/db/Fraction2.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90289474,"math_prob":0.8114209,"size":6521,"snap":"2021-21-2021-25","text_gpt3_token_len":1399,"char_repetition_ratio":0.17032377,"word_repetition_ratio":0.01604278,"special_character_ratio":0.21070388,"punctuation_ratio":0.109412715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862886,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T07:23:11Z\",\"WARC-Record-ID\":\"<urn:uuid:1f701f06-0a2e-4fda-acee-5523589a767f>\",\"Content-Length\":\"37274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3b29f03-987b-45dd-a3cc-16acc0604514>\",\"WARC-Concurrent-To\":\"<urn:uuid:42b35904-c4e5-4f28-97b7-fb0aac6e7b89>\",\"WARC-IP-Address\":\"52.214.122.38\",\"WARC-Target-URI\":\"https://docs.moodle.org/38/en/MathType\",\"WARC-Payload-Digest\":\"sha1:WGYRCIF7FRA4ZAUKIKVDFYWLYFTV32J6\",\"WARC-Block-Digest\":\"sha1:YK7FG7Z5U7NPXJC6XLEPQN5B5HSXIRBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991537.32_warc_CC-MAIN-20210513045934-20210513075934-00228.warc.gz\"}"}
https://www.grc.nasa.gov/WWW/k-12/airplane/nozzled.html
[ "", null, "+ Text Only Site\n+ Non-Flash Version\n+ Contact Glenn", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Ramjets, scramjets, and rockets all use nozzles to accelerate hot exhaust to produce thrust as described by Newton's third law of motion. The amount of thrust produced by the engine depends on the mass flow rate through the engine, the exit velocity of the flow, and the pressure at the exit of the engine. The value of these three flow variables are all determined by the nozzle design. A nozzle is a relatively simple device, just a specially shaped tube through which hot gases flow. Ramjets and rockets typically use a fixed convergent section followed by a fixed divergent section for the design of the nozzle. This nozzle configuration is called a convergent-divergent, or CD, nozzle. In a CD nozzle, the hot exhaust leaves the combustion chamber and converges down to the minimum area, or throat, of the nozzle. The throat size is chosen to choke the flow and set the mass flow rate through the system. The flow in the throat is sonic which means the Mach number is equal to one in the throat. Downstream of the throat, the geometry diverges and the flow is isentropically expanded to a supersonic Mach number that depends on the area ratio of the exit to the throat. The expansion of a supersonic flow causes the static pressure and temperature to decrease from the throat to the exit, so the amount of the expansion also determines the exit pressure and temperature. The exit temperature determines the exit speed of sound, which determines the exit velocity. The exit velocity, pressure, and mass flow through the nozzle determines the amount of thrust produced by the nozzle. On this slide we derive the equations which explain and describe why a supersonic flow accelerates in the divergent section of the nozzle while a subsonic flow decelerates in a divergent duct. We begin with the conservation of mass equation: mdot = r * V * A = constant where mdot is the mass flow rate, r is the gas density, V is the gas velocity, and A is the cross-sectional flow area. If we differentiate this equation, we obtain: V * A * dr + r * A * dV + r * V * dA = 0 divide by (r * V * A) to get: dr / r + dV / V + dA / A = 0 Now we use the conservation of momentum equation: r * V * dV = - dp and an isentropic flow relation: dp / p = gam * dr / r where gam is the ratio of specific heats. This is Equation #10 on the page which contains the derivation of the isentropic flow relations We can use algebra on this equation to obtain: dp = gam * p / r * dr and use the equation of state p / r = R * T where R is the gas constant and T is temperature, to get: dp = gam * R * T * dr gam * R * T is the square of the speed of sound a: dp = (a^2) * dr Combining this equation for the change in pressure with the momentum equation we obtain: r * V * dV = - (a^2) * dr V / (a^2) * dV = - dr / r - (M^2) * dV / V = dr / r using the definition of the Mach number M = V / a. Now we substitute this value of (dr /r) into the mass flow equation to get: - (M^2) * dV / V + dV / V + dA / A = 0 (1 - M^2) * dV / V = - dA / A This equation tells us how the velocity V changes when the area A changes, and the results depend on the Mach number M of the flow. If the flow is subsonic then (M < 1) and the term multiplying the velocity change is positive (1 - M^2 > 0). An increase in the area (dA > 0 ) produces a negative increase (decrease) in the velocity (dV < 0). For our CD nozzle, if the flow in the throat is subsonic, the flow downstream of the throat will decelerate and stay subsonic. So if the converging section is too large and does not choke the flow in the throat, the exit velocity is very slow and doesn't produce much thrust. On the other hand, if the converging section is small enough so that the flow chokes in the throat, then a slight increase in area causes the flow to go supersonic. For a supersonic flow (M > 1) the term multiplying velocity change is negative (1 - M^2 < 0). Then an increase in the area (dA > 0) produces an increase in the velocity (dV > 0). This effect is exactly the opposite of what happens subsonically. Why the big difference? Because, to conserve mass in a supersonic (compressible) flow, both the density and the velocity are changing as we change the area. For subsonic (incompressible) flows, the density remains fairly constant, so the increase in area produces only a change in velocity. But in supersonic flows, there are two changes; the velocity and the density. The equation: - (M^2) * dV / V = dr / r tells us that for M > 1, the change in density is much greater than the change in velocity. To conserve both mass and momentum in a supersonic flow, the velocity increases and the density decreases as the area is increased. Activities: Guided Tours Navigation ..", null, "", null, "", null, "Beginner's Guide Home Page", null, "+ Inspector General Hotline + Equal Employment Opportunity Data Posted Pursuant to the No Fear Act + Budgets, Strategic Plans and Accountability Reports + Freedom of Information Act + The President's Management Agenda + NASA Privacy Statement, Disclaimer, and Accessibility Certification", null, "Editor: Nancy Hall NASA Official: Nancy Hall Last Updated: May 05 2015 + Contact Glenn" ]
[ null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/logo_nasa.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/title_find_it_sm.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/spacer.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/button_go.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_0_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_1_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_2_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_3_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_4_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nav_top_5_0.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/nozzled.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/butpi.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/buthi.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/butci.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/logo_first_gov.gif", null, "https://www.grc.nasa.gov/WWW/k-12/airplane/Images/logo_nasa_self.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8619737,"math_prob":0.9914389,"size":4735,"snap":"2020-24-2020-29","text_gpt3_token_len":1261,"char_repetition_ratio":0.14817163,"word_repetition_ratio":0.054603856,"special_character_ratio":0.25174233,"punctuation_ratio":0.085653104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99634767,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T02:37:20Z\",\"WARC-Record-ID\":\"<urn:uuid:4973c2d2-facb-4b13-a274-65c4b035a0c2>\",\"Content-Length\":\"14419\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cb98f15-b10a-47b0-8bf0-70e8a3775840>\",\"WARC-Concurrent-To\":\"<urn:uuid:74d0bc77-e2c0-44db-a42a-62ddda8e96fa>\",\"WARC-IP-Address\":\"128.156.253.25\",\"WARC-Target-URI\":\"https://www.grc.nasa.gov/WWW/k-12/airplane/nozzled.html\",\"WARC-Payload-Digest\":\"sha1:B6VJTTEZIXE6VP5ID7R5JV7CP2KKSJG3\",\"WARC-Block-Digest\":\"sha1:4QYS3HFMPQ55AH74Z5BTGTGBURQFIWXK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655883961.50_warc_CC-MAIN-20200704011041-20200704041041-00305.warc.gz\"}"}
https://jackwestin.com/resources/mcat-content/geometrical-optics/combination-of-lenses
[ "MCAT Content / Geometrical Optics / Combination Of Lenses\n\n### Combination of lenses\n\nTopic: Geometrical Optics\n\nAn array of simple lenses with a common axis can be used to multiply the magnification of an image. The real image formed by one lens can be used as the object for another lens, combining magnifications.\n\nLenses in contact: The simplest case is where lenses are placed in contact: if the lenses of focal lengths f1 and f2 are “thin”, the combined focal length f of the lenses is given by", null, "Since 1/f is the power of a lens, it can be seen that the powers of thin lenses in contact are additive.\n\nSeparated lenses: If two thin lenses are separated in air by some distance d (where d is smaller than the focal length of the first lens), the focal length for the combined system is given by", null, "The distance from the second lens to the focal point of the combined lenses is called the back focal length (BFL).", null, "As d tends to zero, the value of the BFL tends to the value of f given for thin lenses in contact.\n\nIf the separation distance is equal to the sum of the focal lengths (d = f1+f2), the combined focal length and BFL are infinite. This type of system is called an afocal system since it produces no net convergence or divergence of the beam: transform a parallel beam into another parallel beam (collimated). Although the system does not alter the divergence of a collimated beam, it does alter the width of the beam. The magnification can be found by dividing the focal length of the two lenses.\n\nTwo lenses at this separation form the simplest type of optical telescope. A schematic of a simple telescope is a good example of the use of two lenses to focus the image of one lens:", null, "Practice Questions\n\nConverging and diverging lenses in a lab\n\nA mirror in an operating room\n\nMCAT Official Prep (AAMC)\n\nKey Points\n\n• If the lenses of focal lengths f1 and f2are “thin”, the combined focal length f of the lenses is given by 1/f=1/f1+1/f2\n\n• If the lenses are separated by some distance d, then the combined focal length is given by 1/f=1/f1+1/f2−d/(f1f2)\n\n• If the separation distance is equal to the sum of the focal lengths (d = f1+f2), the combined focal length is infinite. This type of system is called an afocal system (a simple optical telescope).\n\nKey Terms\n\nafocal system: an optical system that produces no net convergence or divergence of the beam.\n\nBilling Information\nWe had trouble validating your card. It's possible your card provider is preventing us from charging the card. Please contact your card provider or customer support.\n{{ cardForm.errors.get('number') }}\n{{ registerForm.errors.get('zip') }}\n{{ registerForm.errors.get('coupon') }}\nTax: {{ taxAmount(selectedPlan) | currency spark.currencySymbol }}\n\nTotal Price Including Tax: {{ priceWithTax(selectedPlan) | currency spark.currencySymbol }} / {{ selectedPlan.interval | capitalize }}" ]
[ null, "https://i0.wp.com/cms.jackwestin.com/wp-content/uploads/2020/03/500FB059-42C8-4B5D-848E-EC6575EDF55E-300x137.jpeg", null, "https://i2.wp.com/cms.jackwestin.com/wp-content/uploads/2020/03/22660E65-0AAD-45D5-9ED5-942EDE051660.jpeg", null, "https://i1.wp.com/cms.jackwestin.com/wp-content/uploads/2020/03/682FA9AB-A85B-46C5-8164-C5C2BEB85602.jpeg", null, "https://i0.wp.com/cms.jackwestin.com/wp-content/uploads/2020/03/34A04C25-FEC3-4361-A41F-C3D2CFF0E2B0.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8861713,"math_prob":0.97469443,"size":2226,"snap":"2022-40-2023-06","text_gpt3_token_len":514,"char_repetition_ratio":0.18676868,"word_repetition_ratio":0.19354838,"special_character_ratio":0.2228212,"punctuation_ratio":0.065022424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9887389,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T04:20:24Z\",\"WARC-Record-ID\":\"<urn:uuid:790dba38-50a2-4af9-8e2d-c13e1a2a266e>\",\"Content-Length\":\"117848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98d60d19-d0bf-41c9-8541-1a4fb3648288>\",\"WARC-Concurrent-To\":\"<urn:uuid:5472dd75-90ee-469f-8dc1-c3e31103882c>\",\"WARC-IP-Address\":\"165.22.180.248\",\"WARC-Target-URI\":\"https://jackwestin.com/resources/mcat-content/geometrical-optics/combination-of-lenses\",\"WARC-Payload-Digest\":\"sha1:PVX2FHPUUGV5I5AEXJI5HVH7A2C4EAL5\",\"WARC-Block-Digest\":\"sha1:Y2L6XDXROXDQBEZU4ANXVK32GIGP7YGI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337723.23_warc_CC-MAIN-20221006025949-20221006055949-00563.warc.gz\"}"}
https://doc.arcgis.com/en/cityengine/latest/cga/cga-split-area.htm
[ "# splitArea operation\n\n### Syntax\n\n• splitArea(axis) { area1 : operations1 | area2 : operations2 | ... | arean-1 : operationsn-1 }\n• splitArea(axis) { area1 : operations1 | area2 : operations2 | ... | arean-1 : operationsn-1 }*\n• splitArea(axis, adjustMode) { area1 : operations1 | ... | arean-1 : operationsn-1 }\n• splitArea(axis, adjustMode) { area1 : operations1 | ... | arean-1 : operationsn-1 }*\n\n### Parameters\n\n1. axisselector\n{ x | y | z }—Name of axis to split along. This is relative to the local coordinate system (i.e. the scope).\n{ adjust | noAdjust }—Optional selector to control scope calculation of the calculated shapes. The default is to adjust the scope to the geometry's bounding box. Using noAdjust avoids this, and therefore, the scopes of the resulting shapes fill the parent's scope without gaps.\n3. areafloat\nSplit area. The current shape is split such that the resulting shape has the specified area. Depending on the prefix, the area is interpreted in the following way:\n• no prefix(absolute) —The new shape will have exactly the same area.\n• ' (relative) —The new shape's size will be area * current geometry's area.\n• ~ (floating) —With the ~ prefix the remaining spaces between the split parts with absolute dimensions are automatically adapted. If multiple floating parts are defined within a split, the dimensions are weighed proportionally.\n4. operations\nA sequence of shape operations to execute on the newly created shape.\n5. *\nRepeat switch: the repeat switch triggers the repetition of the defined split into the current shape's scope, as many times as possible. The number of repetitions and floating dimensions are adapted to the best solution (best number of repetitions and least stretching).\n\n### Description\n\nThe splitArea operation subdivides the geometry of the current shape along the specified scope axis into a set of shapes with geometric areas specified by the area parameter. This operation works on planar 2D geometry and almost planar geometry. The area accuracy breaks down with nonplanar geometry. This operation does not apply to 3D geometry such as meshes or highly nonplanar geometry.\n\nFor each specified area, the current shape's geometry is cut with planes perpendicular to the split axis such that the resulting shape contains geometry with the specified area.\n\nThe optional repeat switch * can be used to repeat the content within a { ... }* block as many times as it fully fits into the scope's dimension along the selected axis.\n\n### Examples\n\n#### Colors\n\nThese are the rules for the colors used in the following examples.\n\n``````Red --> // red\ncolor(1, 0, 0)\nprint(\"area(R) = \" + geometry.area)\n\nYellow --> // yellow\ncolor(1, 1, 0)\nprint(\"area(Y) = \" + geometry.area)\n\nGreen --> // green\ncolor(0, 1, 0)\nprint(\"area(G) = \" + geometry.area)\n\nCyan --> // cyan\ncolor(0, 1, 1)\nprint(\"area(C) = \" + geometry.area)\n\nBlue --> // blue\ncolor(0, 0, 1)\nprint(\"area(B) = \" + geometry.area)\n\nMagenta --> // magenta\ncolor(1, 0, 1)\nprint(\"area(M) = \" + geometry.area)\n\nPink --> // semi-transparent pink\ncolor(1, 0.5, 0.5, 0.3)``````\n This shape is split into two parts. The first part is 70% of the original area, and the second part is 30% of the original area. Note that the specified area amounts do not need to sum to 1. For sums less than 1, the rest of the shape is discarded, and split sections greater than 1 are ignored. ``````Lot --> print(\"total area = \" + geometry.area) splitArea(x) { '0.7 : Green | '0.3 : Yellow }``````", null, "", null, "#### Repeating Split\n\n This split divides the shape into five equal area parts using a repeat split. It repeatedly splits the shape into parts with area equal to 20% of the original area. Note that if area were set to '0.3, we would get four parts with areas equal to 30%, 30%, 30%, and 10% of the original area.``````Lot --> print(\"total area = \" + geometry.area) splitArea(x) { '0.2 : ColorMe }* ColorMe --> case split.index == 0 : Red case split.index == 1 : Yellow case split.index == 2 : Green case split.index == 3 : Cyan else : Blue``````", null, "", null, "#### Split with mixture of floating and absolute areas\n\n This split divides the shape into a middle part (yellow) that has an absolute area of 600 and two side parts each with the same area. The yellow part consists of two pieces since there is a hole in the original shape, and the split cuts the shape along the hole boundary. ``````Lot --> print(\"total area = \" + geometry.area) splitArea(x) { ~1 : Green | 600 : Yellow | ~1 : Blue``````", null, "", null, "#### Nested Split\n\n This nested split divides the shape first in x to get three equal area parts. Then, each part is divided in z into two equal parts. This yields six equal area parts. ``````Lot --> print(\"total area = \" + geometry.area) splitArea(x) { ~1 : splitArea(z) { '0.5 : Red | '0.5 : Yellow } | ~1 : splitArea(z) { '0.5 : Green | '0.5 : Cyan } | ~1 : splitArea(z) { '0.5 : Blue | '0.5 : Magenta } }``````", null, "", null, "This is the original shape and its scope.", null, "Without specifying a value for adjustSelector, the default behavior is to adjust the scope size to the geometry of each split part. Each pink box shows the scope of each split part. The scope maintains the original scope's orientation but shrinks to the bounding box of the geometry. ``````SplitAdjust --> splitArea(x) { '0.2 : splitArea(z) { '0.2 : SplitLeaf }* }* SplitLeaf--> color(0.5, 0.5, 0.5) Geometry. primitiveQuad() t(0, 0.02, 0) Pink comp(e) { all : color(0, 0, 0) Edge. }``````", null, "With noAdjust, the union of the scopes of each split part make up the original scope. The scopes are not adjusted to the geometry inside. ``````SplitNoAdjust --> splitArea(x, noAdjust) { '0.2 : splitArea(z, noAdjust) { '0.2 : SplitLeaf }* }*``````", null, "This is the original shape and its scope. Here is an example of how the splitArea operation can be used to divide a street block into lots with equal areas.", null, "The block is divided recursively three times using splitArea. Each split divides the shape perpendicular to its longest axis into two parts of equal area. Each lot is colored by its area such that lots with the same area have the same color. Using splitArea, all the lots are yellow because the block is divided into equal area lots. ``````Lot --> SplitArea(3) SplitArea(n) --> case n == 0 : A case scope.sz > scope.sx : splitArea(z) { ~1 : SplitArea(n-1) | ~1 : SplitArea(n-1) } else : splitArea(x) { ~1 : SplitArea(n-1) | ~1 : SplitArea(n-1) } A --> color(getColor) print(\"area(A) = \" + geometry.area) // color by area // (so that lots with same area get same color) const minArea = 243 const maxArea = 1255 alpha = (geometry.area - minArea)/ (maxArea - minArea) getColor = colorRamp(\"spectrum\", alpha)``````", null, "", null, "In comparison, when using the split operation, the lots are different colors because the block is divided into lots with different areas. ``````Lot2 --> Split(3) Split(n) --> case n == 0 : B case scope.sz > scope.sx : split(z) { ~1 : Split(n-1) | ~1 : Split(n-1) } else : split(x) { ~1 : Split(n-1) | ~1 : Split(n-1) } B --> color(getColor) print(\"area(B) = \" + geometry.area)``````", null, "", null, "" ]
[ null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-3DDF45CE-DC38-4349-B747-408894CAE2BB-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-2E55CEBF-3807-4C4A-9F3D-451791336E82-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-4ABB074C-C073-4247-BBC6-F01902B4EE35-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-01833964-C0AE-4C76-85FB-EA6733819CCF-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-540270FC-E2ED-4C23-BE74-EF93474F53B8-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-3908EE6F-B756-49A2-A33B-F1D9B80C1C19-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-5F2BC8C9-D61D-49D1-BD61-51676B9EDFD6-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-62A33822-F8FB-4C7E-AA94-F0491C55607A-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-B21E3FFD-3254-4715-9394-459ADAE9D6A8-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-E286E608-4E3F-4B65-A80D-B154BF68C267-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-470AEE08-09A8-47E9-A039-56A422249A89-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-C266E898-BC45-4719-842E-4CB52996DDB4-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-53B11F51-391B-4DB1-AFE1-18C5E748B1C6-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-25B19529-A18B-4827-9132-FB13FEECF5AD-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-4C0B5295-792D-45AC-B78A-A87807344771-web.png", null, "https://doc.arcgis.com/en/cityengine/latest/cga/GUID-6B6083CB-8E43-4EFC-AAA1-B3680A365642-web.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87512296,"math_prob":0.98257256,"size":5271,"snap":"2022-27-2022-33","text_gpt3_token_len":1229,"char_repetition_ratio":0.16309094,"word_repetition_ratio":0.0794702,"special_character_ratio":0.24625309,"punctuation_ratio":0.12386707,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796271,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T12:52:55Z\",\"WARC-Record-ID\":\"<urn:uuid:836b7b85-5e80-466f-adb7-de09418ba664>\",\"Content-Length\":\"34842\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3861d368-eec4-4ab8-9e9d-fccd094d1b54>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b417f93-6aa5-4d15-8c38-82b791f47a25>\",\"WARC-IP-Address\":\"198.102.60.60\",\"WARC-Target-URI\":\"https://doc.arcgis.com/en/cityengine/latest/cga/cga-split-area.htm\",\"WARC-Payload-Digest\":\"sha1:CQD7FE4EQP7GUIKTJUPKI7WGBPM3TL4Z\",\"WARC-Block-Digest\":\"sha1:JBPAIDGGIBCOR3BK3CHZZSK6HK63ROLD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103821173.44_warc_CC-MAIN-20220630122857-20220630152857-00490.warc.gz\"}"}
https://community.powerbi.com/t5/Desktop/Calculate-Turn-Around-Time/td-p/562732/page/2
[ "cancel\nShowing results for\nDid you mean:\nNick_M", null, "New Contributor\n\n## Re: Calculate Turn Around Time\n\nEasiest thing to do is get a dedicated Date Table with a column that denotes workday or not.  Plus having a dedicated date table opens up all the time intelligence functions.\n\nPanda2018", null, "Member\n\n## Re: Calculate Turn Around Time\n\nCan you show me in an example hoew to?\n\nThanks, & how do I share an dataset?\n\n@Nick_M Thanks!\n\nNick_M", null, "New Contributor\n\n## Re: Calculate Turn Around Time\n\nSure thing. I will use your first table as the starting point.  You can use excel to build a date table and then import, or use the CALENDAR function in DAX to create one.  I went with the DAX.\n\nCalculations-->Modeling-->New Table.  Use the CALENDAR function.  Which needs a start and end date (which will cover in a sec).  Now, could use CALENDARAUTO which will find the start and end data automatically, but does so by searching the entire data model.  Would have worked fine in this small example, but in bigger data models that have date place holders as something like 1/1/9999, CALENDARAUTO will find that as the max.  So back to our CALENDAR function.  Need to find the Min and Max.  We wil do this by search each the Contact Date and Date writting columns and use the Min and Max of those two columns:\n\n```Date =\nCALENDAR(\nMIN (\nMIN (TurnAroundTIme[Contact Date]),\nMIN( TurnAroundTIme[Date Written])\n),\nMAX (\nMAX (TurnAroundTIme[Contact Date]),\nMAX( TurnAroundTIme[Date Written])\n)\n)```\n\nThen add a calculated column for Day Name and then one to label using that day name to denote Weekday or Weekend:\n\n```Day = FORMAT('Date'[Date],\"DDDD\")\n\nDay Type =\nSWITCH(\n'Date'[Day],\n\"Saturday\", \"Weekend\",\n\"Sunday\", \"Weekend\",\n\"Weekday\"\n)```\n\nHere's the end result:", null, "Then need to mark as  Date Table so DAX knows to use that for the built-in time intelligence functions:", null, "Now that we have a calendar that we can use, setting up the DAX formulas should be much easier.   There's actually two ways we can do this, one using FILTER and one using DATESBETWEEN.\n\n```TaT using Filter =\nCALCULATE(\nCOUNTROWS( 'Date' ),\nFILTER( ALL('Date'),\nMAX( TurnAroundTime[Contact Date]) <= 'Date'[Date]\n&& MAX( TurnAroundTime[Date Written]) >= 'Date'[Date]\n&& 'Date'[Day Type] =\"Weekday\"\n)\n)\n\nTaT using DatesBetween =\nCALCULATE(\nCOUNTROWS( 'Date' ),\nDATESBETWEEN('Date'[Date],\nMAX( TurnAroundTime[Contact Date]) ,\nMAX( TurnAroundTime[Date Written])\n),\n'Date'[Day Type] =\"Weekday\"\n)```\n\nDATESBETWEEN leverages the use of the time-intelligence, but requires some additional logic to not give a figure when there is no Contact Date or Date Written field:\n\n```TaT using DatesBetween =\nIF (\nNOT\nOR(\nISBLANK(MAX( TurnAroundTime[Contact Date])),\nISBLANK(MAX( TurnAroundTime[Date Written])\n)\n),\nCALCULATE(\nCOUNTROWS( 'Date' ),\nDATESBETWEEN('Date'[Date],\nMAX( TurnAroundTime[Contact Date]) ,\nMAX( TurnAroundTime[Date Written])\n),\n'Date'[Day Type] =\"Weekday\"\n)\n)```\n\nThen the final output:", null, "sailkitty", null, "Regular Visitor\n\n## Re: Calculate Turn Around Time\n\nFor the least amount of typing do this in the Power Query Editor.\n\nSelect the two Columns-\n\nGo tot he Add Column Ribon at the top and slecet the Date drop down in \"From Date & Time\"  and click \"Subtract Days\"\n\nHope that helps\n\nPanda2018", null, "Member\n\n## Re: Calculate Turn Around Time\n\nThank you @sailkitty\n\nIt works...But for same date it should say 1 day not 0\n\nAlso it shows numbers in negative like difference between 11/1/18 & 11/2/18 is showing -2.\n\nThanks,\n\npanda2018\n\nsailkitty", null, "Regular Visitor\n\n## Re: Calculate Turn Around Time\n\nPart of the benefit of doing it that way is it shows you negative digits where as the Measures will result in an error. This allows you to filter them out later (in  measures ) or correct your underlying data that has errors (for example when items are \"delivered\" before the order is place. This is a data error)\n\nEsp where a blank might default to 01/01/1900\n\nPanda2018", null, "Member\n\n## Re: Calculate Turn Around Time\n\nThank you so very much!!! @Nick_M\n\n-panda2018" ]
[ null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img09.png", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img06.png", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img09.png", null, "https://oxcrx34285.i.lithium.com/t5/image/serverpage/image-id/132302i27FB3A419668E848/image-size/large", null, "https://oxcrx34285.i.lithium.com/t5/image/serverpage/image-id/132303i9FD8A522D6FC50DE/image-size/large", null, "https://oxcrx34285.i.lithium.com/t5/image/serverpage/image-id/132314i9EBEDEA79FF2AC50/image-size/large", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img05.png", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img06.png", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img05.png", null, "https://oxcrx34285.i.lithium.com/html/rank_icons/rank_img06.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8069846,"math_prob":0.823645,"size":3486,"snap":"2019-26-2019-30","text_gpt3_token_len":909,"char_repetition_ratio":0.13268237,"word_repetition_ratio":0.05263158,"special_character_ratio":0.25215146,"punctuation_ratio":0.1007752,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97597814,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,4,null,4,null,4,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T22:32:51Z\",\"WARC-Record-ID\":\"<urn:uuid:29773429-31a9-40c1-a16c-f2cf00ff6c9b>\",\"Content-Length\":\"532282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:973a6ebf-3f4d-48f8-899e-9e8646144d92>\",\"WARC-Concurrent-To\":\"<urn:uuid:aded2fb8-88a4-4e37-a814-c74e6d629220>\",\"WARC-IP-Address\":\"208.74.205.170\",\"WARC-Target-URI\":\"https://community.powerbi.com/t5/Desktop/Calculate-Turn-Around-Time/td-p/562732/page/2\",\"WARC-Payload-Digest\":\"sha1:B7S2DOVE7USV3IGSJ2ZLTGB5CVKLHNCS\",\"WARC-Block-Digest\":\"sha1:MWYQHLDALOUOJTVPSV4ZZHD7JQWZL7FY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527204.71_warc_CC-MAIN-20190721205413-20190721231413-00303.warc.gz\"}"}
https://forum.worldviz.com/showpost.php?s=190fd8acd8c32d94bafb7fd176916490&p=17700&postcount=3
[ "Thread: Road Animation Path View Single Post\n#3\n vserchi", null, "Member Join Date: Sep 2013 Posts: 7\nSorry my bad. Here is the simplest example I could think about. Even if I am interested only in the motion of the quads, I added also balls to show that my elements superimpose. When I attach a texture to the quads and they superimpose, the resulting effect is a flashing of the overlaid part. Do you have any suggestion?\n\nCode:\n```import viz\nimport vizinfo\nimport vizact\n\nviz.setMultiSample(4)\nviz.fov(60)\nviz.go()\n\n#Add the ground plane\nground = viz.addChild('ground.osgb')\n\n#Move the viewpoint back\nviz.MainView.move([ 0, 0, -4])\n\n#Initialize an array of control points\npositions = [[ 2, 0, 0], [ 1, 0, 0], [ 0, 0, 0], [ -1, 0, 0], [ -2, 0, 0], [ -3, 0, 0], [ -4, 0, 0]]\n\n#Define a variable to store the animation paths relative to each element\nPATH = []\n\nfor i in range(0, len(positions)):\nviz.startlayer(viz.QUADS)\nviz.texCoord( 0, 1)\nviz.vertex( -0.5, .005, -0.5)\nviz.texCoord( 1, 1)\nviz.vertex( -0.5, .005, 0.5)\nviz.texCoord( 1, 0)\nviz.vertex( 0.5, .005, 0.5)\nviz.texCoord( 0, 0)\nviz.vertex(0.5, .005, -0.5)\n\nquad = viz.endLayer()\nball= viz.addChild(\"beachball.osgb\")\n\nexec('path' + str(i) + ' = viz.addAnimationPath()')\nPATH.append(eval('path' + str(i)))\n\nfor x, pos in enumerate(positions):\n#Add a ball at each control point and make it\n#semi-transparent, so the user can see where the\n#control points are\nb = viz.addChild('beachball.osgb',cache=viz.CACHE_CLONE)\nb.setPosition(pos)\nb.alpha(0.2)\n\nPATH[i].addControlPoint( x + 1, pos = pos)\n\n#Set the initial loop mode to circular\nPATH[i].setLoopMode(viz.ON)\n\n# Add a control point at the second control point of path-i\nPATH[i].addEventAtControlPoint(\"positions\", 1)\n\n# Link the ball and the quad to the path\nviz.link( PATH[i], ball)\nviz.link( PATH[i], quad)\n\nindex = 1\ndef onPointReached1():\n# Trigger the start of the following animation path\nglobal index\n\nif index < len(PATH):\nPATH[index].play()\n\nindex += 1\n\n# Setup callbacks for each animation path\nfor i in range( 0, len(PATH) - 1):\nvizact.onPathEvent( PATH[i], \"positions\", onPointReached1)\n\n#Play the first animation path\nPATH.play()\n\n#Setup path control panel\ncontrolPanel = vizinfo.InfoPanel(text = None, title = 'Settings', icon = False)\nslider_speed = controlPanel.addLabelItem('Speed', viz.addSlider())\nslider_speed.set(0.1)\n\ndef changeSpeed(pos):\n#Adjust the speed of the animation path\nfor i in range(0, len(PATH)):\nPATH[i].setSpeed( pos * 10 )\n\n#Setup callbacks for slider events\nvizact.onslider(slider_speed, changeSpeed)```" ]
[ null, "https://forum.worldviz.com/images/statusicon/user_offline.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5517108,"math_prob":0.9656101,"size":2424,"snap":"2021-04-2021-17","text_gpt3_token_len":756,"char_repetition_ratio":0.119008265,"word_repetition_ratio":0.0056980057,"special_character_ratio":0.3160066,"punctuation_ratio":0.22574627,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708049,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T00:06:00Z\",\"WARC-Record-ID\":\"<urn:uuid:8b40bad2-1362-44bb-9b8e-c579ce857dd7>\",\"Content-Length\":\"14866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97325fbd-f1fb-4aec-8ec6-d46865cc80d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:a9269112-6af8-486c-970f-3624b8f001f2>\",\"WARC-IP-Address\":\"34.239.162.54\",\"WARC-Target-URI\":\"https://forum.worldviz.com/showpost.php?s=190fd8acd8c32d94bafb7fd176916490&p=17700&postcount=3\",\"WARC-Payload-Digest\":\"sha1:OT5PWXQSBCKLPY5WNU7XAH7IVYCSE7EH\",\"WARC-Block-Digest\":\"sha1:EWRARILR73CPVWKDCWT4MVDUY6HP6JVQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038862159.64_warc_CC-MAIN-20210418224306-20210419014306-00157.warc.gz\"}"}
https://ifv.227.in.net/thrustmaster-t-flight-hotas-x.html
[ "an infinite straight wire carrying I in the z direction is: Using Ampere’s Law to solve problems If we have a symmetric problem, such that the direction of is known and is constant over a chosen path, then: Therefore: H = I / pathlength Example: Consider an infinite current sheet with current density . volume current density Jin the +xdirection. What is the magnetic field both inside and outside the slab? First, we need to find the direction of the field. From the Biot-Savart law, we know the field must be perpendicular to the current, so there can be no field in the xdirection. Now if we consider a thin line of current parallel\n\nCurrent Sheets A current sheet is electric current flowing along some 2D surface. Thanks to the Ampere Law, current sheets cause discontinuities of the magnetic field B(r). That is, as r approaches the same point on the current sheet from two different sides, the magnetic field B(r) has different limits." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87960404,"math_prob":0.9870348,"size":5882,"snap":"2020-10-2020-16","text_gpt3_token_len":1398,"char_repetition_ratio":0.15787683,"word_repetition_ratio":0.06986028,"special_character_ratio":0.22492349,"punctuation_ratio":0.103658535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99078274,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T00:08:54Z\",\"WARC-Record-ID\":\"<urn:uuid:54a32ae0-88f9-4485-90aa-998981b8068f>\",\"Content-Length\":\"14808\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0b06708-3283-4dd9-bf75-502506339e4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:60e4b5d5-8a86-4333-bf1d-669ec37847b2>\",\"WARC-IP-Address\":\"104.31.83.179\",\"WARC-Target-URI\":\"https://ifv.227.in.net/thrustmaster-t-flight-hotas-x.html\",\"WARC-Payload-Digest\":\"sha1:45NV3EFQIZ6DWZPDTHVHRWQNZCJMGZKW\",\"WARC-Block-Digest\":\"sha1:EASSPYN52IHSVFMOH7QH2E4EDPLSF6VI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146907.86_warc_CC-MAIN-20200227221724-20200228011724-00511.warc.gz\"}"}
https://www.klayout.de/forum/discussion/1789/is-it-possible-to-filter-an-instance-on-a-circular-shape
[ "Is it possible to filter an instance on a circular shape ?\n\nHello,\nI want to remove instances which do not overlap with a circular form using a Python script.\nFor sure, Klayout gives the possibility to check the overlap betwwen two boxes, but I don't see how to check the overlap between a box and a circular form.\n\nbox=....\nif not box.overlaps(db.Box(inst.bbox())):\ninst.delete()\n\nThank you.\n\nRegards,\n\n• @ahmedo\n\nThis will find all boxes outside of a layer/circle\nRun the script and you will see a circle with boxes inside,\nclick undo and you can see what has been removed\n\n# Enter your Python code here import pya import array import sys import math import time mw = pya.Application.instance().main_window() layout = pya.Application.instance().main_window().current_view().active_cellview().layout() topcell = pya.Application.instance().main_window().current_view().active_cellview().cell lv = pya.Application.instance().main_window().current_view() if layout == None: raise Exception(\"No layout\") cv = pya.Application.instance().main_window().current_view().active_cellview() dbu =layout.dbu cell = pya.Application.instance().main_window().current_view().active_cellview().cell if cell == None: raise Exception(\"No cell\") n = 2000 # number of points r = 2*2 # radius da = 2 * math.pi / n def coords(path): result = [] for p in path: result.append(DPoint.new(p/dbu,p/dbu)) return result def create_polygon(crd,lay): layer_info = LayerInfo.new(lay,0) layer_index = layout.layer(layer_info) cv.cell.shapes(layer_index).insert(Polygon.new(crd)) def create_box(crd,lay): layer_info = LayerInfo.new(lay,0) layer_index = layout.layer(layer_info) cv.cell.shapes(layer_index).insert(Box.new(crd,crd)) circle = [] for i in range(0, n): path = r * math.cos(i * da), r * math.sin(i * da) circle.append(path) lay1 = 9 lay2 = 5 pth = coords(circle) create_polygon(pth,lay1) box = [(-6.818,-1.185), (-5.195,3.591)] crd = coords(box) create_box(crd,lay2) box = [(5.742,-2.333), (7.674,2.771)] crd = coords(box) create_box(crd,lay2) box = [(-2.827,0.849), (-1.326,2.469)] crd = coords(box) create_box(crd,lay2) box = [(0.730,-2.708), (2.231,-1.088)] crd = coords(box) create_box(crd,lay2) def cut_region(lay1,lay2): lv.transaction(\"Undo\") lif1 = LayerInfo.new(lay1,0) li1 = layout.layer(lif1) lif2 = LayerInfo.new(lay2,0) li2 = layout.layer(lif2) r1 = pya.Region(topcell.begin_shapes_rec(li1)) r2 = pya.Region(topcell.begin_shapes_rec(li2)) #Boolean Functions #outside = r1 - r2 #outside = r2 - r1 #outside = r2 + r1 #outside = r1 + r2 #outside = r1 & r2 outside = r2 & r1 #outside = r2 ^ r1 #outside = r1 ^ r2 layout.clear_layer(li2) output = layout.layer(lif2) topcell.shapes(output).insert(outside) lay1 = 9 lay2 = 5 cut_region(lay1,lay2) lv.commit()\n• If I understand @ahmedo correctly he said not overlapping, so instead of\noutside = r2 & r1\nit should be\nnot_outside = r2.select_not_outside(r1)\nand then\ntopcell.shapes(output).insert(not_outside)\n\nr1 & r2 will cut parts of boxes that are outside of the circle.\n\n• @sebastian\n\nI may have misread .. but good point and thanks for highlighting this, we all learn\n\n• No worries ^^. I also think overlapping is not clearly defined in daily speech, so you may very well be correct. Regions were confusing when I started with KLayout, so I wanted to point out that there are fancy options in the regions included that might save a lot of time.\n\n• Thank you @tagger5896 and @sebastian for your valuable help.\n\nI will make a test with your code. I see that the code looks for the name of the layout at the beginning, so I guess I need to run the script using the command line:\nklayout -r <script.py> my.gds\n\nIs it right ?\n\nBy the way I have implemented a solution that allows me to check the overlap:\n\nfilter_poly = db.Polygon(circular_shape.polygon)\nfor inst in layout_insts:\n# Define the 4 points of the instance box\nllp = db.Point(inst.bbox().left,inst.bbox().bottom)\nlrp = db.Point(inst.bbox().right,inst.bbox().bottom)\ntlp = db.Point(inst.bbox().left,inst.bbox().top)\ntrp = db.Point(inst.bbox().right,inst.bbox().top)\n\nlet = 0\nif filter_poly.inside(llp) or filter_poly.inside(lrp) or filter_poly.inside(tlp) or filter_poly.inside(trp):\nlet = 1\n\n# if let = 0 , then remove instance.\nif let == 0 :\ninst.delete()\n\nRegards\n\n• For my question I think that I have undestood that @tagger5896 's code should be executed from macro menu", null, "I'm used to creating scripts that I execute always in batch mode.\n\nRegards\n\n• @ahmedo\nYou should have an open layout and run the code from the Macro IDE menu yes.\n\n• @tagger5896\n\nThank you so much for your help.\n\nRegards\n\n• @sebastian\nthanks for pointing this out ... floodgates open and the sea's part,\nand yes this makes it easier", null, "", null, "not_outside = r2.select_not_outside(r1)" ]
[ null, "https://www.klayout.de/forum/resources/emoji/smile.png", null, "https://www.klayout.de/forum/resources/emoji/smile.png", null, "https://www.klayout.de/forum/resources/emoji/wink.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9204679,"math_prob":0.92554414,"size":2218,"snap":"2022-05-2022-21","text_gpt3_token_len":545,"char_repetition_ratio":0.10117435,"word_repetition_ratio":0.0,"special_character_ratio":0.23038773,"punctuation_ratio":0.11633109,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98091406,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-16T21:36:09Z\",\"WARC-Record-ID\":\"<urn:uuid:8d18030d-f3ba-4ba2-a8b1-f882910a4d8f>\",\"Content-Length\":\"44672\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41515341-142a-4ef8-8336-b915a81816a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:35999956-9a45-4d8e-9095-c8f174e9be62>\",\"WARC-IP-Address\":\"185.21.102.151\",\"WARC-Target-URI\":\"https://www.klayout.de/forum/discussion/1789/is-it-possible-to-filter-an-instance-on-a-circular-shape\",\"WARC-Payload-Digest\":\"sha1:4HE5DZQ4NCDIFNXP6RBSC4XHGZ2GCRVL\",\"WARC-Block-Digest\":\"sha1:PL7EDB23YWFCREZFF3GABYTZGGXOWUGP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300244.42_warc_CC-MAIN-20220116210734-20220117000734-00210.warc.gz\"}"}
https://todayfreshers.com/simplification-questions-for-banking-exams/
[ "# Simplification Questions for Banking Exams\n\nSimplification generally means finding an answer for the complex calculation that may involve numbers on division,multiplication, square roots, etc. Simplification means finding the solution from the complex calculation. Simplification in Agile isn’t what makes organizations succeed. Simplification for All Banks Exams Bank, Po RBI, SSC-CGL, SSC-CHSL, etc.\n\n## Simplification Questions for Banking Exams\n\nA. 26\nB. 32\nC. 35\nD. 17\nE. 33\n\nOption C – 35\n\nA. 6312\nB. 6223\nC. 6479\nD. 6475\nE. 6421\n\nOption D – 6475\n\nA. 550\nB. 660\nC. 880\nD. 770\nE. None of these\n\nOption C – 880\n\nA. 62\nB. 55\nC. 48\nD. 52\nE.57\n\nOption C – 48\n\nA. 556\nB. 676\nC. 26\nD. 28\nE. None of these\n\nOption B – 676\n\nA. 620\nB. 520\nC. 550\nD. 480\nE. 460\n\nOption B – 520\n\n### Q7.23.56 + 4142.25 + 134.44 = ?What the Answers?\n\nA. 4010.05\nB. 4300.15\nC. 4100.25\nD. 4200.25\nE. None of these\n\nOption B – 4300.15\n\nA. 12.5\nB. 16.5\nC. 13.5\nD. 18.5\nE. 15.5\n\nOption B – 16.5\n\nA. 33.74\nB. 23.74\nC. 34.74\nD. 36.74\nE. None of these\n\nOption A – 33.74\n\nA. 36.15\nB. 32.15\nC. 31.05\nD. 32.05\nE. None of these\n\nOption C – 31.05\n\nA. 26\nB. 32\nC. 41\nD. 39\nE. 36\n\nOption D- 39\n\nA. 50.385\nB. 50.285\nC. 50.465\nD. 50.185\nE. 50.225\n\nOption B- 50.285\n\nA. 71747\nB. 71727\nC. 72747\nD. 73747\nE. None of these\n\nOption A- 71747\n\nA. 1461\nB. 1649\nC. 1361\nD. 1576\nE. None of these\n\nOption A- 1461\n\nA. 48861\nB. 48961\nC. 47861\nD. 46861\nE. 41661\n\nOption A- 48861\n\nA. 50\nB. 40\nC. 80\nD. 70\nE. 60\n\nOption A- 50\n\nA. 325\nB. 456\nC. 414\nD. 448\nE. 352\n\nOption C- 414\n\nA. 18.5\nB. 16.5\nC. 14.5\nD. 20.5\nE. None of these\n\nOption A- 18.5\n\nA. 226.87\nB. 234.87\nC. 244.87\nD. 224.87\nE. None of these\n\nOption A- 226.87\n\nA. 141\nB. 146\nC. 143\nD. 142\nE. None of these\n\nOption A- 141\n\nA. 240\nB. 320\nC. 400\nD. 300\nE. 330\n\nOption D- 300\n\nA. 627.8\nB. 627.5\nC. 637.8\nD. 637.7\nE. 636.8\n\nOption C- 637.8\n\nA. 91.875\nB. 92.174\nC. 97.275\nD. 93.565\nE. None of these\n\nOption A- 91.875\n\nA. 167\nB. 146\nC. 128\nD. 138\nE. None of these\n\nOption C- 128\n\nA. 262.5\nB. 260.5\nC. 261.5\nD. 263.5\nE. 264.5\n\nOption A- 262.5\n\nA. 139.80\nB. 157.60\nC. 157.80\nD. 170.70\nE. 176.60\n\nOption C- 157.80" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5138568,"math_prob":0.8688981,"size":3270,"snap":"2022-27-2022-33","text_gpt3_token_len":1675,"char_repetition_ratio":0.25903246,"word_repetition_ratio":0.07132667,"special_character_ratio":0.6382263,"punctuation_ratio":0.2977906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T09:24:17Z\",\"WARC-Record-ID\":\"<urn:uuid:50af0f40-f4d4-4319-85f8-54f392de7f00>\",\"Content-Length\":\"107838\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9dea7776-5e16-46f5-907c-db8f200d01d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:2bdcad07-7c27-4cb8-a498-8bc80f9c3c92>\",\"WARC-IP-Address\":\"104.21.6.144\",\"WARC-Target-URI\":\"https://todayfreshers.com/simplification-questions-for-banking-exams/\",\"WARC-Payload-Digest\":\"sha1:VGXSLC4UTGGXVBFXYMLHTPHQQ7NFYU4U\",\"WARC-Block-Digest\":\"sha1:MB7AD7WIICRTG56WP6PCXKCGG5YUGE4R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572286.44_warc_CC-MAIN-20220816090541-20220816120541-00277.warc.gz\"}"}
https://iq.opengenus.org/hamiltonian-cycle/
[ "# Hamiltonian Cycle\n\n#### algorithm graph algorithm hamiltonian cycle\n\nImagine yourself to be the Vasco-Da-Gama of the 21st Century who have come to India for the first time. Now you want to visit all the World Heritage Sites and return to the starting landmark but then there is a budget constraint and you don't want your crew to repeat site. Seems pretty tough for an young exlplorer,right ?\n\nThis situation can be imagined as a classical Graph Theory problem where each heritage site is a \"Vertex\" and the route from one site to the other is an \"Edge\".\n\nThis route map of visiting every node and returning to the starting node is what called as a \"Hamilton Cycle\"", null, "### Definition\n\nA Circuit in a graph G that passes through every vertex exactly once is called a \"Hamilton Cycle\".", null, "The route depicted starting from Taj Mahal and ending in there is an example of \"Hamilton Cycle\".\n\n### Algorithm\n\nData Structures used :\n\n• A two dimensional array for the Graph named graph[][]\n• A one dimensional array for storing the Hamilton Cycle named path[]\n\nStep 1 : Insert vertex 0 (i.e the starting vertex) to the path[] array.\nStep 2 : Check whether a vertex starting from (vertex 1) is adjacent to the previously added vertex and has been added.\nStep 3 : If we find such a vertex satisfying conditions of Step 2, then we add that vertex to path[]\nStep 4 : If we don't find such a vertex we return False\n\n### Example\n\nGiven a graph G, we need to find the Hamilton Cycle\n\nStep 1: Initialize the array with the starting vertex", null, "Step 2: Search for adjacent vertex of the topmost element (here it's adjacent element of A i.e B, C and D ). We start by choosing B and insert in the array.", null, "Step 3: The topmost element is now B which is the current vertex. We again search for the adjacent vertex (here C) since C has not been traversed we add in the list.", null, "Step 4: The current vertex is now C, we see the adjacent vertex from here. We get D and B, inserting D into the array as B has already been considered.", null, "Step 5: The current vertex is now D since it is a Hamilton Cycle, we need to get back to the starting vertex (which is A). Thus a Hamilton Cycle of the given graph is A -> B -> C -> D -> A .", null, "The Hamilton Path covering all the vertexes. Another Cycle can be A -> D -> C -> B -> A.\n\nIn another case, if we would have chosen C in Step 2, we would end up getting stuck. We would have to traverse a vertex more than once which is not the property of a Hamilton Cycle.", null, "### Code\n\n#include<iostream>\n#define NODE 4\nusing namespace std;\n\nint graph[NODE][NODE];\nint path[NODE];\n\nvoid displayCycle() { //Function to display the cycle obtained\ncout<<\"The Hamilton Cycle : \" << endl;\n\nfor (int i = 0; i < NODE; i++)\ncout << path[i] << \" \";\ncout << path << endl; //print the first vertex again\n}\n\nbool isValid(int v, int k) {\nif (graph [path[k-1]][v] == 0) //if there is no edge\nreturn false;\n\nfor (int i = 0; i < k; i++) //if vertex is already taken, skip that\nif (path[i] == v)\nreturn false;\nreturn true;\n}\n\nbool cycleFound(int k) {\nif (k == NODE) { //when all vertices are in the path\nif (graph[path[k-1]][ path ] == 1 )\nreturn true;\nelse\nreturn false;\n}\n\nfor (int v = 1; v < NODE; v++) { //for all vertices except starting point\nif (isValid(v,k)) { //if possible to add v in the path\npath[k] = v;\nif (cycleFound (k+1) == true)\nreturn true;\npath[k] = -1; //when k vertex will not in the solution\n}\n}\nreturn false;\n}\n\nbool hamiltonianCycle() {\nfor (int i = 0; i < NODE; i++)\npath[i] = -1;\npath = 0; //first vertex as 0\n\nif ( cycleFound(1) == false ) {\ncout << \"Solution does not exist\"<<endl;\nreturn false;\n}\n\ndisplayCycle();\nreturn true;\n}\n\nint main() {\nint i,j;\ncout << \"Enter the Graph : \" << endl;\nfor (i=0;i<NODE;i++){\nfor (j=0;j<NODE;j++){\ncout << \"Graph G(\" << (i+1) << \",\" << (j+1) << \") = \";\ncin >> graph[i][j];\n}\n}\ncout << endl;\ncout << \"The Graph :\" << endl;\nfor (i=0;i<NODE;i++){\nfor (j=0;j<NODE;j++){\ncout << graph [i][j] << \"\\t\";\n}\ncout << endl;\n}\n\ncout << endl;\n\nhamiltonianCycle();\n}\n\n\n### Input and Output", null, "The Graph given in the example section, we input that in the form of an Adjacency Matrix. The Hamilton Cycle thus obtained is the output.\n\nINPUT :\n\nGraph :-\n\n0 1 1 1\n1 0 1 0\n1 1 0 1\n1 0 1 0\n\nOUTPUT :\n\nThe Hamilton Cycle :\n\n0 -> 1 -> 2 -> 3 -> 0\n\n### Complexity\n\nThe Worst Case complexity when used with DFS and backtracking is O(N!).\n\n### Practical Applications\n\n• Hamilton's \"A Voyage Round The World Puzzle\"\n\nInspired from a game called the \"Isconian puzzle\", a dodecahedron containing 20 vertices which denotes a city and the edges denotes the routes (as shown in the Fig. (a)). The object of the puzzle was to start at a city and travel along the edges of the dedecahedron, visiting each of other 19 cities exactly once and end back at the first city.\n\nThe cycle or the circuit thus obtained is (Fig. (b)):", null, "• The Travelling Salesman Problem\n\nThis problem involves finding the shortest route a travelling salesman should take to visit a set of cities. This reduces to finding a \"Hamilton Cycle\" in a complete graph such that the total weight of its edges is as small as possible. This shortest can be found out by something called the Dijkastra Algorithm" ]
[ null, "https://iq.opengenus.org/content/images/2019/05/New-Doc-2019-05-12-18.50.53_1.jpg", null, "https://iq.opengenus.org/content/images/2019/05/New-Doc-2019-05-12-18.50.53_2.jpg", null, "https://iq.opengenus.org/content/images/2019/05/graph_1.png", null, "https://iq.opengenus.org/content/images/2019/05/graph_2.png", null, "https://iq.opengenus.org/content/images/2019/05/graph_3.png", null, "https://iq.opengenus.org/content/images/2019/05/graph_4.png", null, "https://iq.opengenus.org/content/images/2019/05/graph_5.png", null, "https://iq.opengenus.org/content/images/2019/05/ss_graph-1.png", null, "https://iq.opengenus.org/content/images/2019/05/graph_IO.png", null, "https://iq.opengenus.org/content/images/2019/05/Figure-Aa-Hamilton-cycle-in-a-Dodecahedron.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8612999,"math_prob":0.9437714,"size":5074,"snap":"2019-51-2020-05","text_gpt3_token_len":1353,"char_repetition_ratio":0.13234714,"word_repetition_ratio":0.031991743,"special_character_ratio":0.29582185,"punctuation_ratio":0.11078998,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99886537,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T05:48:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f0d88d24-56c1-4b7d-b245-21978191459d>\",\"Content-Length\":\"43693\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e99e294f-1e77-4fcc-855f-626e6461d1d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f2169cc-1e43-4dba-8254-a341fce50d13>\",\"WARC-IP-Address\":\"159.89.134.55\",\"WARC-Target-URI\":\"https://iq.opengenus.org/hamiltonian-cycle/\",\"WARC-Payload-Digest\":\"sha1:AFWJZPNGSNYZXFRPZOSYCP6UJU52HMLQ\",\"WARC-Block-Digest\":\"sha1:T34SHPOIRPW22C7SU63CSGKARQLPNX3Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540484815.34_warc_CC-MAIN-20191206050236-20191206074236-00087.warc.gz\"}"}
https://blog.ephorie.de/zeror-the-simplest-possible-classifier-or-why-high-accuracy-can-be-misleading
[ "# ZeroR: The Simplest Possible Classifier, or Why High Accuracy can be Misleading", null, "In one of my most popular posts So, what is AI really? I showed that Artificial Intelligence (AI) basically boils down to autonomously learned rules, i.e. conditional statements or simply, conditionals.\n\nIn this post, I create the simplest possible classifier, called ZeroR, to show that even this classifier can achieve surprisingly high values for accuracy (i.e. the ratio of correctly predicted instances)… and why this is not necessarily a good thing, so read on!\n\nIn the above-mentioned post, I gave an example of a classifier that was able to give you some guidance on whether a certain mushroom is edible or not. The basis for this was rules, which separated the examples based on the given attributes:\n\n```## Rules:\n## If odor = almond then type = edible\n## If odor = anise then type = edible\n## If odor = creosote then type = poisonous\n## If odor = fishy then type = poisonous\n## If odor = foul then type = poisonous\n## If odor = musty then type = poisonous\n## If odor = none then type = edible\n## If odor = pungent then type = poisonous\n## If odor = spicy then type = poisonous\n```\n\nObviously, the more rules the more complex a classifier is. In the example above we used the so-called OneR classifier which bases its decision on one attribute alone. Here, I will give you an even simpler classifier! The ZeroR classifier bases its decision on no attribute whatsoever… zero, zilch, zip, nada! How can this be? Easy: it just takes the majority class of the target attribute! I will give you an example.\n\nFirst, we build a function for the classifier by using the `OneR` package (on CRAN) and some S3-class magic:\n\n```library(OneR)\n\nZeroR <- function(x, ...) {\noutput <- OneR(cbind(dummy = TRUE, x[ncol(x)]), ...)\nclass(output) <- c(\"ZeroR\", \"OneR\")\noutput\n}\npredict.ZeroR <- function(object, newdata, ...) {\nclass(object) <- \"OneR\"\npredict(object, cbind(dummy = TRUE, newdata[ncol(newdata)]), ...)\n}\n```\n\nAs an example we take the well-known German Credit Dataset (originally from my old alma mater, the University of Hamburg) and divide it into a training and a test set:\n\n```data <- read.table(\"data/german.data\", header = FALSE)\ndata <- data.frame(data[ , 1:20], creditrisk = factor(data[ , 21]))\ntable(data\\$creditrisk)\n##\n## 1 2\n## 700 300\n\nset.seed(805)\nrandom <- sample(1:nrow(data), 0.6 * nrow(data))\ndata_train <- data[random, ]\ndata_test <- data[-random, ]\n```\n\nWe see that 700 customers have a good credit risk while 300 have a bad one. The ZeroR classifier now takes the majority class (good credit risk) and uses it as the prediction every time! You have read correctly, it just predicts that every customer is a good credit risk!\n\nSeems a little crazy, right? Well, it illustrates an important point: many of my students, as well as some of my consulting clients, often ask me what a good classifier is and how long it does take to build one. Many people in the area of data science (even some “experts”) will give you something like the following answer (source: A. Burkov):\n\nMachine learning accuracy rule:\n0-80%: one day\n80-90%: one week\n90-95%: one month\n95-97%: three months\n97-99%: one year (or never)\n\nWell, to be honest with you: this is not a very good answer. Why? Because it very much depends on… the share of the majority class! To understand that, let us have a look at how the ZeroR classifier performs on our dataset:\n\n```model <- ZeroR(data_train)\nsummary(model)\n##\n## Call:\n## OneR.data.frame(x = cbind(dummy = TRUE, x[ncol(x)]))\n##\n## Rules:\n## If dummy = TRUE then creditrisk = 1\n##\n## Accuracy:\n## 481 of 700 instances classified correctly (68.71%)\n##\n## Contingency table:\n## dummy\n## creditrisk TRUE Sum\n## 1 * 481 481\n## 2 219 219\n## Sum 700 700\n## ---\n## Maximum in each column: '*'\n##\n## Pearson's Chi-squared test:\n## X-squared = 98.063, df = 1, p-value < 2.2e-16\n\nplot(model)\n```", null, "```prediction <- predict(model, data_test)\neval_model(prediction, data_test)\n##\n## Confusion matrix (absolute):\n## Actual\n## Prediction 1 2 Sum\n## 1 219 81 300\n## 2 0 0 0\n## Sum 219 81 300\n##\n## Confusion matrix (relative):\n## Actual\n## Prediction 1 2 Sum\n## 1 0.73 0.27 1.00\n## 2 0.00 0.00 0.00\n## Sum 0.73 0.27 1.00\n##\n## Accuracy:\n## 0.73 (219/300)\n##\n## Error rate:\n## 0.27 (81/300)\n##\n## Error rate reduction (vs. base rate):\n## 0 (p-value = 0.5299)\n```\n\nSo, because 70% of the customers are good risks we get an accuracy of about 70%! You can take this example to extremes: for example, if you have a dataset with credit card transactions where 0.1% of the transactions are fraudulent (which is about the actual number) you will get an accuracy of 99.9% just by using the ZeroR classifier! Concretely, just by saying that no fraud exists (!) you get an accuracy even beyond the “one year (or never)” bracket (according to the above scheme)!\n\nAnother example even concerns life and death: the probability of dying within one year lies at about 0.8% (averaged over all the people worldwide, according to “The World Factbook” by the CIA). So by declaring that we are all immortal, we are in more than 99% of all cases right! Many medical studies have a much higher error rate… 😉\n\nNow, let us try the OneR classifier on our credit dataset:\n\n```model <- OneR(optbin(data_train))\nsummary(model)\n##\n## Call:\n## OneR.data.frame(x = optbin(data_train))\n##\n## Rules:\n## If V3 = A30 then creditrisk = 2\n## If V3 = A31 then creditrisk = 2\n## If V3 = A32 then creditrisk = 1\n## If V3 = A33 then creditrisk = 1\n## If V3 = A34 then creditrisk = 1\n##\n## Accuracy:\n## 492 of 700 instances classified correctly (70.29%)\n##\n## Contingency table:\n## V3\n## creditrisk A30 A31 A32 A33 A34 Sum\n## 1 10 14 * 247 * 37 * 173 481\n## 2 * 16 * 19 124 21 39 219\n## Sum 26 33 371 58 212 700\n## ---\n## Maximum in each column: '*'\n##\n## Pearson's Chi-squared test:\n## X-squared = 39.504, df = 4, p-value = 5.48e-08\n\nplot(model)\n```", null, "```# Attribute 3: (qualitative)\n# Credit history\n# A30 : no credits taken/\n# all credits paid back duly\n# A31 : all credits at this bank paid back duly\n# A32 : existing credits paid back duly till now\n# A33 : delay in paying off in the past\n# A34 : critical account/\n# other credits existing (not at this bank)\n\nprediction <- predict(model, data_test)\neval_model(prediction, data_test)\n##\n## Confusion matrix (absolute):\n## Actual\n## Prediction 1 2 Sum\n## 1 207 63 270\n## 2 12 18 30\n## Sum 219 81 300\n##\n## Confusion matrix (relative):\n## Actual\n## Prediction 1 2 Sum\n## 1 0.69 0.21 0.90\n## 2 0.04 0.06 0.10\n## Sum 0.73 0.27 1.00\n##\n## Accuracy:\n## 0.75 (225/300)\n##\n## Error rate:\n## 0.25 (75/300)\n##\n## Error rate reduction (vs. base rate):\n## 0.0741 (p-value = 0.2388)\n```\n\nHere, we see that we get an out-of-sample accuracy of 75%, which is more than 7 percentage points better than what we got with the ZeroR classifier, here called base rate. Yet, this is not statistically significant (for an introduction to statistical significance see From Coin Tosses to p-Hacking: Make Statistics Significant Again!).\n\nBecause the concept of “error rate reduction” compared to ZeroR (= base rate) and its statistical significance is so relevant it is displayed by default in the `eval_model()` function of the `OneR` package.\n\nTo end this post, we build a random forest model with the `randomForest` package (on CRAN) on the dataset (for some more information on random forests see Learning Data Science: Predicting Income Brackets):\n\n```set.seed(78)\nlibrary(randomForest)\n## randomForest 4.6-14\n## Type rfNews() to see new features/changes/bug fixes.\n\nmodel <- randomForest(creditrisk ~., data = data_train, ntree = 2000)\nprediction <- predict(model, data_test)\neval_model(prediction, data_test)\n##\n## Confusion matrix (absolute):\n## Actual\n## Prediction 1 2 Sum\n## 1 209 43 252\n## 2 10 38 48\n## Sum 219 81 300\n##\n## Confusion matrix (relative):\n## Actual\n## Prediction 1 2 Sum\n## 1 0.70 0.14 0.84\n## 2 0.03 0.13 0.16\n## Sum 0.73 0.27 1.00\n##\n## Accuracy:\n## 0.8233 (247/300)\n##\n## Error rate:\n## 0.1767 (53/300)\n##\n## Error rate reduction (vs. base rate):\n## 0.3457 (p-value = 9.895e-05)\n```\n\nThe out-of-sample accuracy is over 80% here and the error rate reduction (compared to ZeroR) of about one third is statistically significant. Yet 80% is still not that impressive when you keep in mind that 70% is the base rate!\n\nYou should now be able to spot why this is one of the worst scientific papers I have ever seen: Applications of rule based Classification Techniques for Thoracic Surgery (2015). This also shows one of the more general problems: although this is a medical topic not many medical professionals would be able to spot the elephant in the room here… this will be true for most other areas too, where machine learning will be used ever more frequently. (Just as an aside: this type of blunder wouldn’t have happened had the authors used the `OneR` package: One Rule (OneR) Machine Learning Classification in under One Minute.)\n\nAs you can imagine, there are many strategies to deal with the above challenges of imbalanced/unbalanced data, e.g. other model metrics (like recall or precision) and other sampling strategies (like undersampling the majority class or oversampling the minority class)… but that are topics for another post, so stay tuned!\n\n## 4 thoughts on “ZeroR: The Simplest Possible Classifier, or Why High Accuracy can be Misleading”\n\n1.", null, "Memona Khan says:\n\nHi. I need your help with this new algorithm that you proposed. I have very short time please get back to me today.\nAlso, is there a research paper on Zeror classifier? If so, please provide link.\nPlease get back to me soon.\n\n1.", null, "Learning Machines says:\n\nWhat exactly do you need? No, because it is so basic I don’t know any research paper on the ZeroR classifier?\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "http://blog.ephorie.de/wp-content/uploads/2020/01/discount-1015451_1280-e1568068884649-300x277.jpg", null, "http://blog.ephorie.de/wp-content/uploads/2020/01/zeroR-840x600.png", null, "http://blog.ephorie.de/wp-content/uploads/2020/01/zeroR2-840x600.png", null, "https://secure.gravatar.com/avatar/3a76753de329501f70a16789a79c7768", null, "https://secure.gravatar.com/avatar/55df985dc2ba7df50e931ca237620607", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7774483,"math_prob":0.82338285,"size":9531,"snap":"2022-27-2022-33","text_gpt3_token_len":2559,"char_repetition_ratio":0.11556628,"word_repetition_ratio":0.13198572,"special_character_ratio":0.32441506,"punctuation_ratio":0.1389485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9673922,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,5,null,5,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T04:52:56Z\",\"WARC-Record-ID\":\"<urn:uuid:ee73d0bb-3bd8-4e6b-a330-b59450d09add>\",\"Content-Length\":\"79676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39e8a53a-3491-4b1f-9452-88ef2da34ce3>\",\"WARC-Concurrent-To\":\"<urn:uuid:11aaedf7-5578-458b-9be5-a77d1e540290>\",\"WARC-IP-Address\":\"217.160.0.192\",\"WARC-Target-URI\":\"https://blog.ephorie.de/zeror-the-simplest-possible-classifier-or-why-high-accuracy-can-be-misleading\",\"WARC-Payload-Digest\":\"sha1:SJUDCQEHT3ZUYT2KKMGCGYOPZHYS4PN6\",\"WARC-Block-Digest\":\"sha1:MN5DSRSGKZRZ6ISJPHGBM5RIMHNW4QHA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573163.7_warc_CC-MAIN-20220818033705-20220818063705-00783.warc.gz\"}"}
https://programmersheaven.com/discussion/comment/427110/
[ "# Simple C++ program to convert inches and feet to centimeters\n\nHey guys, I am very new to c++, and im trying to learn and create basics codes.\n\nCould you guys please tell me what i am doing wrong here.\n\nI want to maintain my current coding format.\n\nThanks\n\n#include\n#include\n#include\n\nusing namespace std;\n\nint main()\n{\nconst double feet_to_inch = 12; // conversion from feet to inches\nconst double inch_to_centi = 2.54; // conversion from inches to centimenters\n\nint feet ; 0\ndouble inches ; = 0.0, total_inches = 0.0 ;\ndouble centimeters ; 0.0\n\n// Conversion Program to centimeters\ncout << \"hello world\";\n\n// Entered value by user in feet\ncout << \"\nEnter in feet\" ;\ncin >> feet ;\n\n// Entered value by user in inches\ncout << \"\nEnter in inches\" ;\ncin >> inches ;\n\n//Calculate number of inches\ntotal_inches = inches + feet * feet_to_inch ;\n\n// Convert to centimeters\ncentimeters = total_inches * inch_to_centi ;\n\n// Output results\ncout << \"/n The Result is :\" << centimeters;\n\nchar c;\ncout << \"/n Press anything to Exit\" ;\ncin >> c;\n\nreturn 0;\n\n}[hr]\n\nThis is just confusing as to how you could get some of it right and then in the following line do it all wrong.\n\nCorrect:\n[code]\nconst double feet_to_inch = 12;\nconst double inch_to_centi = 2.54;\n[/code]\n\nIncorrect:\n[code]\nint feet ; 0\ndouble inches ; = 0.0, total_inches = 0.0;\ndouble centimeters ; 0.0\n[/code]\n\nFix:\n[code]\nint feet = 0;\ndouble inches = 0.0, total_inches = 0.0 ;\ndouble centimeters = 0.0;\n[/code]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70665103,"math_prob":0.944927,"size":1391,"snap":"2019-26-2019-30","text_gpt3_token_len":388,"char_repetition_ratio":0.16366258,"word_repetition_ratio":0.12648222,"special_character_ratio":0.32063264,"punctuation_ratio":0.19402985,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98339576,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T01:01:46Z\",\"WARC-Record-ID\":\"<urn:uuid:1632e467-365a-4233-aedd-8855044a9eef>\",\"Content-Length\":\"57594\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a90886e0-9a8e-4632-9bf9-0386f9ca6824>\",\"WARC-Concurrent-To\":\"<urn:uuid:e57114b8-0498-4773-aebf-277878f2f872>\",\"WARC-IP-Address\":\"18.213.177.41\",\"WARC-Target-URI\":\"https://programmersheaven.com/discussion/comment/427110/\",\"WARC-Payload-Digest\":\"sha1:3GRFTVPKC75ZKKLJXCVZ3E7EEDZHUZH5\",\"WARC-Block-Digest\":\"sha1:CQBGLKA6SIHCS25APBFFR6PY5VSP2LVS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525004.24_warc_CC-MAIN-20190717001433-20190717023433-00389.warc.gz\"}"}
https://scholars.duke.edu/display/pub795338
[ "Approximate complex polynomial evaluation in near constant work per point\n\nPublished\n\nJournal Article\n\nThe n complex coefficients of a degree n-1 complex polynomial are given and this polynomial is evaluated at a large number m≥n of points on the complex plane. This problem is required by many algebraic computations and so is considered in most basic algorithm texts. An arithmetic model of computation is assumed. Approximation algorithms for complex polynomial evaluation that cost, in many cases, near constant amortized work per point are presented.\n\n• Reif, JH\n\nPublished Date\n\n• January 1, 1999\n\n• 28 / 6\n\n• 2059 - 2089\n\n• 0097-5397\n\nDigital Object Identifier (DOI)\n\n• 10.1137/S0097539797324291\n\nCitation Source\n\n• Scopus", null, "" ]
[ null, "https://scholars.duke.edu/themes/duke/images/duke-footer-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85047436,"math_prob":0.8855518,"size":842,"snap":"2019-43-2019-47","text_gpt3_token_len":200,"char_repetition_ratio":0.12529834,"word_repetition_ratio":0.0,"special_character_ratio":0.24584323,"punctuation_ratio":0.066176474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844395,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T13:25:18Z\",\"WARC-Record-ID\":\"<urn:uuid:1537dd85-67c6-46ed-a463-e47de7062f45>\",\"Content-Length\":\"10841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cd70910-34a6-4d7b-b47b-29b8b202503e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6692019-388e-4ed1-b7bd-679a40d665f9>\",\"WARC-IP-Address\":\"152.3.72.205\",\"WARC-Target-URI\":\"https://scholars.duke.edu/display/pub795338\",\"WARC-Payload-Digest\":\"sha1:ZOWUQ5PN7UEPMCX7V5CLYJYWI3JEA3GL\",\"WARC-Block-Digest\":\"sha1:GSYTSTFTUTCWIPGZKGWULIQEB2JMDEDU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986682998.59_warc_CC-MAIN-20191018131050-20191018154550-00490.warc.gz\"}"}
https://git.fmrib.ox.ac.uk/yqzheng1/archive/-/commit/83b010e6aaafce17282351d440b60ea78c52b560
[ "### Update 2021JUL21.md\n\nparent 472e5b46\n ... ... @@ -17,9 +17,7 @@ The marginal distribution of $\\mathbf{x}_{n}^{L}, \\mathbf{x}_{n}^{H}$ is In summary, in addition to finding the the hyper-parameters $\\pi, \\mu, \\Sigma_{k}^{H}, \\Sigma^{L}_{k}$, we want to estimate a transformation matrix $\\mathbf{U}$ such that $\\mathbf{UX}^{H}$ is as close to $\\mathbf{X}^{L}$ as possible (or vice versa). ### Pseudo code Algorithm 1. EM for the Fusion of GMMs --- ### Pseudo code - Algorithm 1. EM for the Fusion of GMMs 1. Run K-means clustering on the high-quality data to generate the assignment of the voxels $R^{(0)}$. 2. Initialise the means $\\mu_{k}$, covariances $\\Sigma_{k}$, and mixing coefficients $\\pi_k$ using the K-means assignment $R^{(0)}$, and evaluate the initial likelihood. 3. Initialise the transformation matrix $\\mathbf{U}$ using Algorithm 3. ... ...\nMarkdown is supported\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5742386,"math_prob":0.99935776,"size":939,"snap":"2022-05-2022-21","text_gpt3_token_len":306,"char_repetition_ratio":0.118716575,"word_repetition_ratio":0.046875,"special_character_ratio":0.3631523,"punctuation_ratio":0.17837837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999484,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T05:11:06Z\",\"WARC-Record-ID\":\"<urn:uuid:6e16549a-1c52-4252-a385-e3e330dd26fd>\",\"Content-Length\":\"115052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bea265e-022f-4ae3-8084-c902a2beca29>\",\"WARC-Concurrent-To\":\"<urn:uuid:85eaa774-8849-463e-baab-cd254b4ad8fb>\",\"WARC-IP-Address\":\"129.67.248.84\",\"WARC-Target-URI\":\"https://git.fmrib.ox.ac.uk/yqzheng1/archive/-/commit/83b010e6aaafce17282351d440b60ea78c52b560\",\"WARC-Payload-Digest\":\"sha1:BZK6SE3D5IGK6J5Z4L3CDMFAPPC4XPSP\",\"WARC-Block-Digest\":\"sha1:ENRRT2WFAP253QMOUZU3M7DDTUJFFNA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662564830.55_warc_CC-MAIN-20220524045003-20220524075003-00792.warc.gz\"}"}
http://mathonline.wikidot.com/locally-convex-topological-vector-spaces-lctvs
[ "Locally Convex Topological Vector Spaces (LCTVS)\n\n# Locally Convex Topological Vector Spaces (LCTVS)\n\nOn the Topological Vector Spaces (TVS) page we said that a linear space $X$ with a Hausdorff topology is said to be a topological vector space if $+$ and $\\cdot$ are both continuous with respect to the topology.\n\nOn the Convex Subsets of Vector Spaces page that a subset $K$ of a vector space $X$ is said to be convex if for every pair of points $x, y \\in K$ and for every $0 \\leq t \\leq 1$ we have that:\n\n(1)\n\\begin{align} \\quad tx + (1 - t)y \\in K \\end{align}\n\nFurthermore, we said that elements of the form above are called convex combinations of $x$ and $y$.\n\nWe are now ready to define a locally convex topological vector space.\n\n Definition: A normed linear space $X$ over $\\mathbb{R}$ (or $\\mathbb{C}$) is said to be a Locally Convex Topological Vector Space (abbreviated LCTVS) if $X$ is also equipped with a Hausdorff topology $\\tau$ and such that: 1) The operation of addition $+ : X \\times X \\to X$ defined by $(x, y) \\to x + y$ is continuous on $X \\times X$. 2) The operator of scalar multiplication $\\cdot : \\mathbb{R} \\times X \\to X$ defined by $(\\lambda, x) \\to \\lambda x$ is continuous on $\\mathbb{R} \\times X$. 3) There is a local base at the origin consisting of only convex sets.\n\nRecall that in a topological vector space, a local base at the origin completely determines the topology on $X$ since we can translate a local base at $0$ to any point $x \\in X$.\n\nThe simplest examples of locally convex topological vector spaces are normed linear spaces with the norm topology. We noted that these spaces are already topological vector spaces and we have already seen that open balls are convex sets." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87572306,"math_prob":0.9997429,"size":1626,"snap":"2023-40-2023-50","text_gpt3_token_len":433,"char_repetition_ratio":0.15166461,"word_repetition_ratio":0.034602076,"special_character_ratio":0.25522757,"punctuation_ratio":0.06109325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999323,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T13:49:44Z\",\"WARC-Record-ID\":\"<urn:uuid:9254344e-d204-4343-8bb0-4a4839bfcaec>\",\"Content-Length\":\"15697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7e62b2f-f74b-4d63-99a2-a7e472a7e9e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:41d553e4-950c-44a9-ad7e-6f0132b404ec>\",\"WARC-IP-Address\":\"107.20.139.170\",\"WARC-Target-URI\":\"http://mathonline.wikidot.com/locally-convex-topological-vector-spaces-lctvs\",\"WARC-Payload-Digest\":\"sha1:M3LDKN262DYFFAHNX4XN7Y4LSSFCCXRC\",\"WARC-Block-Digest\":\"sha1:6MN73VTPN733C52LUMZ7C22QOA5YPO5K\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510208.72_warc_CC-MAIN-20230926111439-20230926141439-00302.warc.gz\"}"}
https://books.google.gr/books?id=3R03AAAAMAAJ&amp;hl=el&amp;lr=
[ "# The Improved Arithmetic: Newly Arranged and Clearly Illustrated, Both Theoretically and Practically to Meet the Exigencies of the Student in the Acquisition of the Nature and Science of Numbers, and Also, to Aid the Accountant in All Arithmetical Computations, Relative to Business Transactions : Designed for the Use of Academies, Schools, and Counting-houses\n\nJ. & J. Harper, 1828 - 348 σελίδες\n\n### Τι λένε οι χρήστες -Σύνταξη κριτικής\n\nΔεν εντοπίσαμε κριτικές στις συνήθεις τοποθεσίες.\n\n### Περιεχόμενα\n\n of England Scotland and Ireland 100 0 of America 6 Tables of Troy Weight Avoirdupois Weight Cloth Measure 8 Notation and Numeration of Whole Numbers 16 Increase viz Addition and Multiplication 28 Decrease viz Subtraction and Division 62 Leipsic 66 Fractions Generally 92\n Decimal Fractions 120 Reduction of Currencies to Federal Money 139 Remarks 154 Single Rule of Three 166 Practice 184 Tare and Trett 191 e\n\n### Δημοφιλή αποσπάσματα\n\nΣελίδα 118 - ... from the right hand of the quotient, point off so many places for decimals, as the decimal places in the dividend exceed those in the divisor.\nΣελίδα 286 - RULE. 1. Separate the given number into periods of three figures each, by putting a point over the unit figure, and every third figure from the place of units to the left, and if there be de- . cimals, to the right.\nΣελίδα 243 - ... compute the interest on the principal sum due on the obligation for one year, add it to the principal, and compute the interest on the sum paid, from the time it was paid, up to the end of the year : add it to the sum paid, and deduct that sum from the principal and interest added as above...\nΣελίδα 243 - But if there be several payments made within the said time, find the amount of the several payments, from the time they were paid, to the time of settlement, and deduct their amount from the amount or the nrincina.\nΣελίδα 241 - COMPUTE the interest on the principal sum, from the time when the interest commenced to the first time when a payment was made, which exceeds either alone or in conjunction with the preceding payments (if any) the interest at that time due: add that interest to the principal, and from the sum subtract the payment made at that time, together with the preceding payments (if any) and the remainder forms a new principal ; on which, compute and subtract the interest, as upon the first principal: and proceed...\nΣελίδα 265 - IS the method of finding what quantity of e'ach of the ingredients, whose rates are given, will compose a mixture of a given rate; so that it is the reverse of alligation medial, and may be proved by it. CASE. I.\nΣελίδα 89 - Multiply all the numerators together for a new numerator, and all the denominators for a new denominator; and they will form the fraction required. . EXAMPLES. 1. Reduce £ of | of £ of -fa to a simple fraction.\nΣελίδα 51 - DIVISION teaches to find how many times one whole number is contained in another ; and also what remains ; and is a concise way of performing several subtractions. Four principal parts are to be noticed in Division : 1. The Dividend, or number given to be divided. 2. The Divisor, or number given to divide by. 3. The Quotient, or answer to the question, which shows how many times the divisor is contained in the dividend. 4. The Remainder, which is always less than the divisor, and of the same name...\nΣελίδα 292 - ... terms, RULE. Multiply the sum of the extremes by the number of terms, and half the product will be the sum of the terms. EXAMPLES FOR PRACTICE. 2. If the extremes be 5 and 605, and the number of terms 151, what is the sum of the series?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8124871,"math_prob":0.9087568,"size":4438,"snap":"2019-35-2019-39","text_gpt3_token_len":1222,"char_repetition_ratio":0.12990528,"word_repetition_ratio":0.024,"special_character_ratio":0.21969356,"punctuation_ratio":0.12783751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9827293,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T03:33:39Z\",\"WARC-Record-ID\":\"<urn:uuid:f710c07f-6d5a-43c2-86e7-9d3e00f9f361>\",\"Content-Length\":\"53428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:340042a7-92c7-43bb-82e8-ac2fd2376703>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b16eb2d-1c9f-4c00-a6fd-d8ab9cb700cc>\",\"WARC-IP-Address\":\"172.217.7.238\",\"WARC-Target-URI\":\"https://books.google.gr/books?id=3R03AAAAMAAJ&amp;hl=el&amp;lr=\",\"WARC-Payload-Digest\":\"sha1:SRCKZUXE2DFAFDFIMABXZFXAKTPMPEV6\",\"WARC-Block-Digest\":\"sha1:PYF5Z366YFS7FPJCROAKFNODCLJXPJQX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313589.19_warc_CC-MAIN-20190818022816-20190818044816-00054.warc.gz\"}"}
https://www.mrexcel.com/board/tags/b10/
[ "# b10\n\n1. ### skip the blank cells\n\nHello Excel experts, I use a formula in Sheet2 from B10 to B100 cells. =IF(DATEDIF(Sheet1!C10,Sheet1!\\$F\\$9,\"Y\")>=59,Sheet1!B10,\"\") i want to skip the blank cells in Sheet. Sheet2 protected only B10 to B100 cells are not protected. Thanks in advance.\n2. ### VBA to convert tabs to PDF if selected from cells\n\nNeed to macro to convert selected tabs to PDF Say in A1 to A10 is the tab name and the in B1 to B10 is true or false If it is true in B1 to b10 then it will convert them to PDF\n3. ### Unique List Issue\n\nAfternoon, I am using the following formula to create a unique list, typically I wont have a fixed reference point at the point B10 is reference and the formula works fine in these instances. However in this case B10 will be come B11 and so forth, the issue I'm having is the formula is creating...\n4. ### VBA to count number of days between 2 dates\n\nHello, I use this code to have the date and time populate in column c, starting in C13. I would like it to also count the number of days between this date, and a date in cell B10, in cell D13. B10 would stay the same, Column C would be different dates. Not sure if this code could be...\n5. ### Extracting Data from list\n\nHi all, I am trying to extract data from a list using formula but cannot seem to get it right B10 has a name cell with formula B13:B103 I am looking to extract data from M13:M103 when the list matches B10 in Q13:Q103 Then drag down to extract every entry etc Any help much appreciated\n6. ### Speed up code\n\nIs there anyway to speed up how Im doing this? I have more code like this but only posted a portion of it. Takes about 5 seconds to run the macro, would really like to lower that time. ActiveCell.Offset(rowOffset:=0, columnOffset:=1).Activate ActiveCell.Value = B1...\n7. ### VBA to pull text out of cell\n\nHi - Is there a way to say look in A10 and if the word \"March\" appears, put \"March\" in B10? Thanks!\n\nI thought this was straightforward but it does not seem to be. I have a two spreadsheets and in one (call it S1) I get a calculated value in column C. In the other (call it S2), I want to link to the values in column C. If, in S2 B10, there is a reference =S1!C10, and I sort S1, how do I...\n9. ### Calculation to have cell turn red or yellow\n\nIn cells A10 and C10, I have two different calculations with a number total for each cell. In B10, I'd like to have a calculation that looks at A10 and C10 . . . if those two numbers are the same or within 1 of each other, B10 is bright yellow. If A10 and C10 are different by more than 1, B10...\n10. ### Formula Help\n\nHi All - I'm looking for help with the following scenario: Let's A1 has a numerical value, if B1 is blank, then C1 should be 3% of A1. And, if B1 is not blank (some numerical value), then D1 should be 25% of A1. If B1 is blank, then C1=A1*3% Conversely, if B1>0, then D1=B1*25% Can anyone...\n11. ### Excel Macro/VBA, VLOOKUP setting\n\nHi All, Can someone help me please. Below is what i'm trying to do. 'When user inputs value into B10 on the \"Calc\" sheet, the system will find the same value (or similar value If it doesn't exist) from Column B on the \"Data\" sheet. Then the data of Column C will be displayed into B11 on the...\n12. ### Help with Dates in Formula\n\nHelp please. I need a formula that will add 15% to a date. If the resulting date is a Saturday, the date needs to be moved back to Friday. If the resulting date is a Sunday, the date needs to move up to Monday. Here's what I'm using now: - B5 is a start date - B6 is an end date - B7 calculates...\n13. ### Clear Calendar contents with new drop down selection from list\n\nHey all, Hoping I can get some help here. I created a generic calendar with year (c3) and month input (c4). In each row and column there is another drop down for manger (starting with b10) and to the right (in c10) is a list dependent on b10. This goes all the way through the calendar...\n14. ### Look up data across multiple pages\n\nHi, Its been awhile since I've really used excel and I am trying to put together a worksheet that will look across multiple worksheets and pull data. On the master worksheet in cell A1, i have a data validation where i can select a date. then in cell A2 I have a data validation to select a...\n15. ### Formula to predict a10,b10 & c10\n\n<colgroup><col><col><col></colgroup><tbody> A 1 5 B 0 C 6 2 5 1 4 3 0 3 3 4 2 2 3 5 7 6 9 6 4 2 9 7 9 8 1 8 5 2 3 9 5 7 7 </tbody> 10\n16. ### Rearranging data with VBA\n\nThis one seems simple but gets hairy as it goes on: I have 2 columns of 182 rows of data (A + B) I need to copy 2 rows of existing data (b10 & b11) ahead of each B cell containing current data. I need to insert 2 blank rows below each A cell with current data. This is what I have (Ignore...\n17. ### Countifs criteria\n\nHaving trouble finding the solution to this if there is a simple one. My formula looks like this: =COUNTIFS(Model,\\$A10,Package,\\$B10,Status,\"*DS*\") But it is not returning anything. I'm assuming this is because B10 contains varying numbers with 0's in front like \"01\" or \"02\" but the Package...\n18. ### Create error messages based on entries in 2 cells\n\nI have a sheet which calculates charges due on an account. I have a box at B10 which can be completed to offer a discount. I have a dropdown which references to box D5 where a user can select a 'Loyalty Discount' (shows \"1\" for no discount, then either \"2\"or \"3\" for the level of discount...\n19. ### Get cell reference\n\nHi, There is probably an easy way to do this but I can't seem to figure this one out. How do I construct a formula to reference a cell, like \"B10\" so that the value returned in \"B10\". I want to the formula to change when the cell moves, for example if I add a column and \"B10\" moves to \"C10...\n20. ### How to reference a sheet by number not name in formula?\n\nI have a situation where I want to reference a worksheet by sheet number and not by sheet name because the sheet name changes based on a user input (sheet name will never be standard). Typically I could use the following formula to get the value in cell B10 on sheet called B10: ='BobSmith'!B10...\n\n### This Week's Hot Topics", null, "### We've detected that you are using an adblocker.\n\nWe have a great community of people providing Excel help here, but the hosting costs are enormous. You can help keep this site running by allowing ads on MrExcel.com.\n\n### Which adblocker are you using?", null, "", null, "", null, "", null, "1)Click on the icon in the browser’s toolbar.\n2)Click on the icon in the browser’s toolbar.\n2)Click on the \"Pause on this site\" option.", null, "Go back\n\n1)Click on the icon in the browser’s toolbar.\n2)Click on the toggle to disable it for \"mrexcel.com\".", null, "Go back\n\n### Disable uBlock Origin\n\nFollow these easy steps to disable uBlock Origin\n\n1)Click on the icon in the browser’s toolbar.\n2)Click on the \"Power\" button.\n3)Click on the \"Refresh\" button.", null, "Go back\n\n### Disable uBlock\n\nFollow these easy steps to disable uBlock\n\n1)Click on the icon in the browser’s toolbar.\n2)Click on the \"Power\" button.\n3)Click on the \"Refresh\" button.", null, "Go back" ]
[ null, "https://www.mrexcel.com/img/sgw/logo.jpg", null, "https://www.mrexcel.com/img/sgw/ab_icon.png", null, "https://www.mrexcel.com/img/sgw/abp_icon.png", null, "https://www.mrexcel.com/img/sgw/ubo_icon.png", null, "https://www.mrexcel.com/img/sgw/ub_icon.png", null, "https://www.mrexcel.com/img/sgw/ab_instructions.jpg", null, "https://www.mrexcel.com/img/sgw/abp_instructions.jpg", null, "https://www.mrexcel.com/img/sgw/ubo_instructions.jpg", null, "https://www.mrexcel.com/img/sgw/ub_instructions.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74939483,"math_prob":0.59196305,"size":291,"snap":"2021-31-2021-39","text_gpt3_token_len":100,"char_repetition_ratio":0.14982578,"word_repetition_ratio":0.0,"special_character_ratio":0.3539519,"punctuation_ratio":0.17567568,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95239556,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T10:00:33Z\",\"WARC-Record-ID\":\"<urn:uuid:ce625a34-57dc-4540-ad98-58ee3c237322>\",\"Content-Length\":\"84223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dead0f78-888a-4d66-a640-cbaea11178a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bc5028a-c258-4799-b26f-4f2e7fd2829f>\",\"WARC-IP-Address\":\"216.92.17.166\",\"WARC-Target-URI\":\"https://www.mrexcel.com/board/tags/b10/\",\"WARC-Payload-Digest\":\"sha1:L2LWSKTCD43E6ECBS53XJDET6ABED47P\",\"WARC-Block-Digest\":\"sha1:7T5R55SLRXPKJIWDAXLGPOOMPOE6ZPXD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057615.3_warc_CC-MAIN-20210925082018-20210925112018-00490.warc.gz\"}"}
https://www.geeksforgeeks.org/fast-method-calculate-inverse-square-root-floating-point-number-ieee-754-format/
[ "# Fast method to calculate inverse square root of a floating point number in IEEE 754 format\n\nGiven a 32 bit floating point number x stored in IEEE 754 floating point format, find inverse square root of x, i.e., x-1/2.\n\nA simple solution is to do floating point arithmetic. Following is example function.\n\n `#include ` `#include ` `using` `namespace` `std; ` ` `  `float` `InverseSquareRoot(``float` `x) ` `{ ` `    ``return` `1/``sqrt``(x); ` `} ` ` `  `int` `main() ` `{ ` `    ``cout << InverseSquareRoot(0.5) << endl; ` `    ``cout << InverseSquareRoot(3.6) << endl; ` `    ``cout << InverseSquareRoot(1.0) << endl; ` `    ``return` `0; ` `}`\n\nOutput:\n\n```1.41421\n0.527046\n1```\n\nFollowing is a fast and interesting method based for the same. See this for detailed explanation.\n\n `#include ` `using` `namespace` `std; ` ` `  `// This is fairly tricky and complex process. For details, see  ` `// http://en.wikipedia.org/wiki/Fast_inverse_square_root ` `float` `InverseSquareRoot(``float` `x) ` `{ ` `    ``float` `xhalf = 0.5f*x; ` `    ``int` `i = *(``int``*)&x; ` `    ``i = 0x5f3759d5 - (i >> 1); ` `    ``x = *(``float``*)&i; ` `    ``x = x*(1.5f - xhalf*x*x); ` `    ``return` `x; ` `} ` ` `  `int` `main() ` `{ ` `    ``cout << InverseSquareRoot(0.5) << endl; ` `    ``cout << InverseSquareRoot(3.6) << endl; ` `    ``cout << InverseSquareRoot(1.0) << endl; ` `    ``return` `0; ` `}`\n\nOutput:\n\n```1.41386\n0.526715\n0.998307```\n\nAttention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready.\n\nMy Personal Notes arrow_drop_up\n\n## Recommended Posts:\n\nArticle Tags :\nPractice Tags :\n\n1\n\nPlease write to us at [email protected] to report any issue with the above content." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70288235,"math_prob":0.96866924,"size":1534,"snap":"2020-34-2020-40","text_gpt3_token_len":447,"char_repetition_ratio":0.15359478,"word_repetition_ratio":0.19742489,"special_character_ratio":0.30573663,"punctuation_ratio":0.18275861,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99436873,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T04:18:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a662847c-0a7c-4895-bb4d-3c5879ab6ed5>\",\"Content-Length\":\"99528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:786c97dd-5632-413b-bfc2-b2db260aaacd>\",\"WARC-Concurrent-To\":\"<urn:uuid:5390faac-0611-43b4-bee2-3a93e78cf58c>\",\"WARC-IP-Address\":\"23.221.75.33\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/fast-method-calculate-inverse-square-root-floating-point-number-ieee-754-format/\",\"WARC-Payload-Digest\":\"sha1:HTZXI4GFF735AP2GXWVHZQ2QCUHLC4GJ\",\"WARC-Block-Digest\":\"sha1:N3VTCZH7SSOKMLOXM53IMXYUZRFGAN3T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400193391.9_warc_CC-MAIN-20200920031425-20200920061425-00277.warc.gz\"}"}
https://plato.sydney.edu.au/entries/lewis-metaphysics/counterpart-sem.html
[ "## Counterpart-theoretic Semantics for Quantified Modal Logic\n\n### Lewis’s original proposal\n\nLewis (1968) proposes a translation from the language of quantified modal logic into an extensional first-order language with quantifiers ranging over possible worlds and possible individuals. Like the “standard translation” for Kripke semantics (van Benthem 1983), Lewis’s translation implicitly defines a semantics for quantified modal logic.\n\nThe eight postulates of Lewis (1968) can be seen as defining a concept of a model. As usual, a model contains some worlds and some individuals. Every world is associated with a domain of individuals which are “in” the world. According to Lewis’s second postulate, no individual is “in” more than one world; so different worlds have disjoint domains. Like in Kripke semantics, the worlds may be related by an accessibility relation. In addition, individuals may be related by a counterpart relation. Lewis’s sixth postulate says that this relation is reflexive. According to his fifth postulate, nothing is a counterpart of anything else in its world. (This assumption is dropped in Lewis 1986e, 232.) Lewis also identifies a designated “actual” world, relative to which non-modal sentences are interpreted.\n\nIn the semantics induced by Lewis's translation rules, formulas of quantified modal logic are true relative to a world and an assignment function. The interpretation of atomic sentences and boolean connectives is standard, except that disjointness of domains allows interpreting predicates in terms of sets of tuples of individuals, with no world-relativity. Quantifiers are given an “actualist” interpretation, so they range only over individuals in the relevant world. The key novelty is the interpretation of the modal operators:\n\n$$\\Box \\phi$$ is true relative to a world $$w$$ and assignment $$g$$ iff $$\\phi$$ is true relative to every world $$w^\\prime$$ accessible from $$w$$ and every assignment $$g^\\prime$$ such that $$g^\\prime(x)$$ is a counterpart of $$g(x)$$ for each variable $$x$$ that is free in $$\\phi$$.\n\n$$\\Diamond \\phi$$ is the dual of $$\\Box \\phi$$.\n\n1. The requirement of disjoint domains could easily be dropped by making the interpretation of predicates world-relative and assuming that the counterpart relation links world-individual pairs. This would not affect the logic.\n\n2. If we further assume that the counterpart relation is an equivalence relation and that it relates every individual at any world to exactly one individual at every other world, then the resulting semantics is equivalent to constant-domain Kripke semantics. (See e.g., Hughes & Cresswell 1996, 354f.)\n\n3. Lewis assumes that the language of quantified modal logic does not have names or function symbols. He suggests that English names should be analysed as wide-scope Russellian descriptions (Lewis 1968, 120f.). To simplify the following discussion, we will assume that there are names ‘$$a$$’ and ‘$$b$$’ that semantically behave like free variables, insofar as their reference is shifted by the modal operators. See Ghilardi & Meloni (1991) for how to add function symbols, and Schurz (1997), Fitting (2004), Kracht & Kutz (2005) among others, for alternative treatments of names in counterpart semantics.\n\n4. Since quantifiers are actualist, the interpretation of closed formulas only requires interpreting formulas relative to a world $$w$$ and assignment $$g$$ that maps every variable to an individual in $$w$$. Accordingly, we will assume that the names ‘a’ and ‘b’ (initially) pick out individuals in the actual world. See Forbes (1985), Kracht & Kutz (2005), Kracht & Kutz (2007), Kupffer (2010), among others, for counterpart semantics for a language with “possibilist” quantifiers.\n\n5. Lewis’s original semantics does not allow associating different counterpart relations with different terms. In Lewis (1971, 209) he says that he does not know how to allow for this in a formal semantics, because it is not clear how the relevant counterpart relations are determined. See Ramachandran (1998), Fara (2012), Schwarz (2014), Kocurek (2018), among others, for relevant proposals.\n\n6. Lewis’s original semantics also does not allow imposing constraints on the choice of counterparts for formulas that are “multiply de re” (see Hazen 1979, 328f.; Lewis 1983d, 44f.). To get around this, one can assume that the counterpart relation relates not individuals, but sequences of individuals (roughly as in Lewis 1983d). Alternatively, one can replace the single counterpart relation by a set of counterpart relations (see below, section 4).\n\n### Deviance\n\nIt is often suggested that Lewis’s semantics gives rise to a deviant and unmanageable logic. Here is a sample of non-standard features.\n\nF1. Familiar principles of S5 modal logic such as $$\\Box A \\rightarrow \\Box \\Box A$$ or $$\\Diamond A \\rightarrow \\Box \\Diamond A$$ are invalid, even if the accessibility relation is universal. (See e.g., Lewis 1968; Forbes 1982, Cresswell 2004.)\n\nF2. The “Necessity of Identity” $$\\Box \\forall x \\forall y (x=y \\rightarrow \\Box x=y)$$ and the “Necessity of Distinctness” $$\\Box \\forall x \\forall y (x \\neq y \\rightarrow \\Box x \\neq y)$$ are invalid. (See e.g., Hazen 1979; Ramachandran 2008.)\n\nF3. The “Necessity of Existence” $$\\forall y \\Box \\exists x (x=y)$$ and the Converse Barcan Formula $$\\Box \\forall x A \\rightarrow \\forall x \\Box A$$ are valid. (See e.g., Lewis 1968; Forbes 1982.)\n\nF4. Modal contexts are not closed under logical consequence. For example, $$\\Box (A \\land B) \\rightarrow \\Box A$$ is invalid, and so is $$\\Box (A \\rightarrow B) \\rightarrow (\\Box A \\rightarrow \\Box B)$$ or $$\\Box Rab \\rightarrow \\Box \\exists x Rax$$. (See e.g., Hazen 1979, Woollaston 1994.)\n\nF5. Familiar substitution rules of first-order logic appear to be invalid. For example, $$\\forall x \\Diamond x \\neq a$$ does not entail $$\\Diamond a \\neq a$$. (See e.g., Kripke 1980, fn 13; Woollaston 1994.)\n\nF6. If the language of quantified modal logic is extended by an ‘actually’ operator, we seem to get a deviant logic for the extended language. (See e.g., Hazen 1979; Fara & Williamson 2005.)\n\nIt will be useful to look at these issues in terms of their causes. F1 arises from the assumption that the counterpart relation is not an equivalence relation. F2, F5, and F6 arise from the possibility of non-functional and non-injective counterpart relations. F3, F4, and other aspects of F6 arise from the possibility that an individual may lack a counterpart at some accessible world.\n\nAs we will see, in each case one can argue that the non-standard feature should be embraced, but one can also adjust the semantics so as to block the feature.\n\n### Non-equivalence\n\nIn propositional modal logic, there is a well-known correspondence between modal schemas and properties of the accessibility relation: $$\\Box A \\rightarrow \\Box \\Box A$$ is valid on a relational frame iff the accessibility relation is transitive; $$A \\rightarrow \\Box \\Diamond A$$ is valid iff the relation is symmetric; and so on. In Lewis’s semantics, these results do not carry over to quantified modal logic. For example, $$\\Box A \\rightarrow \\Box \\Box A$$ can be invalid even though the accessibility relation is transitive, provided that the counterpart relation is intransitive.\n\nIn the main entry, we mentioned that this might be seen as an advantage of counterpart semantics, since it allows for a resolution of puzzles like the Chisholm-Chandler paradox, where we seem to have an intuitive counterexample to $$\\Box A \\rightarrow \\Box \\Box A$$. Counterpart semantics promises to vindicate the intuition without assuming that metaphysical accessibility is not an equivalence relation.\n\nIf one wants $$\\Box A \\rightarrow \\Box \\Box A$$ to be valid for metaphysical modality (despite apparent counterexamples), one can simply assume that the counterpart relation for this application is transitive. Lewis himself rejected transitivity, but he was open to the idea that the metaphysical counterparthood relation is symmetric (see Beebee & Fisher 2020, Volume 1, Letter 137).\n\nReasoning about metaphysical modality is not the only application of modal logic. Counterpart semantics has found applications in many other areas, from temporal logic (e.g., Sider 1996) to epistemic logic (e.g., van Rooij 2006) to algebraic topology (e.g., Braüner & Ghilardi 2007). Different applications might call for different logics. Relational semantics for propositional modal logic is popular in part because it allows defining a wide range of logics by putting restrictions on the accessibility relation. Counterpart semantics similarly allows defining a wider range of logics by putting joint constraints on accessibility and counterparthood.\n\n### Multiple counterparts and common counterparts\n\nLewis’s semantics allows for cases in which an individual has several counterparts at an accessible world, and for cases in which several individuals have a common counterpart at a world. This has some noteworthy consequences.\n\nFor one thing, the “Necessity of Identity” and the “Necessity of Distinctness” become invalid. Informally, if ‘$$a$$’ and ‘$$b$$’ pick out an individual with two counterparts at some accessible world, then $$\\Box a = a$$ is true, because every counterpart of $$a$$ at every accessible world is identical to itself, but $$\\Box a = b$$ is false, because not every counterpart of $$a$$ is identical to every counterpart of $$b$$.\n\nMultiple counterparts create a special challenge if the modal language is extended by operators that shift the point of evaluation to a single world, such as nominals, Stalnaker-type counterfactuals, or an ‘actually’ operator ($$\\mathsf{ACT}$$). Suppose some possible individual has two counterparts in the actual world, one of which satisfies $$\\phi(x)$$ and the other does not. Should we regard $$\\Diamond \\exists x \\mathsf{ACT} \\ \\phi(x)$$ as true or false? The first answer renders $$\\Diamond \\exists x (\\mathsf{ACT} \\ Fx \\land \\mathsf{ACT} \\ \\neg Fx)$$ satisfiable; the second $$\\Diamond \\exists x (\\neg \\mathsf{ACT} \\ Fx \\land \\neg \\mathsf{ACT} \\ \\neg Fx)$$. Some intuit that both answers are problematic. (See e.g., Hazen 1979; Fara & Williamson 2005.)\n\nIt has also been suggested that allowing for multiple counterparts invalidates familiar rules and axioms of classical predicate logic. For example, since $$\\Box a = a$$ and $$a=b$$ do not entail $$\\Box a = b$$, “Leibniz’s Law” seems to become invalid. The same is true for Universal Instantiation. For example, $$\\forall x \\Diamond (x \\neq a)$$ does not entail $$\\Diamond a \\neq a$$. (See e.g., Kripke 1980, fn.13; Woollaston 1994, 259; Cresswell 2004, 35.) Since Leibniz’s Law and Universal Instantiation remain valid for non-modal formulas, Lewis’s logic also appears to violate closure under second-order substitution. (See Bauer & Wansing 2002).\n\nLewis (1983d) rejects the charge that his semantics invalidates rules of classical logic. Universal Instantiation and Leibniz’s Law both involve substitution of terms, and Lewis points out that such rules always require a proviso to deal with the possibility of “capturing”. In classical predicate logic, for example, $$x=y \\rightarrow (\\exists y (x \\neq y) \\rightarrow \\exists y (y \\neq y))$$ is not a proper instance of Leibniz’ Law, because the variable ‘$$y$$’ gets captured by the quantifier ‘$$\\exists y$$’. In Lewis’s semantics, modal operators function as unselective binders, capturing all variables in their scope. We should therefore not be surprised that substitution rules have to be restricted. Schwarz (2012, 17f.) defines a general restriction, analogous to the condition that a variable must be “free for” another variable in classical predicate logic. A popular alternative in counterpart semantics is to introduce special syntactic machinery such as $$\\lambda$$-abstraction that allows distinguishing between de re readings and de dicto readings. Substitution is then allowed only in de re contexts, and the $$\\lambda$$-conversion rules are restricted (see Ghilardi & Meloni 1988; Corsi 2002; and Schwarz 2013 (Other Internet Resources) for details).\n\nThe fact still remains that allowing for multiple (and common) counterparts renders many rules and sentences invalid that are valid in Kripke semantics. Like in the previous section, we can distinguish two kinds of response. One is to embrace the deviance, another is to block it.\n\nAlong the first line, one might argue that allowing for contingent identity and distinctness is useful to defuse philosophical puzzles. Relevant puzzles would not include the putative identity between persons and bodies or statues and pieces of clay. In Lewis’s semantics, $$a=b \\rightarrow (\\Diamond Fa \\rightarrow \\Diamond Fb)$$ is valid, so we can’t account for identical things that appear to have different modal properties. Here we would need models with different counterpart relations (see comment 5 in section 1), so that we could have worlds where someone’s person counterpart is not identical to their body counterpart. Contingent identity in Lewis’s original semantics instead involves multiple counterparts relative to the same counterpart relation.\n\nAs mentioned in the main entry, relevant puzzles for this kind of multiple counterparthood might be cases of fission, fusion, or time-travel in the temporal dimension, and cases of possible fission, fusion, or time-travel in the modal dimension. Schwarz (2012) argues that such cases also call for a deviant logic of actually. (See also Stalnaker 1986, 136f.)\n\nFurther motivation for Lewis’s non-standard treatment of substitution comes from applications of modal logic outside metaphysics. Ninan (2018) invokes multiple counterparts to explain certain puzzles about epistemic modality. Braüner & Ghilardi (2007, 599–607, 614f.) present a range of mathematical applications that seem to call for a logic with the Lewisian restrictions on substitution.\n\nFor some application, one might still prefer a more orthodox logic of quantification, identity, and actuality. This can be achieved in several ways.\n\nOne obvious move is to simply restrict the relevant models to ones in which the counterpart relation is functional and/or injective (see e.g., Torza 2011).\n\nThis may appear to be at odds with Lewis’s assumption that counterparthood is a matter of qualitative similarity. In a world of two-way eternal recurrence, for example, the individuals in any epoch are perfect qualitative duplicates of individuals in every other epoch; a qualitative counterpart relation can then hardly select exactly one of the individuals as, say, Joe Biden’s counterpart.\n\nHowever, every Lewisian counterpart relation $$C$$ can be unravelled into a set $$F$$ of functional and injective counterpart relations: if $$x$$ at $$w$$ has two counterparts $$y$$ and $$y^\\prime$$ at $$v$$, then one of the unravelled relations will link $$x$$ with $$y$$, another will link $$x$$ with $$y^\\prime$$. We can easily adjust Lewis’s semantics to quantify over the unravelled relations:\n\n$$\\Box \\phi$$ is true relative to $$w,g$$ iff $$\\phi$$ is true relative to every world $$w^\\prime$$ accessible from $$w$$ and every assignment $$g^\\prime$$ such that for every $$f \\in F$$ and every variable $$x$$ that is free in $$phi$$, $$g^\\prime(x)$$ stands in $$f$$ to $$g(x)$$.\n\nThis kind of functional counterpart semantics was first introduced in Hazen (1977) and Hazen (1979), and has been explored in different forms by many authors; see e.g., Ghilardi (1991), Stalnaker (1994), Stalnaker (2012) (ch.3 and p154ff.), Sider (2008, Other Internet Resources), Kupffer (2010), Russell (2013), Bacon (2014).\n\nThe triple quantification in the clause for the box (over worlds, counterparts, and unravelled counterpart relations) may initially seem ad hoc, but it can be given independent motivation.\n\nFor one, it allows putting restrictions on the choice of counterparts for “multiply de re” sentences. Suppose we judge that Elizabeth II is necessarily the daughter of George VI, even though both have many counterparts in a world of eternal recurrence, so that not every counterpart of Elizabeth is the daughter of every counterpart of George. On the functional approach, we can restrict the set of counterpart relations $$F$$ so that all its members link Elizabeth to an Elizabeth counterpart who is the daughter of the George counterpart linked to George. (In that case, $$F$$ is no longer construed by unravelling a simple Lewisian counterpart relation.)\n\nThe functional approach can also be motivated by Lewis’s (1986e, 231f.) distinction between possible worlds and “individual possibilities”. Intuitively, a world of eternal recurrence represents one possibility for the world, but many possibilities for us. In general, if a world contains multiple counterparts of a given individual, then the world might be regarded as multiple possibilities for that individual, one for each counterpart. Likewise, a possibility for n individuals might be represented as a pair of a possible world and n (suitably coordinated) counterparts of the original individuals at the world. In a functional model, each unraveled counterpart relation selects a unique counterpart for every individual at every accessible world. By quantifying over both worlds and unraveled counterpart relations, the revised clause for the box effectively quantifies over all relevant individual possibilities.\n\nSurprisingly, the extra layer of quantification (over counterpart relations) has also proved useful in mathematical logic.\n\nA well-known limitation of Kripke semantics for quantified modal logic is that many important systems of propositional modal logic become incomplete when quantifiers are added. For example, Ghilardi (1991) proves—with the help of functional counterpart semantics—that every quantified system in between S4.3 and S5 is incomplete with respect to (variable-domain) Kripke semantics. (With constant domains, things are even worse; see e.g., Hughes & Cresswell 1996, 265–71.) In Ghilardi (1992), Ghilardi shows that this problem disappears in functional counterpart semantics, where the quantified extension of every canonical propositional modal logic above S4 is complete.\n\nThis formal advantage of counterpart semantics requires multiple counterpart relations (see Kracht & Kutz 2005, sec.7), but it does not require the relations to be functional and injective. Ghilardi’s result trivially generalises to a semantics with multiple, possibly non-functional, counterpart relations. See Kutz (2000), Kracht & Kutz (2002), Kracht & Kutz (2005), and Schwarz (2013, Other Internet Resources) for investigations into this kind of semantics.\n\n### Missing counterparts\n\nLewis does not assume that every individual has a counterpart at every world. This is meant to capture the intuition that most of us could have failed to exist. Oddly, however, $$\\forall x \\Box \\exists y (x=y)$$ comes out valid by Lewis’s translation rules, and so does the Converse Barcan Formula $$\\Box \\forall x A \\rightarrow \\forall x \\Box A$$. What becomes invalid instead are some basic principles of normal propositional modal logic. For example, $$\\Box (Fa \\land Gb)$$ does not entail $$\\Box Fa$$.\n\nHere is why. In Lewis’s semantics, $$\\Box Fa$$ is true iff all counterparts of $$a$$ at all worlds are $$F$$; equivalently: iff at all worlds where $$a$$ has a counterpart, all these counterparts are $$F$$. Similarly, $$\\Box (Fa \\land Gb)$$ is true iff at all worlds where $$a$$ and $$b$$ both have counterparts, all the $$a$$-counterparts are $$F$$ and all the $$b$$-counterparts $$G$$. If $$a$$ has non-$$F$$ counterparts, but only in worlds where $$b$$ has no counterparts, then $$\\Box (Fa \\land Gb)$$ is true while $$\\Box Fa$$ is false.\n\nWe can also see why $$\\forall x \\Box \\exists y (x=y)$$ is valid. $$\\Box \\exists y (x=y)$$ is true iff at every world where $$x$$ has a counterpart, all these counterparts are identical to something.\n\nAs Hazen (1979, 327f.) notes, these results are the consequence of a semantic choice that is independent of questions about trans-world identity. Kripke (1971, 137) distinguished two readings of a statement like ‘Joe Biden is necessarily human’. On its “weak” reading, the statement expresses that Biden is human at all worlds at which he exists. On its “strong” reading, the sentence expresses that Biden is human at all worlds whatsoever. Lewis adopts the weak reading. He assumes that $$\\Box Hb$$ is meant to formalise the hypothesis that it is “necessary for Biden” that he is human, meaning that he is human at all worlds where he exists (in the sense of having a counterpart).\n\nOn the weak reading of the box, the validity of $$\\forall x \\Box \\exists y (x=y)$$ and the Converse Barcan Formula should be unsurprising. The formulas do not express that everything exists at all accessible worlds. (This rather corresponds to $$\\exists x \\Box \\phi \\rightarrow \\Box \\exists x \\phi$$; see Ghilardi & Meloni 1988.) It should also be unsurprising that $$\\Box (Fa \\land Gb) \\rightarrow \\Box Fa$$ becomes invalid, unless we assume that everything exists at (or has a counterpart at) all accessible worlds. We would get the same non-standard logic if we tracked individuals by identity rather than by the counterpart relation.\n\nLewis acknowledged the problem raised by the validity of $$\\forall x \\Box \\exists y (x=y)$$. He agreed that a sentence like ‘Joe Biden necessarily exists’ is intuitively false. But he saw no systematic way of getting $$\\Box Hb$$ to be true while $$\\Box \\exists x (x=b)$$ is false (Lewis (1986e): 11-13). This is one of the considerations that led him to conclude that English modal sentences can’t be adequately translated into the standard language of quantified modal logic.\n\nThere are further reasons to doubt that the language of quantified modal logic is adequate to formalise claims about individual possibility (or weak necessity). For example, we might intuit that George VI could have existed without having any children, while Queen Elizabeth II could not have existed without having George VI as her father. That is, we might want to say that it is necessary for Elizabeth that she is the daughter of George, although it is not necessary for George that Elizabeth is his daughter: $$\\Box Fab$$ should be true relative to $$a$$ but not relative to $$b$$. The standard language of quantified modal logic does not allow drawing this distinction (compare Hunter & Seager 1981, 74f.). Similarly, there is no natural way to say that it is jointly necessary for Elizabeth and George that George has a daughter.\n\nThese considerations suggest that if we want to adopt Lewis’s preferred reading of de re modal statements, we should adjust the syntax of quantified modal logic. Corsi (2007) introduces special boxes and diamonds that can bind singular terms. Ghilardi & Meloni (1988), Ghilardi & Meloni (1991), and Ghilardi (2001) instead follow a long-standing tradition in categorical logic of using a typed language. The properly typed versions of all K-schemas then become valid. Informally, if $$Fa \\land Fb$$ is necessary for $$a$$ and $$b$$, then $$Fa$$ is also necessary for $$a$$ and $$b$$.\n\nBraüner & Ghilardi (2007) present a complete axiomatisation of Lewis’s semantics for a typed language, along with some mathematical applications. One interesting aspect of this framework is that some logical principles that are connected in Kripke semantics become independent in counterpart semantics. For example, making both accessibility and counterparthood an equivalence relation results in a logic that validates (typed versions of) all the S5 schemas, but fails to validate the Barcan Formula (see Ghilardi 2001, 105ff.).\n\nAn advantage of reading the box as weak necessity is that we don’t need to think about how a formula should be interpreted relative to a world if one of its terms does not pick out any individual in that world (see comment 4 in section 1). However, the issue must still be faced if an ‘actually’ operator is added to the language. The ‘actually’ operator is supposed to shift the point of evaluation back to the actual world, irrespective of whether the relevant individuals exist here; so we have to explain how to interpret $$\\mathsf{ACT} \\ \\phi(x)$$ if the value of $$x$$ has no counterpart in the actual world (see e.g., Fara & Williamson 2005).\n\nFrom a technical perspective, an easy way to avoid all complications arising from missing counterparts is to stipulate that—perhaps for certain applications—every individual at every world has a counterpart at every world. This move resembles assuming constant domains in Kripke semantics. It makes the “weak” reading of the box equivalent to the “strong” reading. Like in Kripke semantics, the assumption simplifies the logic, but may be regarded as philosophically problematic. To avoid rendering $$\\forall x \\Box \\exists y (x=y)$$ valid, one may add that the quantifiers only range over some of the individuals at the relevant world, distinguishing between an “inner” and an “outer domain” (see e.g., Forbes 1982; Forbes 1985; Kracht & Kutz 2002; Varzi 2020).\n\nOne can also replace Lewis’s weak reading of the box by a strong reading without assuming that everything has a counterpart at every world. Instead, one can assume that when modal operators shift the point of evaluation to another world, then the relevant terms either pick out a counterpart of their previous referent at that world, or they become empty. One then has a range of familiar options from free logic for how to interpret formulas with empty terms. (See e.g., Stalnaker 1994 and Schwarz 2012; also Ramachandran 1989 for a related approach.)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88657504,"math_prob":0.98625255,"size":25640,"snap":"2023-14-2023-23","text_gpt3_token_len":5847,"char_repetition_ratio":0.15638165,"word_repetition_ratio":0.051806923,"special_character_ratio":0.23681748,"punctuation_ratio":0.10879121,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922541,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T12:08:33Z\",\"WARC-Record-ID\":\"<urn:uuid:6e2937a3-3e34-4024-bbac-282bf9414d8e>\",\"Content-Length\":\"37951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7829500c-b0b8-481d-b6ee-94543bc5d487>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ddd0815-cfd0-40cf-b5c1-e4b6b06091e3>\",\"WARC-IP-Address\":\"104.18.21.195\",\"WARC-Target-URI\":\"https://plato.sydney.edu.au/entries/lewis-metaphysics/counterpart-sem.html\",\"WARC-Payload-Digest\":\"sha1:MZ7MKKEZBJEXC3J432NFEM5Y2GB576RH\",\"WARC-Block-Digest\":\"sha1:3JKVDEUPM2PIBSE7I62LEH5643N2WOOF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645595.10_warc_CC-MAIN-20230530095645-20230530125645-00164.warc.gz\"}"}
https://help.altair.com/flux/Flux/Help/english/UserGuide/English/topics/B(Stress)Property.htm
[ "# B(Stress) magneto-mechanical model: dependency between B(H) constitutive law and mechanical stress\n\n## Introduction\n\nThe magnetic behavior of a material is known to be sensitive to the application of mechanical stress. This may lead to significant effects that modify the performance of electromagnetic devices. For instance, the iron losses in a mechanically stressed magnetic circuit may increase up to 40%. The mechanical stress may be applied in different ways: during the construction of a magnetic device (punching, laser cut, assembly...) or during its operation (magnetostriction, centrifugal forces due to high-speed rotations, etc.).\n\nFlux can model this dependency between the B(H) constitutive relation of an electrical steel and mechanical stress through its B(Stress) analytical multiscale model. This model allows considering the influence of an equivalent mechanical stress on the B(H) magnetic behavior of the material. More specifically, this coupled property allows Flux to account for plastic or elastic deformation effects that modify the B(H) relationship that would be verified in an unconstrained sample of the material. The modified B(H) curves are computed automatically by Flux and used in the FEM computations.\n\nIt is important to note that the descriptions of both the B(H) and B(Stress) properties of a material are mandatory to account for magneto-mechanical stress effects. Figure 1 shows the computed B(H) curves that result from different values of mechanical stress for an electrical steel sheet (FeSi alloy):\n\nIn this model, the influence of mechanical stress on the shape of the B(H) curve is governed by the saturation magnetostriction constant λ. In Figure 1, the unconstrained material may be described by the Isotropic analytic saturation + knee adjustment (arctg, 3 coef.) law with the following parameters:\n\n• Initial relative permeability: 10000;\n• Saturation magnetization: 1.9 T;\nOn the other hand, the coupled B(Stress) property of this material is characterized by a saturation magnetostriction constant λ equal to 9e-6.\nParameter λ may be determined with the help of measurements (i.e., by applying a controlled stress on a magnetic sample) and depends directly on the type of alloy composing the steel sheet. A list containing the most frequent alloys and metals and the corresponding values of their saturation magnetostriction constants is provided in Table 1 below.\nTable 1. Typical saturation magnetostriction constant values for metals and alloys frequently used in electric machinery.\nAlloy / metal Saturation magnetostriction constant\nFeSi 9e-6\nFeNi 27e-6\nFeCo 70e-6\nFe -7.0e-6\nNi -33e-6\nCo -62e-6\nThe following topics are covered in the next sections:\n• How to create a B(Stress) magneto-mechanical property coupled to a B(H) magnetic property;\n• Specificities for the solving process;\n• Application example.\n\n## Creating a material with a magneto-mechanical model and assigning it to a region\n\nThe magneto-mechanical model is available only in Flux 2D for Transient Magnetic and Magneto-static applications and may be assigned to a material either during its creation or during its modification.\n\nMore specifically, the user must set both the B(H) property of the material to the appropriate subtype (i.e., isotropic analytic saturation + knee adjustment (arctg, 3 coef.)) and the B(Stress) property.\n\nThe procedure is detailed below:\n\n• First, create or edit an existing material in a Flux project. Two methods are available to perform these actions:\n1. Using the Physics menu, by selecting the option Material and then New or Edit;\n2. Through the Flux Data Tree located on the left of the main project view, i.e., in its Physics section, by double-clicking on Material to create a new one or by double-clicking on an existing material to edit it.\nDepending on the case, Flux will display either the New Material or the Edit Material dialog box.\n• Enable the Magnetic Property option that is available in the B(H) tab.\n• Then, choose the following model:\n• Isotropic analytic saturation + knee adjustment (arctg, 3 coef.) and fill the properties of the model: the Initial relative permeability, the Saturation magnetization and the Knee adjusting coefficient.\n• Enable the Magneto-mechanical property option that is available in the B(Stress) tab and then choose the following model:\n• Analytic multiscale model and fill the value of the Saturation magnetostriction constant λ.\n• Finally, the previously created material must be assigned to a Laminated magnetic non conducting region for a proper consideration of its magneto-mechanical behavior during solving process and post-processing operations (e.g. computation of iron losses). In practice, while creating or editing the chosen laminated magnetic non-conducting region, the user must enable the Mechanical stress dependence option in the main tab of the region dialog box, and then choose between one of the two following constraint types (and thus models):\n1. Uniform over the whole region or\n2. Exponential decay towards region center.\n\nIn both cases, the user must then fill the Equivalent uniaxial stress (MPa) field with the value of the intensity of the mechanical constraint in MPa. For the first case, this value impacts the B(H) law of all nodes of the region, while the second model applies this value only on all boundary nodes of the region (except symmetry and periodicity lines), being the internal nodes constrained by a lower (in absolute value) stress determined by means of an exponential decay equation. To define such a model, the user is also required to fill the Distance-to-boundary defining the stress decay rate (mm) field, which is the inverse value of the exponential decay constant (in other words, the value to fill plays the same role as the \"time constant\" of RL or RC circuits).\n\nNote: A positive stress value corresponds to a tension and a negative value to compression (i.e., the typical case for punched laminated sheets employed in electrical machines). Moreover, this stress is said to be equivalent to an uniaxial stress in the sense that the change in the B(H) property in the region corresponds to the change that would take place in a sample of the material subjected to uniaxial stress. The region remains isotropic, i.e., the modified B(H) property is used for all directions in the computations.\nNote: The first approach (Uniform over the whole region) is useful when the user decides to split the original region in sub-regions, which may include parts that are not affected by mechanical stress and others that are impacted by a mechanical constraint.\nNote:\n\nThe second approach (Exponential decay towards region center) may be used as a straightforward way to model a magnetic region that is subjected to a mechanical constraint only in a narrow band lying along its boundaries (e.g. effects of punching operations). Since it does not require splitting the original region into sub-regions, this approach makes the geometrical and the mesh descriptions easier. As a rule of thumb for meshing, it is advised to have in this area a maximum mesh element size equal to the Distance-to-boundary defining the stress decay rate (mm) value.\n\nIn this second approach, all the nodes of the region are subjected to a mechanical stress (and consequently to a modified B(H) property) whose value is highest on boundaries and decays exponentially as the nodes are located inside the region. In such a manner, 95% of the mechanical stress is applied on the narrow band lying along the region boundaries and whose width is three times the distance value filled in the Distance-to-boundary defining the stress decay rate (mm) field.\n\nA comparison between the two approaches is shown in Figure 2 below where the obtained stress distributions (isovalues) are reported.\n\n## Specificities for the solving process\n\nFlux solving process is customized in case the project to solve requires the use of this magneto-mechanical model. In particular:\n• the default options for the nonlinear Newton-Raphson solver are set in a way to ensure - in most cases - the best trade-off between accuracy and rapidity. For more details, please refer to Adjusting parameters in the Newton-Raphson method.\n• the Knee adjusting coefficient defined in the B(H) property of the material is not considered for regions where the Mechanical stress dependence option is activated. This means that in case of Equivalent uniaxial stress (MPa) value set to zero, the results may slightly be different when compared to the configuration where the Mechanical stress dependence option is not activated. This choice allows a better representation of the phenomena occurring in case of high values of mechanical compression (e.g., -150 MPa).\n\n## Application example\n\nLet's consider a Permanent Magnet Synchronous Machine (PMSM) modeled with Flux 2D and use it as an example to investigate the effect of punching on iron losses computations. Let's also consider the two approaches described in the previous section for the description of the required Laminated magnetic non-conducting region when applied to this specific machine:\n• Approach based on the constraint Uniform over the whole region: the stator of the PMSM is split in two separate regions. The first is a narrow band corresponding to the edges of the stator and has been damaged by the punching process. The second represents the innermost parts of the stator that have not been damaged through punching.\n• Approach based on the Exponential decay towards region center of the constraint: the stator is represented by a single region. The compressive mechanical stress is set to its maximum value on the boundary nodes and it exponentially decays as the nodes are located inside the region. This approach assumes that the width of the damaged zone along the boundaries is small when compared to the stator dimensions.\nSeveral computations were performed with the compressive mechanical stress going from 0 to -210 MPa and with both approaches: in the case of uniform stress a damaged zone width of 0.25 mm has been used, while in the case of exponential decaying stress, the decay rate has been fixed at 0.08 mm, so that 95% of the mechanical stress is applied on the three-times (0.24 mm) width external band. The total Bertotti iron losses were evaluated while in post-processing, after resolution of a Transient Magnetic scenario. A comparison of the total iron losses obtained for each value of mechanical stress is summarized in Figure 4 below:\n\nThe magnetic flux density distribution may be visualized through an isovalue plot in the stator, as shown in the Figure 5 below. In this figure we can clearly see the impact of the punching process and of the mechanical stress on the stator teeth boundaries. The intensity of the magnetic flux density in a narrow region along the perimeter of a tooth is visibly lower when compared to its innermost parts.\n\nThe weakening of the magnetic flux density along the boundaries is a consequence of the localized degradation of the magnetic permeability resulting from punching during the fabrication of the stator, as shown in Figure 1." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89860994,"math_prob":0.91956156,"size":9266,"snap":"2023-40-2023-50","text_gpt3_token_len":1935,"char_repetition_ratio":0.15374649,"word_repetition_ratio":0.038225256,"special_character_ratio":0.19674078,"punctuation_ratio":0.085506365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96001387,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T23:16:24Z\",\"WARC-Record-ID\":\"<urn:uuid:8cc96d5f-6a2e-4c4a-8638-d12b04eadf60>\",\"Content-Length\":\"85568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a86cd0e8-8911-4c0c-af31-422925e65006>\",\"WARC-Concurrent-To\":\"<urn:uuid:d070b668-8c2e-4d8c-84b3-3707dbb5498f>\",\"WARC-IP-Address\":\"173.225.177.121\",\"WARC-Target-URI\":\"https://help.altair.com/flux/Flux/Help/english/UserGuide/English/topics/B(Stress)Property.htm\",\"WARC-Payload-Digest\":\"sha1:32JJM3IFROCTDK74MYPMASSOSIWCG5J3\",\"WARC-Block-Digest\":\"sha1:SBF47QKO2NSLGLUFVUQ6ZMZ46IFIXYUL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510100.47_warc_CC-MAIN-20230925215547-20230926005547-00420.warc.gz\"}"}
https://www.unitconverter.dev/planeangle/milliradian/minuteofarc/1/
[ "# 🧮 Plane Angle\n\n## Milliradian to Minute of Arc\n\nThe plane angle conversion of 1 milliradian is 3.4377467707849 minute of arc.\n\nto\nMinute of Arc\nMilliradian Minute of Arc\n0.01 0.034377467707849\n0.05 0.17188733853925\n0.1 0.34377467707849\n0.25 0.85943669269623\n1 3.4377467707849\n5 17.188733853925\n10 34.377467707849\n20 68.754935415699\n50 171.88733853925\n100 343.77467707849\n\n### Plane Angle\n\nIn plane geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. Angles are also formed by the intersection of two planes in Euclidean and other spaces. These are called dihedral angles. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space, for example, the spherical angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9011904,"math_prob":0.8179375,"size":1077,"snap":"2022-40-2023-06","text_gpt3_token_len":319,"char_repetition_ratio":0.15377447,"word_repetition_ratio":0.024096385,"special_character_ratio":0.37325907,"punctuation_ratio":0.13615024,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900769,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T21:20:18Z\",\"WARC-Record-ID\":\"<urn:uuid:bee2574d-e4ce-4ae5-84fd-a99a1c8dfb13>\",\"Content-Length\":\"199824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0445402f-7b28-4685-9c97-68acdfe43a1a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a34fe08e-1d34-4343-909b-678578b9108b>\",\"WARC-IP-Address\":\"172.67.186.238\",\"WARC-Target-URI\":\"https://www.unitconverter.dev/planeangle/milliradian/minuteofarc/1/\",\"WARC-Payload-Digest\":\"sha1:6EDJ627OK4YHD5ICEPKR2KNAJUKKPAKY\",\"WARC-Block-Digest\":\"sha1:NELLG4FKXXFOB4W5ZZ75MFJSPPZWEZQ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337668.62_warc_CC-MAIN-20221005203530-20221005233530-00684.warc.gz\"}"}
https://www.celebrationsballooncompany.com/balloon-arch-size-calculator
[ "# Balloon Arch Size Calculator\n\nHow to calculate what size balloon arch you need", null, "Example 1: If the balloon arch is wider than it is tall:\n\nHeight + Width = Approximate Total Length\n\nExample 2: If the balloon arch height and width are about the same:\n\n1.5(Height) + Width = Approximate Total Length\n\nExample 3: If the balloon arch is taller than it is wide:\n\n2(Height) + Width = Approximate Total Length" ]
[ null, "https://cdn.filestackcontent.com/sV6zWrPPTxmC6hmH2Ww3", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84917206,"math_prob":0.9989924,"size":365,"snap":"2021-31-2021-39","text_gpt3_token_len":90,"char_repetition_ratio":0.15512465,"word_repetition_ratio":0.15873016,"special_character_ratio":0.24383561,"punctuation_ratio":0.09859155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922014,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T16:47:39Z\",\"WARC-Record-ID\":\"<urn:uuid:8652e6ff-331a-44f2-ba37-4de780baeb8f>\",\"Content-Length\":\"20016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b371417-a375-4851-90a4-724e4542cddf>\",\"WARC-Concurrent-To\":\"<urn:uuid:f932c2b1-ab99-4433-af81-29ab37635037>\",\"WARC-IP-Address\":\"198.199.80.168\",\"WARC-Target-URI\":\"https://www.celebrationsballooncompany.com/balloon-arch-size-calculator\",\"WARC-Payload-Digest\":\"sha1:S4LBCYTWDE4DPFSLP4D6QRFGQG6CPBUV\",\"WARC-Block-Digest\":\"sha1:R7S7M3NR2K56LKFOIZH64RHMEVM6QDSI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046149929.88_warc_CC-MAIN-20210723143921-20210723173921-00483.warc.gz\"}"}
https://www.geeksforgeeks.org/check-if-frequency-of-character-in-one-string-is-a-factor-or-multiple-of-frequency-of-same-character-in-other-string/?ref=lbp
[ "Related Articles\nCheck if frequency of character in one string is a factor or multiple of frequency of same character in other string\n• Difficulty Level : Hard\n• Last Updated : 09 Aug, 2019\n\nGiven two strings, the task is to check whether the frequencies of a character(for each character) in one string is a multiple or a factor in another string. If it is, then output “YES”, otherwise output “NO”.\n\nExamples:\n\nInput: s1 = “aabccd”, s2 = “bbbaaaacc”\nOutput: YES\nFrequency of ‘a’ in s1 and s2 are 2 and 4 respectively, and 2 is a factor of 4\nFrequency of ‘b’ in s1 and s2 are 1 and 3 respectively, and 1 is a factor of 3\nFrequency of ‘c’ in s1 and s2 are same hence it also satisfies.\nFrequency of ‘d’ in s1 and s2 are 1 and 0 respectively, but 0 is a multiple of every number, hence satisfied.\n\nInput: s1 = “hhdwjwqq”, s2 = “qwjdddhhh”\nOutput: NO\n\n## Recommended: Please try your approach on {IDE} first, before moving on to the solution.\n\nApproach:\n\n1. Store frequency of characters in s1 in first map STL.\n2. Store frequency of characters in s2 in second map STL.\n3. Let the frequency of a character in first map be F1. Let us also assume the frequency of this character in second map is F2.\n4. Check F1%F2 and F2%F1(modulo operation). If either of them is 0, then the condition is satisfied.\n5. Check it for all the characters.\n\nBelow is the implementation of the above approach:\n\n## C++\n\n `// C++ implementation of above approach ` `#include ` `using` `namespace` `std; ` ` `  `// Function that checks if the frequency of character ` `// are a factor or multiple of each other ` `bool` `multipleOrFactor(string s1, string s2) ` `{ ` `    ``// map store frequency of each character ` `    ``map<``char``, ``int``> m1, m2; ` `    ``for` `(``int` `i = 0; i < s1.length(); i++) ` `        ``m1[s1[i]]++; ` ` `  `    ``for` `(``int` `i = 0; i < s2.length(); i++) ` `        ``m2[s2[i]]++; ` ` `  `    ``map<``char``, ``int``>::iterator it; ` ` `  `    ``for` `(it = m1.begin(); it != m1.end(); it++) { ` ` `  `        ``// if any frequency is 0, then continue ` `        ``// as condition is satisfied ` `        ``if` `(m2.find((*it).first) == m2.end()) ` `            ``continue``; ` ` `  `        ``// if factor or multiple, then condition satified ` `        ``if` `(m2[(*it).first] % (*it).second == 0 ` `            ``|| (*it).second % m2[(*it).first] == 0) ` `            ``continue``; ` ` `  `        ``// if condition not satisfied ` `        ``else` `            ``return` `false``; ` `    ``} ` `} ` ` `  `// Driver code ` `int` `main() ` `{ ` `    ``string s1 = ``\"geeksforgeeks\"``; ` `    ``string s2 = ``\"geeks\"``; ` ` `  `    ``multipleOrFactor(s1, s2) ? cout << ``\"YES\"` `                             ``: cout << ``\"NO\"``; ` ` `  `    ``return` `0; ` `} `\n\n## Java\n\n `// Java implementation of above approach ` `import` `java.util.HashMap; ` ` `  `class` `GFG  ` `{ ` ` `  `    ``// Function that checks if the frequency of character ` `    ``// are a factor or multiple of each other ` `    ``public` `static` `boolean` `multipleOrFactor(String s1, String s2) ` `    ``{ ` `         `  `        ``// map store frequency of each character ` `        ``HashMap m1 = ``new` `HashMap<>(); ` `        ``HashMap m2 = ``new` `HashMap<>(); ` ` `  `        ``for` `(``int` `i = ``0``; i < s1.length(); i++)  ` `        ``{ ` `            ``if` `(m1.containsKey(s1.charAt(i))) ` `            ``{ ` `                ``int` `x = m1.get(s1.charAt(i)); ` `                ``m1.put(s1.charAt(i), ++x); ` `            ``}  ` `            ``else` `                ``m1.put(s1.charAt(i), ``1``); ` `        ``} ` ` `  `        ``for` `(``int` `i = ``0``; i < s2.length(); i++) ` `        ``{ ` `            ``if` `(m2.containsKey(s2.charAt(i)))  ` `            ``{ ` `                ``int` `x = m2.get(s2.charAt(i)); ` `                ``m2.put(s2.charAt(i), ++x); ` `            ``}  ` `            ``else` `                ``m2.put(s2.charAt(i), ``1``); ` `        ``} ` ` `  `        ``for` `(HashMap.Entry entry : m1.entrySet())  ` `        ``{ ` `             `  `            ``// if any frequency is 0, then continue ` `            ``// as condition is satisfied ` `            ``if` `(!m2.containsKey(entry.getKey())) ` `                ``continue``; ` ` `  `            ``// if factor or multiple, then condition satified ` `            ``if` `(m2.get(entry.getKey()) != ``null` `&&  ` `                ``(m2.get(entry.getKey()) % entry.getValue() == ``0` `                ``|| entry.getValue() % m2.get(entry.getKey()) == ``0``)) ` `                ``continue``; ` `             `  `            ``// if condition not satisfied ` `            ``else` `                ``return` `false``; ` `        ``} ` `        ``return` `true``; ` `    ``} ` ` `  `    ``// Driver code ` `    ``public` `static` `void` `main(String[] args)  ` `    ``{ ` `        ``String s1 = ``\"geeksforgeeks\"``, s2 = ``\"geeks\"``; ` `        ``if` `(multipleOrFactor(s1, s2)) ` `            ``System.out.println(``\"Yes\"``); ` `        ``else` `            ``System.out.println(``\"No\"``); ` ` `  `    ``} ` `} ` ` `  `// This code is contributed by ` `// sanjeev2552 `\n\n## Python3\n\n `# Python3 implementation of above approach  ` `from` `collections ``import` `defaultdict ` ` `  `# Function that checks if the frequency of  ` `# character are a factor or multiple of each other  ` `def` `multipleOrFactor(s1, s2):  ` `  `  `    ``# map store frequency of each character  ` `    ``m1 ``=` `defaultdict(``lambda``:``0``) ` `    ``m2 ``=` `defaultdict(``lambda``:``0``) ` `    ``for` `i ``in` `range``(``0``, ``len``(s1)):  ` `        ``m1[s1[i]] ``+``=` `1` ` `  `    ``for` `i ``in` `range``(``0``, ``len``(s2)):  ` `        ``m2[s2[i]] ``+``=` `1` ` `  `    ``for` `it ``in` `m1:   ` ` `  `        ``# if any frequency is 0, then continue  ` `        ``# as condition is satisfied  ` `        ``if` `it ``not` `in` `m2:  ` `            ``continue`  ` `  `        ``# if factor or multiple, then condition satified  ` `        ``if` `(m2[it] ``%` `m1[it] ``=``=` `0` `or`  `            ``m1[it] ``%` `m2[it] ``=``=` `0``):  ` `            ``continue`  ` `  `        ``# if condition not satisfied  ` `        ``else``: ` `            ``return` `False` `             `  `    ``return` `True` ` `  `# Driver code  ` `if` `__name__ ``=``=` `\"__main__\"``: ` `  `  `    ``s1 ``=` `\"geeksforgeeks\"`  `    ``s2 ``=` `\"geeks\"`  ` `  `    ``if` `multipleOrFactor(s1, s2): ``print``(``\"YES\"``) ` `    ``else``: ``print``(``\"NO\"``)  ` ` `  `# This code is contributed by Rituraj Jain `\n\n## C#\n\n `// C# implementation of the approach ` `using` `System; ` `using` `System.Collections.Generic; ` ` `  `class` `GFG  ` `{ ` ` `  `    ``// Function that checks if the  ` `    ``// frequency of character are  ` `    ``// a factor or multiple of each other ` `    ``public` `static` `Boolean multipleOrFactor(String s1,  ` `                                           ``String s2) ` `    ``{ ` `         `  `        ``// map store frequency of each character ` `        ``Dictionary<``char``, ``int``> m1 = ``new` `Dictionary<``char``, ``int``>(); ` `        ``Dictionary<``char``, ``int``> m2 = ``new` `Dictionary<``char``, ``int``>(); ` ` `  `        ``for` `(``int` `i = 0; i < s1.Length; i++)  ` `        ``{ ` `            ``if` `(m1.ContainsKey(s1[i])) ` `            ``{ ` `                ``var` `x = m1[s1[i]]; ` `                ``m1[s1[i]]= ++x; ` `            ``}  ` `            ``else` `                ``m1.Add(s1[i], 1); ` `        ``} ` ` `  `        ``for` `(``int` `i = 0; i < s2.Length; i++) ` `        ``{ ` `            ``if` `(m2.ContainsKey(s2[i]))  ` `            ``{ ` `                ``var` `x = m2[s2[i]]; ` `                ``m2[s2[i]]= ++x; ` `            ``}  ` `            ``else` `                ``m2.Add(s2[i], 1); ` `        ``} ` ` `  `        ``foreach``(KeyValuePair<``char``, ``int``> entry ``in` `m1) ` `        ``{ ` `             `  `            ``// if any frequency is 0, then continue ` `            ``// as condition is satisfied ` `            ``if` `(!m2.ContainsKey(entry.Key)) ` `                ``continue``; ` ` `  `            ``// if factor or multiple, then condition satified ` `            ``if` `(m2[entry.Key] != 0 &&  ` `               ``(m2[entry.Key] % entry.Value == 0 ||  ` `                   ``entry.Value % m2[entry.Key] == 0)) ` `                ``continue``; ` `             `  `            ``// if condition not satisfied ` `            ``else` `                ``return` `false``; ` `        ``} ` `        ``return` `true``; ` `    ``} ` ` `  `    ``// Driver code ` `    ``public` `static` `void` `Main(String[] args)  ` `    ``{ ` `        ``String s1 = ``\"geeksforgeeks\"``, s2 = ``\"geeks\"``; ` `        ``if` `(multipleOrFactor(s1, s2)) ` `            ``Console.WriteLine(``\"Yes\"``); ` `        ``else` `            ``Console.WriteLine(``\"No\"``); ` `    ``} ` `} ` ` `  `// This code is contributed by PrinciRaj1992  `\n\nOutput:\n\n```YES\n```\n\nAttention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready.\n\nMy Personal Notes arrow_drop_up\nRecommended Articles\nPage :" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5370613,"math_prob":0.9708798,"size":5849,"snap":"2021-04-2021-17","text_gpt3_token_len":1847,"char_repetition_ratio":0.12882806,"word_repetition_ratio":0.26413256,"special_character_ratio":0.34450334,"punctuation_ratio":0.185567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984757,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T15:10:20Z\",\"WARC-Record-ID\":\"<urn:uuid:ae075783-90f9-4cf2-a4ea-8ec93db9b180>\",\"Content-Length\":\"171422\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0066eb9d-cecf-4d6a-b28d-622f6c594213>\",\"WARC-Concurrent-To\":\"<urn:uuid:15dea581-399e-427e-88b6-9ed1c21ddacc>\",\"WARC-IP-Address\":\"23.221.73.187\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/check-if-frequency-of-character-in-one-string-is-a-factor-or-multiple-of-frequency-of-same-character-in-other-string/?ref=lbp\",\"WARC-Payload-Digest\":\"sha1:CHZCLWS3MRQEBF5NILCVXUUIILEZKORB\",\"WARC-Block-Digest\":\"sha1:4FLD76S7OKB6PWHDABULL6QVN4WQH3S6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703506697.14_warc_CC-MAIN-20210116135004-20210116165004-00718.warc.gz\"}"}
https://www.phywe.com/en/demo-advanced-physik-handbuch-sekundarstufe-1-mechanik-akustik-waerme-elektrik-regenerative-energie-optik.html
[ "Product details\n\n# Demo advanced Physik Handbuch Sekundarstufe 1,Mechanik, Akustik, Wärme, Elektrik, regenerative Energie,Optik\n\nItem no.: 01500-01\n\nprint\n• Description\n• Related Experiments\nFunction and Applications\n\nInstructions for more than 300 demonstration experiments at secondary level 1. 87 of these can be carried out in conjunction with the basic equipment set 01510-88.\n\nMechanics (17 + 62 experiments)\n\n• Properties of bodies (5 + 3 experiments)\n• Forces (2 + 15 experiments)\n• Simple machines (13 experiments)\n• motion (3 + 7 experiments)\n• Types of mechanical energy (1 + 3 experiments)\n• Simple harmonic oscillation (3 experiments)\n• Mechanics of liquids and gases (6 + 19 experiments)\n\nAcoustics (5 + 3 experiments)\n\n• Properties of sound, generation and propagation (5 + 3 experiments)\n\nThermodynamics (14 + 20 experiments)\n\n• Thermal expansion (7 + 2 experiments)\n• Transport of heat (3 + 2 experiments)\n• Changes in state of matter (4 + 5 experiment)\n• Heat energy (5 experiments)\n• Thermal state variables (6 experiments)\n\nRegenerative energies/conversion of energy (5 + 12 experiments)\n\n• Solar cells and solar collectors (3 + 5 experiments)\n• Fuel cells (2 + 2 experiments)\n• Energy from water (1 experiment)\n• Energy from the environment (4 experiments)\n\nElectricity (78 experiments)\n\n• Magnetostatics (5 + 2 experiments)\n• Circuits (7 + 2 experiments)\n• Electrostatics (1 + 5 experiments)\n• Electrical resistance (2 + 7 experiments)\n• Power and work (1 experiment)\n• Capacitors (3 experiments)\n• Diodes (2 + 8 experiments)\n• Transistors (2 + 13 experiments)\n• Conversion of energy (2 experiments)\n• Electrochemistry (6 experiments)\n• Electromagnetism (2 + 6 experiments)\n• Electric motors (3 + 4 experiments)\n• Induction (3 + 6 experiments)\n• Transformers (3 + 4 experiments)\n• Self-inductance (4 experiments)\n• Safety when handling electricity (1 + 2 experiments)\n• Sensors (3 experiments)\n• Operational amplifiers (3 experiments)\n\nOptics (7 + 64 experiments)\n\n• Propagation of light (1 + 7 experiments)\n• Mirrors (1 + 18 experiments)\n• Refraction (10 experiments)\n• Lenses (2 + 14 experiments)\n• The eye (3 experiments)\n• Optical equipment (1 + 5 experiments)\n• Colours (2 + 7 experiment)\n\nEquipment and Technical Data\n\n• DIN A4 handbook, ring binding, b/w, 948 pages\nExperiment Item no.\nRectilinear propagation of light P1100000\nShadow formation by a point light source P1100100\nUmbra and penumbra with two point light sources P1100200\nUmbra and penumbra with an extensive light source P1100300\nSolar and lunar eclipses with a point light source P1100500\nSolar and lunar eclipses with an extensive light source P1100600\nReflection of light P1100700\nThe law of reflection P1100800\nFormation of an image point by a plane mirror P1100900\nImage formation by a plane mirror P1101000\nApplications of reflection by plane mirrors P1101100\nReflection of light by a concave mirror P1101200\nProperties of a concave mirror P1101300\nReal images with a concave mirror P1101400\nLaw of imagery and magnification of a concave mirror P1101500\nVirtual images with a concave mirror P1101600\nAberrations with a concave mirror (catacaustics) P1101700\nReflection of light by a convex mirror P1101800\nProperties of a convex mirror P1101900\nImage formation by a convex mirror P1102000\nLaw of imagery and magnification of a convex mirror P1102100\nReflection of light by a parabolic mirror P1102200\nRefraction of light at the air-glass boundary P1102300\nRefraction of light at the air-water boundary P1102400\nThe law of refraction (quantitative) P1102500\nTotal reflection of light at the glass-air boundary P1102600\nTotal reflection of light at the water-air boundary P1102700\nPassage of light through a planoparallel glass plate P1102800\nRefraction by a prism P1102900\nLight path through a reversing prism P1103000\nLight path of through a deviating prism P1103100\nLight transmission by total reflection P1103200\nRefraction of light by a convergent lens P1103300\nProperties of a convergent lens P1103400\nReal images with a convergent lens P1103500\nLaw of imagery and magnification of a convergent lens P1103600\nVirtual images with a convergent lens P1103700\nRefraction of light at a divergent lens P1103800\nProperties of a divergent lens. OT 4.7 P1103900\nImage formation by a divergent lens P1104000\nLaw of imagery and magnification of a divergent lens P1104100\nLens combination consisting of two convergent lenses P1104200\nLens combination consisting of a convergent and a divergentlens P1104300\nSpherical aberration P1104400\nChromatic aberration P1104500\nNon-dispersivity of spectral colours P1104700\nComplementary colours P1104900\nStructure and function of the human eye P1105200\nShort-sightedness and its correction (myopia) P1105300\nLong-sightedness and its correction (hyperopia) P1105400\nThe magnifying glass P1105500\nThe camera P1105600\nThe astronomical telescope P1105700\nThe Newtonian reflecting telescope P1105800\nHerschel's reflecting telescope P1105900\nMass and weight P1251600\nExtension of a rubber band and helical spring P1251700\nHooke's law P1251800\nMaking and calibrating a dynamometer P1251900\nBending of a leaf spring P1252000\nForce and counterforce P1252100\nComposition of forces having the same line of application P1252200\nComposition of non-parallel forces P1252300\nResolution of a force into two non-parallel forces P1252400\nResolution of forces on an inclined plane P1252500\nResolution of forces on a crane P1252600\nRestoring force on a displaced pendulum P1252700\nDetermination of the centre of gravity of an irregular plate P1252800\nFrictional force P1252900\nDetermination of the coefficient of friction of an inclinedplane P1253000\nDouble-sided lever P1253100\nOne-sided lever P1253200\nDouble-sided lever and more than two forces P1253300\nReaction forces P1253400\nTorque P1253500\nLinear expansion of solid bodies P1291500\nVolume expansion of gases at constant pressure P1291600\nPressure increase during the heating of gases with constantvolume P1291700\nHeat convection in liquids and gases P1291800\nHeat conduction in solid bodies P1291900\nGay-Lussac's law P1292400\nCharles's (Amontons') law and Gay-Lussac's law P1292500\nBoyle-Mariotte law P1292600\nEnergy conversion of a roller coaster P1296400\nTension energy P1296600\nU-tube manometer P1296700\nHydrostatic pressure P1296800\nCommunicating vessel P1296900\nHydraulic press P1297000\nArtesian well P1297100\nDensity determination by measuring buoyancy P1297300\nDischarge velocity of a vessel P1297400\nPressure in flowing fluids P1297500\nPressure in gases P1297600\nBoyle-Mariotte law P1297700\nThe simple circuit P1380100\nVoltage measurement P1380200\nCurrent measurement P1380300\nConductors and non-conductors P1380400\nChangeover switches and alternating switches P1380500\nSeries and parallel connection of sources of voltage P1380600\nThe safety fuse P1380700\nThe bimetallic switch P1380800\nAnd- and Or circuit P1380900\nOhm's law P1381000\nThe resistance of wires - dependence on the length and cross-section P1381100\nThe resistance of wires- dependence on the material and tem-perature P1381200\nThe resistivity of wires P1381300\nCurrent and resistance in a parallel connection P1381400\nCurrent and resistance in a series connection P1381500\nVoltage in a series connection P1381600\nThe potentiometer P1381700\nThe internal resistance of a voltage source P1381800\nThe power and work of the electric current P1381900\nCapacitors in direct current circuits P1382000\nCharging and discharging a capacitor P1382100\nCapacitors in alternating current circuits P1382200\nDiodes as electrical valves P1382300\nCharacteristics of a silicon diode P1382500\nProperties of solar cells - dependence on the illuminance P1382600\nThe current-voltage characteristic of a solar cell P1382700\nSeries and parallel connection of solar cells - open-circuit voltage and short-circuit current P1382800\nSeries and parallel connection of solar cells - current-vol-tage characteristics and power ET 5.7 P1382900\nThe NPN transistor P1383100\nThe transistor as a direct current amplifier P1383200\nThe current-voltage characteristic of a transistor P1383300\nThe transistor as a switch P1383400\nThe transistor time-delay switch P1383500\nThe PNP transistor P1383600\nConversion of electrical energy into thermal energy P1396700\nConversion of electrical energy into mechanical energy andvice versa P1396800\nConductivity of aqueous solutions of electrolytes P1396900\nThe connection between current and voltage in conductiveprocesses in liquids P1397000\nElectrolysis P1397100\nGalvanisation P1397200\nGalvanic cells P1397300\nThe magnetic effect of a current-carrying conductor P1397700\nThe Lorentz force: current-carrying conductors in a magnetic P1397800\nThe electric bell P1397900\nThe electromagnetic relay P1398000\nControlling with a relay P1398100\nThe galvanometer P1398300\nThe synchronous motor P1398800\nGeneration of induced voltages with an electromagnet P1399000\nLenz's law P1399300\nThe behaviour of a direct current generator under load P1399400\nThe forces between the primary and secondary coils of atransformer P1399700\nSelf-induction when switching a circuit on P1399900\nSelf-induction when switching a circuit off P1400000\nThe coil in the alternating current circuit P1400100\nEarthing of the power supply line P1400300\nThe protective conductor system P1400400\nThe protective isolation transformer P1400500\nThe NTC resistor P1400600\nThe PTC resistor P1400700\nCharacteristic curve of a Zener diode P1400900\nThe Zener diode as voltage stabiliser P1401000\nLight-emitting diodes P1401100\nPhoto diodes P1401200\nBridge rectifiers P1401300\nAlternating voltage amplification with a transistor P1401500\nStabilisation of the operating point of a transistor ampli-fier stage P1401600\nTemperature control of a transistor P1401800\nUndamped electromagnetic oscillations P1401900\nThe Darlington circuit P1402000\nThe two-stage transistor amplifier P1402100\nOptical fibre communication P1402300\nDetermination of the volume of liquids and solids P1420000\nDetermination of the volume of gases P1420100\nDetermination of the density of solid bodes with equal massand different volume P1420500\nTypes of friction P1421100\nSliding friction as a function of the weight and area of bearing P1421200\nFree fall P1421600\nDetermination of the gravitational acceleration (with sup-port material) P1421702\nMeasurement of the hydrostatic pressure with a pressure element P1423100\nHydrostatic pressure measurement P1423200\nPositive pressure - negative pressure (with support mate-rial) P1423602\nDetermination of the atmospheric pressure P1423702\nSwimming, floating, sinking P1424700\nCapillary action (with support material) P1424902\nResonance and natural frequency of two tuning forks P1426300\nMusical intervals P1426500\nVolume expansion of solid bodies P1427100\nForces during the expansion of solid bodies P1427400\nDistillation P1428300\nThe effect of the magnetic force between magnets P1431900\nThe magnetic field P1432100\nInduced magnetism P1432300\nElementary magnets P1432400\nElectrostatic phenomena P1432600\nThe permanent magnet motor (with the demonstration generatorsystem) P1433302\nThe series motor (with the demonstration generator system) P1433402\nGeneration of induced voltage with a permanent magnet (witha demonstration coil) P1433602\nThe alternating current generator (with the demonstrationmotor-generator system) P1433702\nHigh-voltage line P1434300\nVolume expansion of liquids P1291300\nPreparing a thermometer scale P1291400\nAnomaly of water P1427000\nMelting of ice P1427900\nElectric charge P1432700\nThe electric charge quantity (with an electroscope) P1432801\nThe direct current generator (with the demonstration motor-generator system) P1433802\nStatic forces between electric P1432900\nElectrostatic induction (with an electroscope) P1433001\nThermal energy and heated mass P1428500\nMeasurement of the mixing temperature P1428600\n\n##### Information", null, "" ]
[ null, "https://www.phywe.com/skin/frontend/default/phywe/images/einklinker_kontakt_en.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81609756,"math_prob":0.8765164,"size":2113,"snap":"2020-45-2020-50","text_gpt3_token_len":557,"char_repetition_ratio":0.32622096,"word_repetition_ratio":0.023054754,"special_character_ratio":0.30383343,"punctuation_ratio":0.02097902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9723484,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T16:49:02Z\",\"WARC-Record-ID\":\"<urn:uuid:8d02ca71-8b30-41ac-a044-81b8f07d5284>\",\"Content-Length\":\"202777\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34ae80c0-e5b3-46a1-8b0e-3963be0e7cd7>\",\"WARC-Concurrent-To\":\"<urn:uuid:a500b9a7-6665-4932-8ee4-c20593c61f08>\",\"WARC-IP-Address\":\"185.88.214.91\",\"WARC-Target-URI\":\"https://www.phywe.com/en/demo-advanced-physik-handbuch-sekundarstufe-1-mechanik-akustik-waerme-elektrik-regenerative-energie-optik.html\",\"WARC-Payload-Digest\":\"sha1:HXORAZNPDAQD33HMHADC3XCLA56XGWYF\",\"WARC-Block-Digest\":\"sha1:KIFF5JAROKIGTMTR3DJ5AFSDYUSECNRX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141740670.93_warc_CC-MAIN-20201204162500-20201204192500-00221.warc.gz\"}"}
https://sharemylesson.com/teaching-resource/hundredths-decimals-309239
[ "# Hundredths | Decimals\n\nSubject\nResource Type Presentation\n\nShare\n\nResources\nReviews\n\nLessons include: Recognize equivalent fractions with tenths and hundredths; Recognize hundredths; Count up in hundredths; Compare and order decimals; Count on in decimal steps; Count back in decimal steps; Locate hundredths on a number line; Locate tenths and hundredths on a number line beyond one whole; Round decimals to the nearest whole number; Add numbers with 2 decimal places; Subtract numbers with 2 decimal places; Divide single- or two-digit numbers by 100; Multiply decimal numbers by 10; Multiply decimal numbers by 100\n\n## Resources\n\nFiles\n\n40.pptx\n\nPresentation\nFebruary 10, 2020\n0.8 MB", null, "" ]
[ null, "https://sharemylesson.com/themes/custom/sml3/images/illustrations/write-a-review.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78015894,"math_prob":0.94299746,"size":677,"snap":"2023-14-2023-23","text_gpt3_token_len":150,"char_repetition_ratio":0.19167905,"word_repetition_ratio":0.02,"special_character_ratio":0.20236337,"punctuation_ratio":0.12605043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9800803,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T15:20:27Z\",\"WARC-Record-ID\":\"<urn:uuid:a560b70e-38c0-48a2-b189-bbd7c827667d>\",\"Content-Length\":\"118511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25a01e12-ed91-4134-9cae-0616ffb00ce6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a1bd6e3-90f5-46c8-aef5-7783f3a3e854>\",\"WARC-IP-Address\":\"104.26.7.173\",\"WARC-Target-URI\":\"https://sharemylesson.com/teaching-resource/hundredths-decimals-309239\",\"WARC-Payload-Digest\":\"sha1:ZGJ7RQAJ4YU25O2723AEDQDD5FEXXQAQ\",\"WARC-Block-Digest\":\"sha1:J4GC6VTBR54L6ZBTOVNS7CXI6CRNCJKO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949644.27_warc_CC-MAIN-20230331144941-20230331174941-00501.warc.gz\"}"}
https://phys.libretexts.org/Bookshelves/Relativity/General_Relativity_(Crowell)/07%3A_Symmetries/7.01%3A_Killing_Vectors
[ "# 7.1: Killing Vectors\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$\n\nThe Schwarzschild metric is an example of a highly symmetric spacetime. It has continuous symmetries in space (under rotation) and in time (under translation in time). In addition, it has discrete symmetries under spatial reflection and time reversal. In Section 6.2, we saw that the two continuous symmetries led to the existence of conserved quantities for the trajectories of test particles, and that these could be interpreted as mass-energy and angular momentum.\n\nGeneralizing, we want to consider the idea that a metric may be invariant when every point in spacetime is systematically shifted by some infinitesimal amount. For example, the Schwarzschild metric is invariant under $$t → t + dt$$. In coordinates\n\n$(x^0, x^1, x^2, x^3) = (t, r, \\theta, \\phi),$\n\nwe have a vector field $$(dt, 0, 0, 0)$$ that defines the time-translation symmetry, and it is conventional to split this into two factors, a finite vector field $$\\boldsymbol{\\xi}$$ and an infinitesimal scalar, so that the displacement vector is\n\n$\\boldsymbol{\\xi} dt = (1, 0, 0, 0) dt \\ldotp$\n\nSuch a field is called a Killing vector field, or simply a Killing vector, after Wilhelm Killing. When all the points in a space are displaced as specified by the Killing vector, they flow without expansion or compression. The path of a particular point, such as the dashed line in Figure $$\\PageIndex{1}$$, under this flow is called its orbit. Although the term “Killing vector” is singular, it refers to the entire field of vectors, each of which differs in general from the others. For example, the $$\\boldsymbol{\\xi}$$ shown in Figure $$\\PageIndex{1}$$ has a greater magnitude than a $$\\boldsymbol{\\xi}$$ near the neck of the surface.", null, "Figure $$\\PageIndex{1}$$: The two-dimensional space has a symmetry which can be visualized by imagining it as a surface of revolution embedded in three-space. Without reference to any extrinsic features such as coordinates or embedding, an observer on this surface can detect the symmetry, because there exists a vector field $$\\xi$$ du such that translation by $$\\xi$$ du doesn’t change the distance between nearby points.", null, "Figure $$\\PageIndex{2}$$: Wilhelm Killing (1847-1923).\n\nThe infinitesimal notation is designed to describe a continuous symmetry, not a discrete one. For example, the Schwarzschild spacetime also has a discrete time-reversal symmetry t → −t. This can’t be described by a Killing vector, because the displacement in time is not infinitesimal.\n\nExample $$\\PageIndex{1}$$: The Euclidean plane\n\nThe Euclidean plane has two Killing vectors corresponding to translation in two linearly independent directions, plus a third Killing vector for rotation about some arbitrarily chosen origin O. In Cartesian coordinates, one way of writing a complete set of these is is\n\n$\\begin{split} \\xi_{1} &= (1, 0) \\\\ \\xi_{2} &= (0, 1) \\\\ \\xi_{3} &= (-y, x) \\ldotp \\end{split}$\n\nA theorem from classical geometry1 states that any transformation in the Euclidean plane that preserves distances and handedness can be expressed either as a translation or as a rotation about some point. The transformations that do not preserve handedness, such as reflections, are discrete, not continuous. This theorem tells us that there are no more Killing vectors to be found beyond these three, since any translation can be accomplished using $$\\xi_{1}$$ and $$\\xi_{2}$$, while a rotation about a point P can be done by translating P to O, rotating, and then translating O back to P.\n\n1 Coxeter, Introduction to Geometry, ch. 3\n\nIn the example of the Schwarzschild spacetime, the components of the metric happened to be independent of t when expressed in our coordinates. This is a sufficient condition for the existence of a Killing vector, but not a necessary one. For example, it is possible to write the metric of the Euclidean plane in various forms such as\n\n$ds^2 = dx^2 + dy^2$\n\nand\n\n$ds^2 = dr^2 + r^2 \\,d \\phi^{2}.$\n\nThe first form is independent of x and y, which demonstrates that x → x + dx and y → y + dy are Killing vectors, while the second form gives us $$\\phi \\rightarrow \\phi + d \\phi$$. Although we may be able to find a particular coordinate system in which the existence of a Killing vector is manifest, its existence is an intrinsic property that holds regardless of whether we even employ coordinates. In general, we define a Killing vector not in terms of a particular system of coordinates but in purely geometrical terms: a space has a Killing vector $$\\boldsymbol{\\xi}$$ if translation by an infinitesimal amount $$\\boldsymbol{\\xi}$$ du doesn’t change the distance between nearby points. Statements such as “the spacetime has a timelike Killing vector” are therefore intrinsic, since both the timelike property and the property of being a Killing vector are coordinate-independent.", null, "Figure $$\\PageIndex{3}$$: Vectors at a point P on a sphere can be visualized as occupying a Euclidean plane that is particular to P.\n\nKilling vectors, like all vectors, have to live in some kind of vector space. On a manifold, this vector space is particular to a given point, Figure $$\\PageIndex{3}$$. A different vector space exists at every point, so that vectors at different points, occupying different spaces, can be compared only by parallel transport. Furthermore, we really have two such spaces at a given point, a space of contravariant vectors and a space of covariant ones. These are referred to as the tangent and cotangent spaces.The infinitesimal displacements we’ve been discussing belong to the contravariant (upper-index) space, but by lowering and index we can just as well discuss them as covariant vectors. The customary way of notating Killing vectors makes use of the fact, mentioned in passing in Section 5.10, that the partial derivative operators $$\\partial_{0}, \\partial_{1}, \\partial_{2}, \\partial_{3}$$ form the basis for a vector space. In this notation, the Killing vector of the Schwarzschild metric we’ve been discussing can be notated simply as\n\n$\\boldsymbol{\\xi} = \\partial_{t} \\ldotp$\n\nThe partial derivative notation, like the infinitesimal notation, implicitly refers to continuous symmetries rather than discrete ones. If a discrete symmetry carries a point P1 to some distant point P2, then P1 and P2 have two different tangent planes, so there is not a uniquely defined notion of whether vectors $$\\boldsymbol{\\xi}_{1}$$ and $$\\boldsymbol{\\xi}_{1}$$ at these two points are equal — or even approximately equal. There can therefore be no well-defined way to construe a statement such as, “P1 and P2 are separated by a displacement $$\\boldsymbol{\\xi}$$.” In the case of a continuous symmetry, on the other hand, the two tangent planes come closer and closer to coinciding as the distance s between two points on an orbit approaches zero, and in this limit we recover an approximate notion of being able to compare vectors in the two tangent planes. They can be compared by parallel transport, and although parallel transport is path-dependent, the difference bewteen paths is proportional to the area they enclose, which varies as s2, and therefore becomes negligible in the limit s → 0.\n\nExercise $$\\PageIndex{1}$$\n\nFind another Killing vector of the Schwarzschild metric, and express it in the tangent-vector notation.\n\nIt can be shown that an equivalent condition for a field to be a Killing vector is\n\n$\\nabla_{a} \\boldsymbol{\\xi}_{b} + \\nabla_{b} \\boldsymbol{\\xi}_{a} = 0.$\n\nThis relation, called the Killing equation, is written without reference to any coordinate system, in keeping with the coordinate-independence of the notion.\n\nWhen a spacetime has more than one Killing vector, any linear combination of them is also a Killing vector. This means that although the existence of certain types of Killing vectors may be intrinsic, the exact choice of those vectors is not.\n\nExample $$\\PageIndex{2}$$: Euclidean translations\n\nThe Euclidean plane has two translational Killing vectors (1, 0) and (0, 1), i.e., $$\\partial_{x}$$ and $$\\partial_{y}$$. These same vectors could be expressed as (1, 1) and (1, −1) in coordinate system that was rescaled and rotated by 45 degrees.\n\nExample $$\\PageIndex{3}$$: a cylinder\n\nThe local properties of a cylinder, such as intrinsic flatness, are the same as the local properties of a Euclidean plane. Since the definition of a Killing vector is local and intrinsic, a cylinder has the same three Killing vectors as a plane, if we consider only a patch on the cylinder that is small enough so that it doesn’t wrap all the way around. However, only two of these — the translations — can be extended to form a smooth vector field on the entire surface of the cylinder. These might be more naturally notated in ($$\\phi$$, z) coordinates rather than (x, y), giving $$\\partial_{z}$$ and $$\\partial_{\\phi}$$.", null, "Figure $$\\PageIndex{4}$$: A cylinder has three local symmetries, but only two that can be extended globally to make Killing vectors.\n\nExample $$\\PageIndex{4}$$: a sphere\n\nA sphere is like a plane or a cylinder in that it is a two-dimensional space in which no point has any properties that are intrinsically different than any other. We might expect, then, that it would have two Killing vectors. Actually it has three, $$\\xi_{x} , \\xi_{y}$$, and $$\\xi_{z}$$, corresponding to infinitesimal rotations about the x, y, and z axes. To show that these are all independent Killing vectors, we need to demonstrate that we can’t, for example, have $$\\xi_{x} = c_{1} \\xi_{y} + c_{2} \\xi_{z}$$ for some constants c1 and c2. To see this, consider the actions of $$\\xi_{y}$$ and $$\\xi_{z}$$ on the point P where the x axis intersects the sphere. (References to the axes and their intersection with the sphere are extrinsic, but this is only for convenience of description and visualization.) Both $$\\xi_{y}$$ and $$xi_{z}$$ move P around a little, and these motions are in orthogonal directions, whereas $$\\xi_{x}$$ leaves P fixed. This proves that we can’t have $$\\xi_{x} = c_{1} \\xi_{y} + c_{2} \\xi_{z}$$. All three Killing vectors are linearly independent.\n\nThis example shows that linear independence of Killing vectors can’t be visualized simply by thinking about the vectors in the tangent plane at one point. If that were the case, then we could have at most two linearly independent Killing vectors in this two-dimensional space. When we say “Killing vector” we’re really referring to the Killing vector field, which is defined everywhere on the space.\n\nExample $$\\PageIndex{5}$$: Proving nonexistence of Killing vectors\n\n• Find all Killing vectors of these two metrics: $$\\begin{split} ds^{2} &= e^{-x} dx^{2} + e^{x} dy^{2} \\\\ ds^{2} &= dx^{2} + x^{2} dy^{2} \\ldotp \\end{split}$$\n• Since both metrics are manifestly independent of y, it follows that $$\\partial_{y}$$ is a Killing vector for both of them. Neither one has any other manifest symmetry, so we can reasonably conjecture that this is the only Killing vector either one of them has. However, one can have symmetries that are not manifest, so it is also possible that there are more.\n\nOne way to attack this would be to use the Killing equation to find a system of differential equations, and then determine how many linearly independent solutions there were.\n\nBut there is a simpler approach. The dependence of these metrics on x suggests that the spaces may have intrinsic properties that depend on x; if so, then this demonstrates a lower symmetry than that of the Euclidean plane, which has three Killing vectors. One intrinsic property we can check is the scalar curvature R. The following Maxima code calculates R for the first metric.", null, "The result is R = −ex, which demonstrates that points that differ in x have different intrinsic properties. Since the flow of a Killing field $$\\xi$$ can never connect points that have different properties, we conclude that $$\\xi_{x} = 0$$. If only $$\\xi_{y}$$ can be nonzero, the Killing equation $$\\nabla_{a} \\xi_{b} + \\nabla_{b} \\xi_{a} = 0$$ simplifies to $$\\nabla_{x} \\xi_{y} = \\nabla_{y} \\xi_{y} = 0$$. These equations constrain both $$\\partial_{x} \\xi_{y}$$ and $$\\partial_{y} \\xi_{y}$$, which means that given a value of $$\\xi_{y}$$ at some point in the plane, its value everywhere else is determined. Therefore the only possible Killing vectors are scalar multiples of the Killing vector already found. Since we don’t consider Killing vectors to be distinct unless they are linearly independent, the first metric only has one Killing vector.\n\nA similar calculation for the second metric shows that R = 0, and an explicit calculation of its Riemann tensor shows that in fact the space is flat. It is simply the Euclidean plane written in funny coordinates. This metric has the same three Killing vectors as the Euclidean plane.\n\nIt would have been tempting to leap to the wrong conclusion about the second metric by the following reasoning. The signature of a metric is an intrinsic property. The metric has signature ++ everywhere in the plane except on the y axis, where it has signature +0. This shows that the y axis has different intrinsic properties than the rest of the plane, and therefore the metric must have a lower symmetry than the Euclidean plane. It can have at most two Killing vectors, not three. This contradicts our earlier conclusion. The resolution of this paradox is that this metric has a removable degeneracy of the same type as the one described in section 6.4. As discussed in that section, the signature is invariant only under nonsingular transformations, but the transformation that converts these coordinates to Cartesian ones is singular.\n\n## Inappropriate Mixing of Notational Systems\n\nConfusingly, it is customary to express vectors and dual vectors by summing over basis vectors like this:\n\n$\\begin{split} \\textbf{v} &= v^{\\mu} \\partial_{\\mu} \\\\ \\boldsymbol{\\omega} &= \\omega_{\\mu} dx^{\\mu} \\ldotp \\end{split}$\n\nThis is an abuse of notation, driven by the desire to have up-down pairs of indices to sum according to the usual rules of the Einstein notation convention. But by that convention, a quantity like v or $$\\boldsymbol{\\omega}$$ with no indices is a scalar, and that’s not the case here. The products on the right are not tensor products, i.e., the indices aren’t being contracted.\n\nThis muddle is the result of trying to make the Einstein notation do too many things at once and of trying to preserve a clumsy and outdated system of notation and terminology originated by Sylvester in 1853. In pure abstract index notation, there are not six flavors of objects as in the two equations above but only two: vectors like va and dual vectors like $$\\omega_{a}$$. The Sylvester notation is the prevalent one among mathematicians today, because their predecessors committed themselves to it a century before the development of alternatives like abstract index notation and birdtracks. The Sylvester system is inconsistent with the way physicists today think of vectors and dual vectors as being defined by their transformation properties, because Sylvester considers v and $$\\boldsymbol{\\omega}$$ to be invariant.\n\nMixing the two systems leads to the kinds of notational clashes described above. As a particularly absurd example, a physicist who is asked to suggest a notation for a vector will typically pick up a pen and write v$$\\mu$$. We are then led to say that a vector is written in a concrete basis as a linear combination of dual vectors $$\\partial_{\\mu}$$!\n\n## Conservation Laws\n\nWhenever a spacetime has a Killing vector, geodesics have a constant value of vb$$\\xi_{b}$$, where vb is the velocity four-vector. For example, because the Schwarzschild metric has a Killing vector $$\\boldsymbol{\\xi} = \\partial_{t}$$, test particles have a conserved value of vt, and therefore we also have conservation of pt, interpreted as the mass-energy.\n\nExample $$\\PageIndex{6}$$: Energy-momentum in flat 1+1 spacetime\n\nA flat 1+1-dimensional spacetime has Killing vectors $$\\partial_{x}$$ and $$\\partial_{t}$$. Corresponding to these are the conserved momentum and mass-energy, p and E. If we do a Lorentz boost, these two Killing vectors get mixed together by a linear transformation, corresponding to a transformation of p and E into a new frame.\n\nIn addition, one can define a globally conserved quantity found by integrating the flux density Pa = Tab$$\\xi_{b}$$ over the boundary of any compact orientable region.2 In case of a flat spacetime, there are enough Killing vectors to give conservation of energy-momentum and angular momentum.\n\nNote\n\nHawking and Ellis, The Large Scale Structure of Space-Time, p. 62, give a succinct treatment that describes the flux densities and proves that Gauss’s theorem, which ordinarily fails in curved spacetime for a non-scalar flux, holds in the case where the appropriate Killing vectors exist. For an explicit description of how one can integrate to find a scalar mass-energy, see Winitzki, Topics in General Relativity, section 3.1.5, available for free online.\n\nThis page titled 7.1: Killing Vectors is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Benjamin Crowell via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request." ]
[ null, "https://phys.libretexts.org/@api/deki/files/10158/Figure_7.1.2.png", null, "https://phys.libretexts.org/@api/deki/files/10162/Figure_7.1.b.png", null, "https://phys.libretexts.org/@api/deki/files/10159/Figure_7.1.3.png", null, "https://phys.libretexts.org/@api/deki/files/10160/Figure_7.1.4.png", null, "https://phys.libretexts.org/@api/deki/files/10161/Figure_7.1.a.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9146726,"math_prob":0.998342,"size":16187,"snap":"2023-14-2023-23","text_gpt3_token_len":3770,"char_repetition_ratio":0.14669715,"word_repetition_ratio":0.007739938,"special_character_ratio":0.23679496,"punctuation_ratio":0.0989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990351,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T22:27:01Z\",\"WARC-Record-ID\":\"<urn:uuid:d219cca8-2abf-4cf2-926f-c002d85da478>\",\"Content-Length\":\"136779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81d38e06-769a-4ab4-afa0-e4355e701f2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd2c38b7-54b3-4f1b-bb4a-c312fcba8cae>\",\"WARC-IP-Address\":\"18.160.46.56\",\"WARC-Target-URI\":\"https://phys.libretexts.org/Bookshelves/Relativity/General_Relativity_(Crowell)/07%3A_Symmetries/7.01%3A_Killing_Vectors\",\"WARC-Payload-Digest\":\"sha1:CY2AY7EQNTCUALHVAOHOTMHXO4M5T6BZ\",\"WARC-Block-Digest\":\"sha1:GPCZLIS2HXXP5VYWKDLYNEJOAMQISA35\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949689.58_warc_CC-MAIN-20230331210803-20230401000803-00003.warc.gz\"}"}
https://plainmath.net/25660/determine-whether-polynomial-expressed-linear-combination-polynomials
[ "", null, "# Determine whether the first polynomial can be expressed as a linear combination of other two polynomials.2x^3-2x^2+12x-6,x^3-2x^2-5x-3,3x^3-5x^2-4x-9", null, "FobelloE 2021-09-03 Answered\n\nDetermine whether the first polynomial can be expressed as a linear combination of other two polynomials.\n$2{x}^{3}-2{x}^{2}+12x-6,$\n\n${x}^{3}-2{x}^{2}-5x-3,$\n\n$3{x}^{3}-5{x}^{2}-4x-9$\n\nYou can still ask an expert for help\n\n• Questions are typically answered in as fast as 30 minutes\n\nSolve your problem for the price of one coffee\n\n• Math expert for every subject\n• Pay only if we can solve it", null, "timbalemX\n\nThe given polynomials are\n$2{x}^{3}-2{x}^{2}+12x-6$\n${x}^{3}-2{x}^{2}-5x-3$\n$3{x}^{3}-5{x}^{2}-4x-9$\nThe polynomial can be written as vector form\n${v}_{1}=\\left[\\begin{array}{c}2\\\\ -2\\\\ 12\\\\ -6\\end{array}\\right],{v}_{2}=\\left[\\begin{array}{c}1\\\\ -2\\\\ -5\\\\ -3\\end{array}\\right],{v}_{2}=\\left[\\begin{array}{c}3\\\\ -5\\\\ -4\\\\ -9\\end{array}\\right]$\nLet, ${v}_{1}=a{v}_{2}+b{v}_{3}$\n$\\left[\\begin{array}{c}2\\\\ -2\\\\ 12\\\\ -6\\end{array}\\right]=a\\left[\\begin{array}{c}1\\\\ -2\\\\ -5\\\\ -3\\end{array}\\right]+b\\left[\\begin{array}{c}3\\\\ -5\\\\ -4\\\\ -9\\end{array}\\right]$\n$\\left[\\begin{array}{c}2\\\\ -2\\\\ 12\\\\ -6\\end{array}\\right]=\\left[\\begin{array}{c}a+3b\\\\ -2a-5b\\\\ 5a-4b\\\\ -3a-9b\\end{array}\\right]$\n$⇒a+3b=2⇒-5a-4b=12$\n$⇒-2a-5b=-2⇒-3a-9b=-6$\nusing equation\n$6b-5b=4-2$\n$b=2$\n$a=-4$\nThese values of a,b are also santisty, hence polynomial having linear relationship\n${P}_{1}=-4{P}_{2}+2{P}_{3}$\nor $2{P}_{3}={P}_{1}+4{P}_{2}$\n${P}_{1}=$ I polynomial\n${P}_{2}=$ II polynomial\n${P}_{3}=$ III polynomial" ]
[ null, "https://plainmath.net/build/images/search.png", null, "https://plainmath.net/build/images/avatar.jpeg", null, "https://plainmath.net/build/images/avatar.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.881245,"math_prob":0.99996305,"size":388,"snap":"2022-05-2022-21","text_gpt3_token_len":75,"char_repetition_ratio":0.20052083,"word_repetition_ratio":0.42857143,"special_character_ratio":0.16494845,"punctuation_ratio":0.07575758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999517,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T08:10:48Z\",\"WARC-Record-ID\":\"<urn:uuid:d5996c3f-0ae6-41d6-b583-071912e04f7f>\",\"Content-Length\":\"107421\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29bf5edd-978b-4eb6-bb1b-d1d553cca96c>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3aeb96e-6ede-4712-91d1-b7543d60153b>\",\"WARC-IP-Address\":\"172.66.40.79\",\"WARC-Target-URI\":\"https://plainmath.net/25660/determine-whether-polynomial-expressed-linear-combination-polynomials\",\"WARC-Payload-Digest\":\"sha1:SJSAXBDLQG277LPPY2XT7ESOFAOIQ5IH\",\"WARC-Block-Digest\":\"sha1:YJXLJKAK7TAOSWQ225D3QGRBRN2Q7SAR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663048462.97_warc_CC-MAIN-20220529072915-20220529102915-00542.warc.gz\"}"}
https://csharp.2000things.com/2011/09/01/
[ "## #402 – Value Equality vs. Reference Equality\n\nWhen we normally think of “equality”, we’re thinking of value equality–the idea that the values stored in two different objects are the same.  This is also known as equivalence.  For example, if we have two different variables that both store an integer value of 12, we say that the variables are equal.\n\n``` int i1 = 12;\nint i2 = 12;\n\n// Value equality - evaluates to true\nbool b2 = (i1 == i2);\n```\n\nThe variables are considered “equal”, even though we have two different copies of the integer value of 12.", null, "We can also talk about reference equality, or identity–the idea that two variables refer to exactly the same object in memory.\n\n``` Dog d1 = new Dog(\"Kirby\", 15);\nDog d2 = new Dog(\"Kirby\", 15);\nDog d3 = d1;\n\nbool b1 = (d1 == d2); // Evaluates to false\nbool b2 = (d1 == d3); // Evaluates to true\n```", null, "In C#, the == operator defaults to using value equality for value types and reference equality for reference types." ]
[ null, "https://2000thingscsharp.files.wordpress.com/2011/08/001-valueequality.png", null, "https://2000thingscsharp.files.wordpress.com/2011/09/002-referenceequality1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84915584,"math_prob":0.99969137,"size":973,"snap":"2022-40-2023-06","text_gpt3_token_len":250,"char_repetition_ratio":0.13622291,"word_repetition_ratio":0.011428571,"special_character_ratio":0.29290852,"punctuation_ratio":0.12565444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980897,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T19:02:31Z\",\"WARC-Record-ID\":\"<urn:uuid:ccc8b702-4829-493c-b8c3-8924695d4daa>\",\"Content-Length\":\"98408\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2622768f-33c9-467a-b8bb-00ba5b4e3d7e>\",\"WARC-Concurrent-To\":\"<urn:uuid:0328dc32-55ce-411e-8a22-da1e62079023>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://csharp.2000things.com/2011/09/01/\",\"WARC-Payload-Digest\":\"sha1:3I7IGHPXHZ7JNIIJA7L257MDJ6YHZPKT\",\"WARC-Block-Digest\":\"sha1:BCIBTE4MCSV7YBX6U5OHOFZCPUXFSPFZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00231.warc.gz\"}"}
https://www.learninggamesforkids.com/graphic_organizers/math/subtraction-math/subtraction-by-10.html
[ "### Difference of 10 Graphic Organizer\n\nGraphic organizers are a great way to help students see number combinations whose difference is the same number. As they find equations that lead to a common difference, students will make connections between numbers and operations. They’ll get a sense of what subtraction really means. By using graphic organizers, students will also begin to memorize number pairs that will help them solve more complex math problems in the future. In the Difference of 10 Graphic Organizer, students fill in 10 equations that have the difference of 10. This exercise helps students with number and subtraction automaticity. It will also help students as they begin to learn numbers and operations in base ten. Using a graphic organizer allows students to look at the numbers and the equations in new ways. They will make connections that aid their learning as they grow and the complexity of required math tasks increases." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9407425,"math_prob":0.9263279,"size":1384,"snap":"2020-45-2020-50","text_gpt3_token_len":252,"char_repetition_ratio":0.14275362,"word_repetition_ratio":0.0,"special_character_ratio":0.17485549,"punctuation_ratio":0.07234043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9648005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T02:37:35Z\",\"WARC-Record-ID\":\"<urn:uuid:eab13da5-af64-4e6d-8f04-1ada139fb404>\",\"Content-Length\":\"143680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:427ec0fd-f59b-4816-8a94-93ed8fedd885>\",\"WARC-Concurrent-To\":\"<urn:uuid:18fa6e06-900c-45a5-9a50-663aab207711>\",\"WARC-IP-Address\":\"104.26.5.193\",\"WARC-Target-URI\":\"https://www.learninggamesforkids.com/graphic_organizers/math/subtraction-math/subtraction-by-10.html\",\"WARC-Payload-Digest\":\"sha1:NUFKQ56322YJ7G3Y6H3MDJ2UZZ7SBFWE\",\"WARC-Block-Digest\":\"sha1:5RASCV7UQ7TO7HQZSCSL6XW5EXVC3Q56\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107906872.85_warc_CC-MAIN-20201030003928-20201030033928-00678.warc.gz\"}"}
https://investguiding.com/articles/what-is-a-test-of-sufficiency
[ "# What is a test of sufficiency? (2023)\n\n## How do you answer data sufficiency?\n\nYou need to remember the steps involved in solving a particular Data Sufficiency question and follow them in this particular order: Check A (i.e. the first statement), then Check B (i.e. the second statement) and lastly, if required, combine the two statements to get the answer.\n\n(Video) Sufficient Statistics\nHow do you answer sufficient assumption questions?\n\nThe question asks us to prove the conclusion. The way to answer sufficient assumption questions is to arrange the evidence, find the gap, and add a new premise that lets you draw the conclusion. Here, conditional logic is key, but this will not always be the case.\n\n(Video) Understanding the FHA Self Sufficiency Test\n(Mark Fisher)\nWhat is data sufficiency test?\n\nData sufficiency means checking and testing a given set of information to see if it is enough to answer a given question. These are designed to test the candidate's ability to correlate every provided question to reach a conclusion.\n\n(Video) Online Lesson: Introduction to GMAT Data Sufficiency\n(PrepScholar GMAT)\nHow do you answer data sufficiency questions GMAT?\n\nOur five tips for GMAT data sufficiency\n1. Don't waste your time solving the problem. ...\n2. Memorize the answer choices. ...\n3. Consider each statement separately. ...\n4. Make sure the solution answers the question that is asked. ...\n5. Brush up on your math.\nApr 14, 2020\n\n(Video) How to Calculate the FHA Self Sufficiency Test\n(Sarita Sells)\nHow many types of data sufficiency are there?\n\nThere are two basic kinds of data sufficiency questions: value questions and yes/no questions. Value questions ask you to find a numerical value (e.g., what's the value of 5x?). For value questions, if you're able to find a specific value using the information in either statement, then that statement is sufficient.\n\n(Video) FHA Multi-Family Self Sufficiency Test\n(suzanne caldeira)\nHow do you ace data sufficiency questions?\n\nWith data sufficiency, you don't need to know the exact solution to the question. Therefore, don't try to solve the system of equations provided. All you need to do is simply recognize that putting together the two statements gives you sufficient information to answer the question.\n\n(Video) Buyer Beware! FHA’s Self Sufficiency Test\n(DiSpirito Team Top Real Estate Professionals)\nWhat is the difference between necessary and sufficient assumption questions?\n\nA sufficient assumption is an assumption that, if true, would make the whole argument totally valid. A necessary assumption is an assumption that needs to be true in order for the conclusion to be possible.\n\n(Video) FHA Self Sufficiency Test for 3 & 4 units.\n(Hughes Mortgage)\nWhat is a sufficient assumption?\n\nIf you add a statement in a choice to the argument's support and the conclusion becomes guaranteed, then you've found a sufficient assumption!\n\n(Video) Sufficiency Problems | Math | DAT OAT Exam | DAT Destroyer\n(DAT Destroyer ORGOMAN)\nWhat are 3 strategies for challenging your assumptions?\n\nHere are 5 ways to challenge your assumptions:\n• Respond don't react. ...\n• Decide to see positive intentions. ...\n• Empower and Equip Everyone. ...\n• Shift from expectation to shared understanding.\nDec 11, 2015\n\n(Video) 🎯Target CHT 2021 || Data Sufficiency || Shortcut,Tips & Tricks || Reasoning class for all Exams\n(It's STUDY TIME ! )\nWhat is a sufficiency study?\n\n1 Introduction. Statistical sufficiency is a concept in the theory of statistical inference that is meant to capture an intuitive notion of summarizing a large and possibly complex set of data by relatively few summary numbers that carry the relevant information in the larger data set.\n\n(Video) Civil Service Reviewer - Data Sufficiency (Practice Exam No. 1)\n(CSE Review Masterclass)\n\n## What is data sufficiency in quantitative aptitude?\n\nData sufficiency covers many different topics of quantitative aptitude. In data sufficiency, usually, a question is followed by two or three statements. You need to determine whether any of the statements individually or together are required to find the answer.\n\n(Video) Sufficient For Today (feat. Maryanne J. George) | Maverick City | TRIBL\n(TRIBL)\nWhat is data sufficiency in CAT?\n\nData Sufficiency, a very important topic of the exam, tests the ability of a candidate to determine whether a given set of data is sufficient to answer the question given. The candidates are not required to find the solution to the question.", null, "What are the 5 options in data sufficiency?\n\n• Statement (1) alone is sufficient to answer the question; Statement (2) alone is not.\n• Statement (2) alone is sufficient to answer the question; Statement (1) alone is not.\n• Only when considered together (T) do you have sufficient information to answer the question.\nJun 4, 2021\n\nHow do you ask good data questions?\n\nTo sum it up, here are the most important data questions to ask:\n1. What exactly do you want to find out?\n2. What standard KPIs will you use that can help?\n3. Where will your data come from?\n4. How can you ensure data quality?\n5. Which statistical analysis techniques do you want to apply?\nFeb 15, 2022\n\nWhat is a question that can be answered by collecting data?\n\nI will be able to understand the concept of variability within a data set. A statistical question is a question that can be answered by collecting data that vary.\n\nWhat is data Interpretation and data sufficiency?\n\nWhat is Data Sufficiency? Data Sufficiency is to check and test the given set of information, whether it is enough to answer a question or not. Data Sufficiency-type questions are designed to test the candidate's ability to relate given information to reach a conclusion.\n\nWhat is data reasoning?\n\nReasoning with Data is a quantitative methods textbook that puts simulations, hands-on examples, and conceptual reasoning first. That approach is made possible in part thanks to the widespread availability of the free and open-source R platform for data analysis and graphics (R Core Team, 2016).\n\nWhat is data Interpretation in aptitude?\n\nData Interpretation is the process of making sense out of a collection of data that has been processed. This collection may be present in various forms like bar graphs, line charts and tabular forms and other similar forms and hence needs an interpretation of some kind.\n\nHow do you calculate data interpretation?\n\nTips on how to answer data interpretation questions\n1. 1) You don't need a maths degree. ...\n2. 2) Review the data first. ...\n4. 4) Remember it is multiple choice. ...\n5. Revise and practice your skills. ...\n6. Get faster.\nDec 2, 2020\n\nWhat are the three key questions to ask the data creator when you receive a data set from them?\n\nEvery organization is collecting data today, but very few know what to do with it. Part of the challenge is, organizations don't know what to ask of data?\n...\n• How am I doing? ...\n• What drives my business? ...\n• Who are my customers, what are their needs?\nAug 19, 2012\n\n## Why is data sufficiency important?\n\nThe major advantage of Data Sufficiency is that you actually do not need to solve the questions. You just need to find out those statements which are required to find out the answer to the given question. You need to have proper conceptual knowledge to solve these questions without any pen and paper.\n\nIs data sufficiency asked in GRE?\n\nWhile both have multiple choice, select one problem solving questions, the GRE features Quantitative Comparison problems at the beginning of each Quant section while the GMAT intersperses Data Sufficiency problems throughout its Quantitative section.\n\nIs data sufficiency asked in GMAT?\n\nAbout 40% of the questions that appear in the GMAT quant section are data sufficiency questions. You will be provided with a question and two statements. You have to determine whether the information given in the statements is sufficient to answer the question asked.\n\nYou might also like\nPopular posts\nLatest Posts\nArticle information\n\nAuthor: Margart Wisoky\n\nLast Updated: 02/08/2023\n\nViews: 6716\n\nRating: 4.8 / 5 (78 voted)\n\nAuthor information\n\nName: Margart Wisoky\n\nBirthday: 1993-05-13\n\nAddress: 2113 Abernathy Knoll, New Tamerafurt, CT 66893-2169\n\nPhone: +25815234346805\n\nJob: Central Developer\n\nHobby: Machining, Pottery, Rafting, Cosplaying, Jogging, Taekwondo, Scouting\n\nIntroduction: My name is Margart Wisoky, I am a gorgeous, shiny, successful, beautiful, adventurous, excited, pleasant person who loves writing and wants to share my knowledge and understanding with you." ]
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/h8AAvMB+NzmkbcAAAAASUVORK5CYII=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91929287,"math_prob":0.63537127,"size":5827,"snap":"2023-40-2023-50","text_gpt3_token_len":1165,"char_repetition_ratio":0.1739653,"word_repetition_ratio":0.028747434,"special_character_ratio":0.2085121,"punctuation_ratio":0.1420765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97506416,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T04:07:25Z\",\"WARC-Record-ID\":\"<urn:uuid:f20eb9b9-3226-426b-a026-98cacb686401>\",\"Content-Length\":\"85083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8a336da-d683-446e-befb-dc3b50ef6322>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9fc587c-6cec-4cea-a355-2f3cd360b877>\",\"WARC-IP-Address\":\"172.67.143.70\",\"WARC-Target-URI\":\"https://investguiding.com/articles/what-is-a-test-of-sufficiency\",\"WARC-Payload-Digest\":\"sha1:F2ZQP55TPCPH5DFZIRD23PUMYQVKH5OQ\",\"WARC-Block-Digest\":\"sha1:UYBYOKYKNPW57SDDJCISP5TFQXCCTNMI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506676.95_warc_CC-MAIN-20230925015430-20230925045430-00513.warc.gz\"}"}
https://www.scirp.org/journal/paperinformation.aspx?paperid=85633
[ "Aerodynamic Performance and Vibration Analyses of Small Scale Horizontal Axis Wind Turbine with Various Number of Blades\n\nAbstract\n\nThe need to generate power from renewable sources to reduce demand for fossil fuels and the damage of their resulting carbon dioxide emissions is now well understood. Wind is among the most popular and fastest growing sources of alternative energy in the world. It is an inexhaustible, indigenous resource, pollution-free, and available almost any time of the day, especially in coastal regions. As a sustainable energy resource, electrical power generation from the wind is increasingly important in national and international energy policy in response to climate change. Experts predict that, with proper development, wind energy can meet up to 20% of US needs. Horizontal Axis Wind Turbines (HAWTs) are the most popular because of their higher efficiency. The aerodynamic characteristics and vibration of small scale HAWT with various numbers of blade designs have been investigated in this numerical study in order to improve its performance. SolidWorks was used for designing Computer Aided Design (CAD) models, and ANSYS software was used to study the dynamic flow around the turbine. Two, three, and five bladed HAWTs of 87 cm rotor diameter were designed. A HAWT tower of 100 cm long and 6 cm diameter was considered during this study while a shaft of 10.02 cm diameter was chosen. A good choice of airfoils and angle of attack is a key in the designing of a blade of rough surface and maintaining the maximum lift to drag ratio. The S818, S825 and S826 airfoils were used from the root to the tip and 4° critical angle of attack was considered. In this paper, a more appropriate numerical models and an improved method have been adopted in comparable with other models and methods in the literature. The wind flow around the whole wind turbine and static behavior of the HAWT rotor was solved using Moving Reference Frame (MRF) solver. The HAWT rotor results were used to initialize the Sliding Mesh Models (SMM) solver and study the dynamic behavior of HAWT rotor. The pressure and velocity contours on different blades surfaces were analyzed and presented in this work. The pressure and velocity contours around the entire turbine models were also analyzed. The power coefficient was calculated using the Tip Speed Ratio (TSR) and the moment coefficient and the results were compared to the theoretical and other research. The results show that the increase of number of blades from two to three increases the efficiency; however, the power coefficient remains relatively the same or sometimes decreases for five bladed turbine models. HAWT rotors and shaft vibrations were analyzed for two different materials using an applied pressure load imported from ANSYS fluent environment. It has proven that a good choice of material is crucial during the design process.\n\nShare and Cite:\n\nRahman, M. , Maroha, E. , El Shahat, A. , Soloiu, V. and Ilie, M. (2018) Aerodynamic Performance and Vibration Analyses of Small Scale Horizontal Axis Wind Turbine with Various Number of Blades. Journal of Power and Energy Engineering, 6, 76-105. doi: 10.4236/jpee.2018.66006.\n\n1. Introduction\n\nWith the global energy demand rising to unprecedented numbers, there is an obvious need to generate power from renewable sources to reduce the greenhouse gas emissions and achieve the U.S department of energy future goals of wind energy (20% wind energy by 2030, and 35% by 2050) . Renewable energy sources include wind energy, solar energy, tidal energy, geo-thermal energy, and biomass energy. Wind is among the most popular sources of alternative energy because it is pollution free and available almost any time of the day, especially in the coastal regions . Winds are commonly classified by their spatial scale, their speed, the type of forces that cause them, the regions in which they occur, and their effect. Winds have various aspects, an important one being its velocity; other aspects are the density of the gas involved and the energy content or wind energy .\n\nThe two primary types of wind turbine are horizontal axis wind turbine (HAWT) and vertical axis wind turbine (VAWT) . Horizontal axis means the rotating axis of the wind turbine is horizontal, or parallel to the ground. In the vertical axis wind turbine, the rotating axis of the wind turbine stands vertical or perpendicular to the ground. HAWTs are the most popular configuration because they have higher efficiency . In a HAWT the generator converts directly the wind which is extracted from the rotor. HAWT are conventional wind turbines and unlikely the VAWT are not omnidirectional . They must have some means for orienting the rotor with respect to the wind.\n\nLeithead et al. studied dynamics of variable speed wind turbines and design of models to control wind turbines. The purpose of this study was to investigate the dynamics of variable speed wind turbines and determine suitable models to support the control design task. Murtagh et al. investigated control wind turbine vibration by incorporating a passive control device. A passive control method using a tuned mass damper to mitigate vibrations of the blades and tower of a wind turbine was introduced. Ashwani K. et al. have done structural and Modal analysis of Al 2024 wind turbine blade designed based on FEA . Their analysis results show that the maximum deformation occurs at tip and the stresses are very less for Al lightweight materials.\n\nDuque et al. explored the ability of various methods to predict wind turbine power and aerodynamic loads. Results showed that all the methods, namely blade element momentum (BEM), vortex lattice, and Reynolds averaged Navier Stokes (RANS), perform well for pre-stall regimes. They found that the RANS code OVERFLOW, although not perfect, gives better predictions of power production for stall and post stall regime modeling than other methods. The combination of three methods has been used by to test the performance of three HAWT blade shapes. They concluded that the wind tunnel test results show that the OPT and the UOT blades obtain the same maximum power coefficient (Cp = 0.428) but at different tip speed ratio points. The UUT blade obtains the lowest Cp value because it almost always operates in stall conditions.\n\nD. S. Li et al. performed the field experiment and numerical simulation methods to test the pressure distribution on a 33 KW HAWT. They noticed that the closer to the root of blade the more consistent to the experimental results of the blade pressure. Ece Sagol et al. performed the icing simulation method on wind turbine blades to the data representing the wind flow field. This method consists of creating a wind turbine model using Computer Aided Design (CAD) software followed by using ANSYS-FLUENT software to simulate the model. C. J. Bai et al. designed a horizontal axis wind blade of 10,000 power output using the BEM and the modified stall model. The simulation results are compared with the improved BEM theory at rated wind speed of 10 m/s and show that the CFD is a good method on aerodynamic investigation of a HAWT blade. The CFD method has been used by to compute the aerodynamics of HAWT and a VAWT designs for low-cost rural applications. This method has been done in three different phases. In the first phase, a preliminary steady flow field is generated by rotating the turbine at a fixed rotation in the presence of the corresponding free stream wind speed.\n\nGoals and Objectives of the Research\n\nThe goal of this numerical study is to analyze the designing of HAWT blade models to optimize wind turbine performance and vibrations in considering maximization of the power produced by a wind turbine and reduction of vibrations of the turbine’s drive train and shaft. To achieve this goal, the following objectives were set for the present study:\n\n1) Design a HAWT’s blade and entire turbine mechanical system using SolidWorks CAD software;\n\n2) Design two, three, and five bladed HAWT rotor models with high lift force and low drag to minimize the blade’s turbulence and vibration;\n\n3) Analyze the rotor moment coefficients and power coefficients using numerical and theoretical method;\n\n4) Compare the numerical results to the theoretical and other research;\n\n5) Investigate the HAWT system vibration using ANSYS Fluent, Structural, and Modal environment.\n\n2. Turbine Modeling and Simulation\n\nTwo, three, and five bladed HAWT rotors have been designed and numerically tested using SolidWorks and ANSYS FLUENT software. The aerodynamic performance results from ANSYS FLUENT simulations are presented in this study and com-pared with the theoretical results and other research.\n\nModal analysis is used to determine a structure’s vibration characteristics including its natural frequency, mode shapes and mode participation factors (how much a mode participates in a given direction). Modal analysis is the most fundamental of the dynamic analysis and is used as the starting point for other dynamic analysis in ANSYS as it helps in calculating solution controls including step time. It allows the design to avoid resonant vibrations and gives an idea of how the model will respond to different types of dynamic loadings. Different materials are used to try to reduce the amount of vibration on the shaft and to find the optimum material and geometry that will cause the least amount of vibration.\n\nIn this research, entire HAWT and the turbine shaft are analyzed in ANSYS using modal analysis. HAWT and shaft’s natural frequencies have been investigated for structural steel and aluminum alloy material and the results were compared to other research. The shaft natural frequencies at different mode shapes were calculated theoretically and the results were compared with the numerical results.\n\nFEA study of two, three, and five HAWT rotors has also been done during this work to investigate the HAWT blades deflection frequencies and mode shapes. The procedure of importing the pressure distributions of the HAWT blade surfaces from ANSYS FLUENT to ANSYS Static structural, and then from ANSYS Static Structural to ANSYS Modal environment per-formed in this case is implemented.\n\n2.1. Blade, Shaft and Entire Turbine Model Design\n\nA shaft of 20 cm long and three sections was then deigned, the shaft sections diameters are 0.5 inch, 0.75 inch, and 1 inch respectively from the tip to the root as shown in Figure 3. Two bearings are attached on the shaft.\n\nThe blade was then assembled with the hub and a circular pattern tool of\n\nFigure 1. Increasing twist angle from the tip to the root.\n\nFigure 2. Solid works blade geometry.\n\nFigure 3. Shaft geometry.\n\nSolidWorks CAD software was applied to create a three bladed HAWT rotor. The HAWT tower, rotor, shaft, nacelle, gear box, and generator were then assembled using Solid-Works CAD software to create the entire HAWT geometry shown in Figure 4. A tower of 100 cm long and 6 cm diameter was used during this study.\n\n2.2. Rotors Fluid Domains and Mesh\n\nThe blade model was imported into ANSYS Design Modeler, and the two, three and five bladed HAWT rotors were created using circular pattern tool of ANSYS Design Modeler as shown in Figure 5.\n\nEach rotor has a diameter of 87 cm and two fluid domains were then added to all geometries; one is a cylindrical fluid domain of 100 cm diameter rotating around z-axis and is known as rotating domain. The second is a box fluid domain of 140 cm long and 60 cm width is known as stationary far-field fluid domain. The different computational domains are presented in Figure 6. The medium mesh of the model and fluid domains was generated and the number of nodes and elements are presented in Table 1. The front view of medium mesh of each HAWT rotor model and fluid domains are shown in Figure 7.\n\n2.3. Entire Fluid Domains and Mesh\n\nThe entire HAWT was imported into ANSYS Design Modeler and the wind flow analysis around the whole wind turbine was performed using CFD technique. The HAWT geometry is shown in Figure 8.\n\nAs mentioned earlier, the first step is to study the wind flow around the whole HAWT, the pressure distributions on the HAWT surfaces will be imported into ANSYS structural environment for farther analyses. The HAWT was enclosed in the far field stationary fluid domain of 140 cm long, 50 cm width, and 200 cm high. The medium mesh of the computational fluid do-main was generated and is shown in Figure 9 as well as the number of nodes and elements.\n\nFigure 4. Entire HAWT geometry.\n\nFigure 5. HAWT rotors geometries in design modeler of ANSYS.\n\nFigure 6. Computational domains.\n\n2.4. Computational Fluid Dynamic (CFD) Method\n\nThe shear stress transport (SST) K-turbulence model was used for each model to allow a use of two equation model to ac-count for history effects like convection and diffusion of turbulent energy, as well as the eddy viscosity model commonly called the turbulent viscosity. The following are the governing equations for the SST K-turbulence models :\n\n・ Turbulence kinetic energy\n\nFigure 7. Mesh of two, three and five bladed HAWT rotors and their fluid domains.\n\nFigure 8. HAWT geometry in design modeler of ANSYS.\n\nFigure 9. Mesh of whole HAWT and its fluid domain.\n\nTable 1. HAWT rotors and fluid domains number of element and nodes.\n\n$\\frac{\\partial k}{\\partial t}+{U}_{j}\\frac{\\partial k}{\\partial {x}_{j}}={P}_{k}-{\\beta }^{\\ast }k\\omega +\\frac{\\partial }{\\partial {x}_{j}}\\left[\\left(v+{\\sigma }_{k}{v}_{T}\\right)\\frac{\\partial k}{\\partial {x}_{j}}\\right]$ (1)\n\n・ Specific dissipation rate\n\n$\\frac{\\partial \\omega }{\\partial t}+{U}_{j}\\frac{\\partial \\omega }{\\partial {x}_{j}}=\\alpha {S}^{2}-\\beta {\\omega }^{2}+\\frac{\\partial }{\\partial {x}_{j}}\\left[\\left(v+{\\sigma }_{\\omega }{v}_{T}\\right)\\frac{\\partial \\omega }{\\partial {x}_{j}}\\right]+2\\left(1-{F}_{1}\\right){\\sigma }_{{\\omega }^{2}}\\frac{1}{\\omega }\\frac{\\partial k}{\\partial {x}_{i}}\\frac{\\partial \\omega }{\\partial {x}_{i}}$ (2)\n\n・ Kinematic eddy viscosity\n\n${v}_{t}=\\frac{{\\alpha }_{1}k}{\\mathrm{max}\\left({\\alpha }_{1}\\omega ,S{F}_{2}\\right)}$ (3)\n\nwhere Pk , F1 , and F2 are production limit, first and second bending functions respectively.\n\nThe rotor computational domain consists of a rotating zone surrounding the blades and a stationary far-field zone. A mesh interface was created between the two zones. The interface was necessary because the nodes on the boundaries of the far-field and rotational zones were intentionally non-conformal. The interface paired these so that interpolation can occur and fluid may pass into the rotating region. For each case, a static simulation with moving reference frame (MRF) and a dynamic sliding mesh model (SMM) were performed. The rotation was first defined using the steady-state solver with MRF, and the simulation was then solved in a transient manner using a sliding mesh motion. The converged static result from the MRF simulation was used to initialize the transient SMM solver. The moment Coefficient (Cm) was monitored over time with accurate reference values for one full rotation. At 477 RPM, one rotation is completed in 0.1258 sec. A time step of 0.001747 sec was chosen so that the Cm was calculated for every 5 degrees of rotation. This resulted in 72 time steps per simulation and 60 iterations per time step. From the Cm data, the power coefficient (Cp) of the wind turbine can be easily calculated. Boundary conditions for the simulations included air velocity inlet, rotational speed of the blades, and pressure outlet. The rotating zone was set to 477 RPM for each simulation. The blade walls inside the rotating domain were given a no slip condition. Constant wind speeds along the z direction were set at the velocity inlet. Five different wind speeds of 4, 6, 8, 10, and 12 m/s were tested for each model. The pressure outlet was kept at constant atmospheric pressure.\n\nThe optimum angle of attack must be maintained during wind turbine operation to maximize the blade efficiency and protect the turbine from vibration due to drag. The lift to drag ratio as function of angle of attack have been investigated during this work using ANSYS FLUENT software and the results are presented in Table 2. It was notice the current blade design has an optimum angle of attack of 4 degree. The plot of lift to drag ratio is shown in Figure 10, which shows that a peak of lift to drag ratio of 47% occurs at 4-degree angle of attack which corresponds to the optimum angle of attack of the blade design. This optimum angle of attack was considered during the HAWT blades simulation process.\n\n2.5. Mathematical Equations for Theoretical Calculation\n\nThe power generated by a cylindrical column of free air moving at a constant speed V is given Equation (4).\n\n$P=\\frac{\\text{d}E}{\\text{d}t}$ (4)\n\nwhere E is the kinetic energy given by:\n\n$E=\\frac{1}{2}m{V}^{2}$ (5)\n\nSubstituting Equation (5) into Equation (4), we get:\n\n$P=\\frac{\\text{d}\\left(\\frac{1}{2}m{V}^{2}\\right)}{\\text{d}t}$ (6)\n\nFor constant wind speed,\n\nTable 2. Lift to drag ratio as function of angle of attack.\n\nFigure 10. Lift to drag ratio vs angle of attack.\n\n$\\frac{\\text{d}V}{\\text{d}t}=0$ (7)\n\nConsequently, Equation (6) becomes\n\n$P=\\frac{1}{2}\\stackrel{˙}{m}{V}^{2}$ (8)\n\nIf the cross-sectional area of the column of air is A , and its density is ρ, the mass flow rate is given by:\n\n$\\stackrel{˙}{m}=\\rho AV$ (9)\n\nBy substituting Equation (9) into Equation (8), we get:\n\n$P=\\frac{1}{2}\\rho A{V}^{3}$ (10)\n\nIf the diameter of the column of air is D, then\n\n$P=\\frac{1}{2}\\rho \\frac{\\text{π}{D}^{2}}{4}{V}^{3}$ (11)\n\nThis is the total power available in the wind source.\n\nOn the other hand, the power extracted by the wind turbine is given by:\n\n${P}_{T}=T\\omega$ (12)\n\nThe wind turbine power coefficient\n\n${C}_{p}=\\frac{T\\omega }{\\frac{1}{2}\\rho A{V}^{3}}=\\lambda {C}_{m}$ (13)\n\nwhere λ is the tip speed ratio given by\n\n$\\lambda =\\frac{\\omega D}{2V}$ (14)\n\nAnd ${C}_{m}$ is the moment coefficient given by\n\n${C}_{m}=\\frac{T}{\\frac{1}{2}\\rho AD{V}^{2}}$ (15)\n\nEquation (13) will be used to analyze and validate the wind turbine efficiency.\n\nAccording to Betz , the theoretical maximum power efficiency of any design of wind turbine is 0.59. This is called Betz limit and wind turbines cannot operate at this maximum limit. The common real values of power coefficient for large scale wind turbines are 0.30 - 0.45, however, small scale wind turbines are less efficient, and their power coefficients can be below this range.\n\nWhen a shaft rotates, it may well go into transverse oscillations and the out of balance forces resulting in centrifugal force will induce the shaft to vibrate.\n\nThe centrifugal force is given by the following equation \n\n${F}_{c}=M{\\omega }^{2}\\left(r+e\\right)$ (16)\n\nwhere M is the mass, r is the shaft deflection, $\\left(r+e\\right)$ is the distance from the center of gravity, and $\\omega$ is the shaft angular velocity.\n\nThe deflection force is given by\n\n$F={K}_{t}r$ (17)\n\nwhere ${K}_{t}=M{\\omega }_{n}^{2}$ is the transverse stiffness\n\nEquating Equation (16) and Equation (17):\n\n${K}_{t}r=M{\\omega }^{2}\\left(r+e\\right)$ (18)\n\nAfter some algebraic manipulation, we get\n\n$r=\\frac{e}{\\left(\\frac{{\\omega }_{n}}{\\omega }\\right)-1}$ (19)\n\nwhere ${\\omega }_{n}$ is the shaft natural frequency.\n\nIt is noticed that when the shaft rotates at an angular speed equal to the natural frequency of the transverse oscillations, its vibration becomes large and shows up as a whirling of the shaft.\n\nNatural frequency is the number of times a system will oscillate (move back and forth) between its original position and its displaced position, if there is no outside interference. Wind turbine shaft can be regarded as simply supported where the ends are free to rotate normal to the axis, and the frequency of this kind of shaft is given by :\n\n$f=\\frac{\\text{π}}{2}{n}^{2}\\sqrt{\\frac{gEI}{WL}}$ (20)\n\nwhere n is the mode, g is the constant of gravity, E is the Young modulus, I is the moment of inertia, W is the shaft weight, and L is the shaft length.\n\nIn the absence of damping, the dynamic character of the blade model can be expressed in matrix form as :\n\n$KV={\\omega }^{2}MV$ (21)\n\nHere K is the stiffness matrix, M is the mass matrix, $\\omega$ is the angular frequency of vibration for a given mode and V is the mode vector that expresses the corresponding mode shape.\n\nThe resolution of this equation is not straight forward due to the complexity of the blade shape and incapability of finding the exact value of K.\n\n2.6. Finite Element Modal Analysis of HAWT Rotors and Shaft\n\nThe imported pressure distributions for the HAWT rotors are presented in the Table 3. It can be noticed that the rotor’s maxi-mum pressure distributions are located to the blade’s tip as expected.\n\nThe HAWT rotors included the imported pressure as applied load, a remote displacement at the edges of the blade roots as support, and a rotational velocity of 50 rad/sec. It is assumed that blades are connected to the HAWT hub by applying the remote displacement as support.\n\nThe imported moment from ANSYS Fluent to ANSYS structural environment\n\nTable 3. Imported pressure distributions.\n\nwas monitored and used as an applied load during the shaft vibration investigation. The boundary conditions for the shaft included two fix supports applied at the top face of each bearing and a moment of 108.64 N・m corresponding to moment imported from ANSYS Fluent. The modal analysis of HAWT rotors and shaft has been performed using these boundary conditions.\n\n3. Results and Discussion\n\n3.1. Aerodynamic Results of Entire HAWT\n\nThe velocity and pressure contours around the entire HAWT are shown in Figure 11(a) and Figure 11(b) respectively. The maximum velocity and pressure\n\nFigure 11. (a) Velocity contour around the entire HAWT surfaces. (b) Pressure contour around the entire HAWT surfaces.\n\ncontours are notice on the blades and tower front side.\n\n3.2. Aerodynamic Results of HAWT Rotors\n\n3.3. Moment and Power Coefficients Results\n\nThe moment coefficient variation with time of three different models at 4 m/s, 6 m/s, 8 m/s, 10 m/s, and 12 m/s inlet wind velocities are shown in Figure 15. The power coefficient can be calculated by multiplying the average moment coefficient with the tip speed ratio. It can be seen from these figures that the three bladed HAWT rotor enjoys the highest average moment coefficient followed by five bladed HAWT rotor.\n\nThe moment coefficients of two, three, and five bladed HAWT rotors at five different inlet wind speeds have been compared and are shown in Figures 16-18 respectively. It can be noticed that the 8 m/s inlet wind speed produced the highest average moment coefficient for all three models. This inlet wind speed can be considered as an optimum wind speed for the models.\n\nFigure 12. Pressure contour around the blade model surface.\n\nTable 4 presents the power and the moment coefficients for three different HAWT models.\n\nIt is noticed that the highest moment and power coefficients values of 0.1527231and 0.43192 respectively are attained by the three bladed HAWT rotor model at 8 m/s inlet wind velocity. All power coefficients result for three blades HAWT are higher than those of two and five bladed HAWT rotors at the same tip speed ratio. The two-bladed HAWT rotor has the lowest moment and power coefficient of 0.034075711 and 0.1927 at 4 m/s inlet velocity respectively. It is noticed that a large power coefficient increase with the increase of number of blades from two to three, whereas the power coefficient increase with the increase of blade number from three to five is relatively low. All power coefficient results for two blades HAWT are lower than those of three and five bladed HAWT rotors at the same tip speed ratio. The power coefficients as function of tip speed ratio for three HAWT rotor models have been plotted. The results are presented in Figure 19 where they are compared with those of Fei-Bin Hsiao et al. . It is noticed that the three-bladed HAWT rotor attained the highest power coefficients of 0.43192 at 2.8281 tip speed ratio (TSR). It can also be noticed that a high increase of power coefficient with the increase of number of blades from two to three, but the increase of power coefficient with the increase\n\nFigure 13. Velocity contour around the blade model surface.\n\nof blades from three to five is relatively low as mentioned earlier. Fei-Bin Hsiao et al. highest power coefficient of 0.428 was attained at 4.92 tip speed ratio. This comparison shows that the current HAWT design has the higher power coefficient than the previous results.\n\n3.4. HAWT Modal Analysis Results\n\nThe entire HAWT and shaft deflection simulations results are shown in Figure 20. The maximum deflection occurred on the HAWT blade and in the shaft center.\n\nFigure 14. Pressure distribution on the HAWT blade models surface.\n\n(a) (b) (c) (d) (e)\n\nFigure 15. Variation of Moment coefficient with time at 4 m/s, 6 m/s, 8 m/s, 10 m/s and 12 m/s inlet wind velocities. (a) For 4 m/s, (b) For 6 m/s. (c) For 8 m/s. (d) For 10 m/s. (e) For 12 m/s.\n\nThe two, three, and five HAWT rotors deflection simulations results are shown in Figures 21-23 respectively. The maximum deflection was notice on the blade tip while the minimum was notice near the root. Also, Table 5 shows the HAWT and shaft natural frequency at different mode shapes.\n\nTable 6 shows the two, three, and five bladed rotors natural frequency at different mode shapes. Aluminum alloy and homogenized composite material used in real wind turbine blade were used.\n\nThe density and orthopedic properties of GE 1.5 XLE wind turbine were used\n\nFigure 16. Moment coefficients variation of two bladed HAWT rotor.\n\nFigure 17. Moment coefficient for three-bladed HAWT model.\n\nFigure 18. Moment coefficient for five-bladed HAWT model.\n\nFigure 19. Fei-Bin Hsiao et al. (2013) and current power coefficients comparison.\n\nFigure 20. Entire HAWT and shaft deflection.\n\nFigure 21. Two-bladed HAWT rotor deflection.\n\nFigure 22. Three-bladed HAWT rotor deflection.\n\nFigure 23. Five-bladed HAWT rotor deflection.\n\nin this research. An increase of natural frequency is observed if a composite material is used instead of aluminum alloy material.\n\nDetail figures on natural frequencies and mode shapes of HAWT, Shaft and blades are shown in Appendix A.\n\nThe computational domain consists of a rotating zone surrounding the blades and a stationary far-field zone. A mesh interface is created between the two zones. The interface is necessary because the nodes on the boundaries of the far-field and rotational zones are intentionally non-conformal. The interface pairs these so that interpolation can occur, and fluid may pass into and out of the rotating region. For each case, a static simulation with moving reference frame (MRF) and a dynamic sliding mesh model (SMM) are completed. The rotation is first defined using the steady-state solver with MRF, and the simulation is then solved in a transient manner using a sliding mesh motion. The converged\n\nTable 4. Power and moment coefficients for three different HAWT models.\n\nTable 5. HAWT and shaft natural frequency at different mode shapes.\n\nTable 6. HAWT rotors natural frequency at different mode shapes.\n\nstatic result from the MRF simulation is used to initialize the transient SMM solver. Example graphs of the residuals for MRF and SMM simulations can be seen in Figure 24. Convergence criteria are kept consistent throughout the study requiring all 5 residuals to decrease to a value of 1e−03.\n\nFigure 24. MRF residuals converged after 225 iterations and SMM residuals.\n\nFor the transient solver, coefficients of moment (Cm) are monitored over time with accurate reference values. Time step size is dependent on the RPM value for each case. Time steps are calculated to account for every 10 degrees of model rotation. For 2 full rotations, 72 time steps per simulation are run with 20 iterations per time step.\n\nBoundary conditions for the simulations are taken from experimental data. These include air velocity inlet speed and corresponding rotational speed of the blades. The pressure outlet is kept at constant atmospheric pressure. The blade walls are given a no slip condition and zero rotational velocity relative to the sliding mesh zone (equal to the rotating fluid domain).\n\nThe realizable k-epsilon model is used with the SIMPLE segregated algorithm. The SIMPLE algorithm uses a relationship between velocity and pressure corrections to enforce mass conservation and to obtain the pressure field (ANSYS Fluent Theory Guide, 2012). For improved accuracy, the double precision option is selected as well as second order upwind based discretization for mean flow, turbulence, and transition equations.\n\n4. Conclusions\n\nThe efficiency of small scale HAWT with various numbers of blades has been investigated in this study and this efficiency is explained by the fact that putting more than three blades on the HAWT rotor will be the waste of material and money while reducing the HAWT performance. An increase of HAWT performance was observed if three bladed HAWT was used instead of two bladed HAWT and the change of performance was relatively low or the same for the five bladed HAWT.\n\nThe HAWT vibration is a big challenge on the HAWT performance and the choice of material to use is crucial during the de-sign process. Homogenized composite material has proven the highest performance due to its resistance to wind load. An average natural frequency increase of 41.7%, 31%, and 42.4% was observed for two, three, and five HAWT rotors if the homogenized composite material is used in wind turbine blade instead of aluminum alloy. Homogenization of material can be suggested during the blade design to increase its stiffness and contribute in its wind air resistance and performance.\n\nAluminum alloy has proven better performance than structural steel material on the HAWT shaft vibration; however, the difference in natural frequency is relatively low. The calculated average natural frequency of 0.3% was observed while the computed value of 0.2% was attained if aluminum alloy material is used instead of structural steel. This can be explained by the highest aluminum young modulus to density ratio compare to that of structural steel material. This quantity was used during the calculation of natural frequencies at different mode shapes. In the paper, the blade was created by modifying the large-scale GE 1.5XLE turbine to enhance the performance of the turbine which is applicable for the real size. But, we made it by this way to suit our laboratory capabilities and equipment. The conclusion obtained in this paper also useful for large scale HAWTs as well. The size of the computational domains is large enough to get these accurate results. The model is not created in the real size to fit our wind tunnel for testing purposes. This work is useful for large scale HAWTs too but within scale. Moreover, this is ingoing work for detailed vibration analysis with the detailed parameters of the blade (chords and twists distributions) and material properties of different parts of the turbine to be published in the next paper.\n\nAcknowledgements\n\nAuthors would express the heartiest deep sense of gratitude to Mechanical Engineering Department of Georgia Southern University. Authors acknowledge NSF-RET: ENgaging Educators in Renewable EnerGY (ENERGY), Award #1609524.\n\nNomenclature\n\nSymbol Explanation\n\nA Rotor Area (m2)\n\nD Rotor diameter (m)\n\nV Wind velocity (m/s)\n\nρ Air density (kg/m3)\n\nRe Reynolds Number\n\nT Torque (Nm)\n\nP Power (W)\n\nCp Power coefficient\n\nN Revolutions per minute\n\nTSR(λ) Tip speed ratio\n\nCm Moment coefficient\n\nNatural Frequencies and Mode Shapes\n\nA1. HAWT and Shaft Natural Frequencies and Mode Shapes\n\nA2. Two Blade HAWT Natural Frequencies\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest.", null, "", null, "", null, "", null, "", null, "[email protected]", null, "+86 18163351462(WhatsApp)", null, "1655362766", null, "", null, "Paper Publishing WeChat", null, "" ]
[ null, "https://www.scirp.org/images/Twitter.svg", null, "https://www.scirp.org/images/fb.svg", null, "https://www.scirp.org/images/in.svg", null, "https://www.scirp.org/images/weibo.svg", null, "https://www.scirp.org/images/emailsrp.png", null, "https://www.scirp.org/images/whatsapplogo.jpg", null, "https://www.scirp.org/Images/qq25.jpg", null, "https://www.scirp.org/images/weixinlogo.jpg", null, "https://www.scirp.org/images/weixinsrp120.jpg", null, "https://www.scirp.org/Images/ccby.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.925908,"math_prob":0.95963895,"size":28843,"snap":"2021-43-2021-49","text_gpt3_token_len":6204,"char_repetition_ratio":0.15368079,"word_repetition_ratio":0.075609244,"special_character_ratio":0.20625456,"punctuation_ratio":0.08655824,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.990483,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T01:48:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c71a6623-d564-44d7-bfa6-cf4a6b7ce82e>\",\"Content-Length\":\"159025\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f58558fb-dfe6-4145-a785-edbe23e514e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b00b33e-5f7b-4b2c-9827-37f2c0c81f44>\",\"WARC-IP-Address\":\"144.126.144.39\",\"WARC-Target-URI\":\"https://www.scirp.org/journal/paperinformation.aspx?paperid=85633\",\"WARC-Payload-Digest\":\"sha1:4K7B44JX66UNAEDBJLAOLSR25TSAW3RX\",\"WARC-Block-Digest\":\"sha1:RUCUG56GTJLQOM5LU2RDJSSYFSRC3UZ6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363641.20_warc_CC-MAIN-20211209000407-20211209030407-00107.warc.gz\"}"}
https://revfin.org/which-factors/
[ "# Which Factors?\n\nFebruary 5, 2019\n\n(This post is thanks to Editor Amit Goyal)\n\nThe lead article in Volume 23, Issue 1 of the Review of Finance is “Which Factors?” by Kewei Hou, Haitao Mo, Chen Xue, and Lu Zhang.\n\nA factor model proposes why different stocks have different returns. The most famous is the Capital Asset Pricing Model, which argues that assets have higher returns if they are more sensitive to movements in the broader market, because that makes them riskier. Kewei, Haitao, Chen, and Lu’s paper compares the power of various factor models in explaining asset returns. The authors find that the best-performing models are q-factor models. These models start from the basic finance principle that companies should invest more when their cost of capital is lower – so you can back out a firm’s cost of capital (expected returns) from its investment decisions.\n\nFactor Models\n\nIn 1993, Fama and French introduced their celebrated 3-factor model, which proposed that stock returns depend on their sensitivity to not only the market (as in the CAPM) but two additional factors based on size and valuation (measured by the book-to-market ratio). More than 25 years later, there is a renewed resurgence in different factor models that can explain the numerous anomalies in cross-sectional asset pricing (see Harvey, Liu, and Zhu (2016) for a recent review of these anomalies). Some of the most prominent examples are:\n\n Model Authors #Factors Factors q Hou, Xue, and Zhang (2015) 4 Market (MKT), size (Me), investment (I/A), and profitability (Roe) q5 Hou, Mo, Xue, and Zhang (2018) 5 Market (MKT), size (Me), investment (I/A), profitability (Roe), and expected growth (Eg) FF5 Fama and French (2015) 5 Market (MKT), size (SMB), value (HML), investment (CMA), and profitability (RMW) FF6 Fama and French (2018) 6 Market (MKT), size (SMB), value (HML), investment (CMA), profitability (RMW) or cash-based profitability (RMWc), and momentum (UMD) SY Stambaugh and Yuan (2017) 4 Market (MKT), size (SMB), management (MGMT), and performance (PERF) BS Barillas and Shanken (2018) 6 Market (MKT), size (SMB), value (HMLm), investment (I/A), profitability (Roe), and momentum (UMD) DHS Daniel, Hirshleifer, and Sun (2018) 3 Market (MKT), financing (FIN), and post-earnings-announcement-drift (PEAD)\n\nThese studies differ not only in terms of which factors they include, but also how the factors are constructed. For instance:\n\n• Fama and French construct the value factor (HML) using annually updated variables while Barillas and Shanken use monthly updating (HMLm).\n• The profitability factor is constructed using different variables in Fama and French models (RMW) versus that in q-models (Roe).\n• Some studies use 30/70 cutoffs (e.g. when the size factor is calculated, large firms are defined as being above the 70th size percentile and small firms as being below the 30th) while others use 20/80 cutoffs, etc.\n\nChoosing Between Models\n\nHow do you demonstrate the effectiveness of your favorite factor model? One way is to show that it explains the cross-section of returns. This boils down to calculating alphas from time-series regressions on a set of test assets/portfolios – the so-called left-hand-side (LHS) approach to judge models. (The alpha is the intercept of the regression, and measures extra return an asset earns over and above what the factor model would predict, and thus measures the model’s error). While most studies indeed use this approach, the results are sometimes hard to compare across different factor models because different studies use different test portfolios. It is also well-known (see, for example, Lewellen, Nagel, and Shanken (2010)) that the choice of test assets is not innocuous.\n\nRecent work by Barillas and Shanken (2017, 2018) offers a simple and elegant way out of these difficulties. These authors propose a right-hand-side (RHS) approach that uses spanning regressions to judge whether individual factors add explanatory power. Loosely speaking, you regress a factor from a candidate model on all the factors in another benchmark model. If the intercept is non-zero, the candidate factor/model is useful – it adds to the explanatory power of the benchmark model.  If the intercept is zero, the candidate factor provides no incremental information (see Gibbons, Ross, and Shanken (1985) for the original illustration of the idea). Thus, this RHS approach side steps the thorny issue of test assets as it involves calculating only the alphas of the factors from one model on the factors from another model, and vice-versa. This is the primary approach adopted in the current paper.\n\nThe Results\n\nThe authors find that the q– and q5-models largely subsume the Fama and French (FF) models. From January 1967 to December 2016, the alphas of the value, investment, profitability, and momentum factors (HML, CMA, RMW, and UMD) in the FF-models relative to the q-model are economically small (maximum is 0.12% per month) and statistically insignificant, although the cash-based profitability factor (RMWc) from FF6 has a q-model alpha of 0.25% (t = 3.83). In contrast, the investment and profitability factors (I/A and Roe) have economically large alphas (maximum is 0.80% per month) when regressed on the FF-model, which are strongly significant. Thus, the q-model has significant explanatory power over relative to the FF-models.\n\nThe Stambaugh and Yuan (SY) factors and the Daniel, Hirshleifer, and Sun (DHS) factors both have significant alphas relative to the q-model. Thus, they add incremental value to the q-model. At the same time, the q-model factors also have alphas relative to the SY model, and the investment (I/A) factor in the q-model has alpha relative to the DHS model. Finally, the monthly-formed HMLm factor in the Barillas and Shanken (BS) model also has alpha relative to the q– model.\n\nThe authors also reconstruct some of the factors from other studies in a more consistent way for example, by using NYSE breakpoints and the 30th/70th percentiles. They find that the performance of these reconstructed factors is different from that of the original factors. For example, the reconstructed performance factor (PERF) in the Stambaugh and Yuan model does not have an alpha relative to the q-model. Interestingly, the authors report that the reconstructed factors have high correlations with the factors in the q-model.\n\nIdeas for Future Research\n\nThe authors’ results are very thought-provoking. They have done a very nice job of putting the factors in different models on the same pedestal before comparing them. The idea of recreating some of the factors to have a consistent method of construction is also a good one. Future work could try to address other unresolved issues, some of which I mention below.\n\nAll methods of factor construction are ad-hoc. For instance, there is nothing magical about a 30-70 or a 20-80 split—one is not better than the other. Stambaugh and Yuan explicitly recognize this possibility and mention their motivation for choosing the 20-80 split. Similarly, annual/quarterly/monthly updating of variables in the factors could be driven by data considerations (I have not seen a reconstruction of the Fama and French factors using quarterly accounting data; it remains an open question how these reconstructed factors would fare against the quarterly-formed factors in the q-model). The authors’ work is useful in highlighting the importance of these choices. Nevertheless, one still has little guidance on how to construct better factors. Some of these decisions will be governed by the objective functions while others may rely on statistical criterion (Grinblatt and Saxena (2017) is an important contribution in this area).\n\nSpanning tests essentially compare Sharpe ratios from one set of factors to another set of factors. As mentioned by Barillas and Shanken (2017), comparing nested models using these metrics is straightforward but comparing non-nested models is not. An additional complication is that a simple comparison of Sharpe ratios (or GRS statistics) ignores their sampling variation. Fama and French (2018) use simulations to get a sense of the sampling variation but more formal work would be welcome.\n\nFinally, while the RHS approach is elegant and useful, many readers remain interested in understanding how factors explain LHS returns. For example, practitioners would like to know which alphas could be generated. In this sense, it is still useful to know how the factors explain anomalies. Therefore, extensions of Fama and French (2016, 2017) and Hou, Xue, and Zhang (2015, 2018) would be welcome." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92192465,"math_prob":0.83364564,"size":8593,"snap":"2021-31-2021-39","text_gpt3_token_len":1956,"char_repetition_ratio":0.13214576,"word_repetition_ratio":0.020044543,"special_character_ratio":0.22460142,"punctuation_ratio":0.10482846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95514756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T22:22:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ac7e5c78-849f-4d1b-a7ea-3ff28282cd25>\",\"Content-Length\":\"33389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d991562-3631-46c6-97c6-b5c404b5b6f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f448909-2220-412c-ade4-a4c7d042f713>\",\"WARC-IP-Address\":\"160.153.128.5\",\"WARC-Target-URI\":\"https://revfin.org/which-factors/\",\"WARC-Payload-Digest\":\"sha1:NCBRJJI74L435SOVLUZQUJR2T2HN45MM\",\"WARC-Block-Digest\":\"sha1:RPQTKC3ZGFYRIAZR5MXVFTR6X4QJWBQR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154277.15_warc_CC-MAIN-20210801221329-20210802011329-00508.warc.gz\"}"}
https://numbermatics.com/n/963/
[ "# 963\n\n## 963 is an odd composite number composed of two prime numbers multiplied together.\n\nWhat does the number 963 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 6 divisors.\n\n963 is an odd composite number. It is composed of two distinct prime numbers multiplied together. It has a total of six divisors.\n\n## Prime factorization of 963:\n\n### 32 × 107\n\n(3 × 3 × 107)\n\nSee below for interesting mathematical facts about the number 963 from the Numbermatics database.\n\n### Names of 963\n\n• Cardinal: 963 can be written as Nine hundred sixty-three.\n\n### Scientific notation\n\n• Scientific notation: 9.63 × 102\n\n### Factors of 963\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 3\n• Sum of prime factors: 110\n\n### Divisors of 963\n\n• Number of divisors d(n): 6\n• Complete list of divisors:\n• Sum of all divisors σ(n): 1404\n• Sum of proper divisors (its aliquot sum) s(n): 441\n• 963 is a deficient number, because the sum of its proper divisors (441) is less than itself. Its deficiency is 522\n\n### Bases of 963\n\n• Binary: 11110000112\n• Base-36: QR\n\n### Squares and roots of 963\n\n• 963 squared (9632) is 927369\n• 963 cubed (9633) is 893056347\n• The square root of 963 is 31.0322412983\n• The cube root of 963 is 9.8751134955\n\n### Scales and comparisons\n\nHow big is 963?\n• 963 seconds is equal to 16 minutes, 3 seconds.\n• To count from 1 to 963 would take you about eight minutes.\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 963 cubic inches would be around 0.8 feet tall.\n\n### Recreational maths with 963\n\n• 963 backwards is 369\n• The number of decimal digits it has is: 3\n• The sum of 963's digits is 18\n• More coming soon!\n\nMLA style:\n\"Number 963 - Facts about the integer\". Numbermatics.com. 2023. Web. 3 February 2023.\n\nAPA style:\nNumbermatics. (2023). Number 963 - Facts about the integer. Retrieved 3 February 2023, from https://numbermatics.com/n/963/\n\nChicago style:\nNumbermatics. 2023. \"Number 963 - Facts about the integer\". https://numbermatics.com/n/963/\n\nThe information we have on file for 963 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!\n\nKeywords: Divisors of 963, math, Factors of 963, curriculum, school, college, exams, university, Prime factorization of 963, STEM, science, technology, engineering, physics, economics, calculator, nine hundred sixty-three.\n\nOh no. Javascript is switched off in your browser.\nSome bits of this website may not work unless you switch it on." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87218446,"math_prob":0.96221465,"size":2401,"snap":"2022-40-2023-06","text_gpt3_token_len":594,"char_repetition_ratio":0.10805173,"word_repetition_ratio":0.032994922,"special_character_ratio":0.27197,"punctuation_ratio":0.16352202,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98710215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T14:02:45Z\",\"WARC-Record-ID\":\"<urn:uuid:614fcf3e-f6ca-4357-b22e-68458be58352>\",\"Content-Length\":\"17045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81b5796f-88f5-49c7-8467-8dcd3d688931>\",\"WARC-Concurrent-To\":\"<urn:uuid:73373779-ee0d-42dc-86cc-937d7734d332>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/963/\",\"WARC-Payload-Digest\":\"sha1:EKSYVXABJDB2QU2J2SFMGBC6HS73I5P5\",\"WARC-Block-Digest\":\"sha1:ATLGACEUQ5IWYLVIWBPGX6S6YYHM34KH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500056.55_warc_CC-MAIN-20230203122526-20230203152526-00756.warc.gz\"}"}
https://rdrr.io/cran/Anthropometry/man/getDistMatrix.html
[ "# getDistMatrix: Dissimilarity matrix between individuals and prototypes In Anthropometry: Statistical Methods for Anthropometric Data\n\n## Description\n\nIn the definition of a sizing system, a distance function allows us to represent mathematically the idea of garment fit and it is a key element to quantify the misfit between an individual and the prototype.\n\nThis function computes the dissimilarity defined in McCulloch et al. (1998), which is used in `trimowa` and `hipamAnthropom`. For more details, see also Ibanez et al. (2012) and Vinue et al. (2014).\n\n## Usage\n\n `1` ```getDistMatrix(data,np,nv,w,bl,bh,al,ah,verbose) ```\n\n## Arguments\n\n `data` Data vector. `np` Number of observations in the database. `nv` Number of variables in the database. `w` Weights for the OWA operator computed by means of `weightsMixtureUB`. `bl,bh,al,ah` Constants required to specify the distance function. `verbose` Boolean variable (TRUE or FALSE) to indicate whether to report information on progress.\n\n## Details\n\nAt the computational level, it is asummed that all the `bh` values are negative, all the `bl` values are positive and all the `al` and `ah` slopes are positive (the sign of `al` is changed within the function when computing the dissimilarities).\n\n## Value\n\nA symmetric `np` x `np` matrix of dissimilarities.\n\n## Note\n\nThis function requires a C code called cast.c. In order to use `getDistMatrix` outside the package, the dynamic-link library is called by means of the sentence `dyn.load(\"cast.so\")` (In Windows, it would be `dyn.load(\"cast.dll\")`).\n\nJuan Domingo\n\n## References\n\nMcCulloch, C., Paal, B., and Ashdown, S., (1998). An optimization approach to apparel sizing, Journal of the Operational Research Society 49, 492–499.\n\nIbanez, M. V., Vinue, G., Alemany, S., Simo, A., Epifanio, I., Domingo, J., and Ayala, G., (2012). Apparel sizing using trimmed PAM and OWA operators, Expert Systems with Applications 39, 10512–10520.\n\nVinue, G., Leon, T., Alemany, S., and Ayala, G., (2014). Looking for representative fit models for apparel sizing, Decision Support Systems 57, 22–33.\n\nLeon, T., Zuccarello, P., Ayala, G., de Ves, E., and Domingo, J., (2007), Applying logistic regression to relevance feedback in image retrieval systems, Pattern Recognition 40, 2621–2632.\n\n`trimowa`, `hipamAnthropom`\n ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28``` ```#Data loading: dataTrimowa <- sampleSpanishSurvey bust <- dataTrimowa\\$bust #First bust class: data <- dataTrimowa[(bust >= 74) & (bust < 78), ] numVar <- dim(dataTrimowa) #Weights calculation: orness <- 0.7 weightsTrimowa <- weightsMixtureUB(orness,numVar) #Constants required to specify the distance function: numClust <- 3 bh <- (apply(as.matrix(log(data)),2,range)[2,] - apply(as.matrix(log(data)),2,range)[1,]) / ((numClust-1) * 8) bl <- -3 * bh ah <- c(23,28,20,25,25) al <- 3 * ah #Data processing. num.persons <- dim(data) num.variables <- dim(data) datam <- as.matrix(data) datat <- aperm(datam, c(2,1)) dim(datat) <- c(1,num.persons * num.variables) #Dissimilarity matrix: D <- getDistMatrix(datat, num.persons, numVar, weightsTrimowa, bl, bh, al, ah, FALSE) ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59287655,"math_prob":0.9511117,"size":3089,"snap":"2022-40-2023-06","text_gpt3_token_len":905,"char_repetition_ratio":0.08622366,"word_repetition_ratio":0.004237288,"special_character_ratio":0.30171576,"punctuation_ratio":0.23343374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T22:10:55Z\",\"WARC-Record-ID\":\"<urn:uuid:481cd4d9-da3f-4ba6-827d-e0b9e401f815>\",\"Content-Length\":\"61788\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:378121f9-735d-4f1f-83be-cb36b44bd751>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a8ca4ff-8eb2-4fe6-9901-de671c223e41>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/Anthropometry/man/getDistMatrix.html\",\"WARC-Payload-Digest\":\"sha1:XN676W4L6JEMHVYCBDPPQQ6KG3U6TMKZ\",\"WARC-Block-Digest\":\"sha1:W7SUAMUNNIVYWOSKPOQZAHC4OHJGZ3UI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337668.62_warc_CC-MAIN-20221005203530-20221005233530-00752.warc.gz\"}"}
https://jerrymahun.com/index.php/home/open-access/viii-curves/57-chapter-d-multi-center-horizontal-curves?start=1
[ "### 2. Compound curve\n\n#### a. Nomenclature\n\nStarting with two tangent lines intersecting at a PI, a compound curve is fit between them, Figure D-9.", null, "Figure D-9 Compound curve\n\nFigure D-10 shows major parts of the compound curve.", null, "Figure D-10 Compound curve parts\n\nThe PCC (Point of Curve to Curve) is the EC of the first curve and BC of the second. A tangent line at the PCC intersects the incoming tangent at the PI1 and the outgoing tangent at PI2.\n\nTi is the distance from the BC1 to the PI; To the distance from the PI to the EC2.\n\nThere are seven parts to a compound curve: Ti, To, R1, R2, Δ1, Δ2, and Δ. Because the tangents deflect Δ at the PI and the compound curve transitions from the incoming to the outgoing tangent:\n\n Δ = Δ1+ Δ2 Equation D-1\n\nThat means there are six independent parts. For a unique solution four parts must be fixed, including at least one angle and at least two distances.\n\nBecause they share a point (PCC) and tangent line, once all six parts are determined, the compound curve can be computed as two simple curves, Figure D-11. Individual curve attributes can be computed using the single curve equations in Chapter C.", null, "Figure D-11 Two simple curves\n\nDepending on which parts are initially fixed, there are different ways to compute the remaining parts.\n\nOne way is solving the vertex triangle, Figure D-12, created by the three PIs.", null, "Figure D-12 Vertex triangle\n\nAnother method is to use a closed loop traverse through points O1-BC1-PI-EC2-O2, Figure D-13.", null, "Figure D-13 Traverse method\n\nThe direction of one line is assumed to be in a cardinal direction and the others written in terms of 90° angles and Δs. Distances are Rs and Ts. Latitudes and departures are computed, summed, are set equal to zero to force closure. This results in two equations in two unknowns which can then be simultaneously solved.\n\n#### b. Examples\n\n##### (1) Example 1\n\nA PI is located at station 38+00.00 with a left deflection of 72°00'00\"L. The compound curve begins at sta 33+50.00. The first curve has a 700.00 ft radius and 30°00'00\" central angle.\n\nDetermine the radius and central angle of the second curve and the length of both curves.\n\nSketch:", null, "We'll try a vertex triangle solution. Isolate the triangle and label the tangents:", null, "Since we have the radius and central angle of the first curve, we can compute its tangent:", null, "Compute Δ2 using Equation D-1", null, "Determine the distance from the PI1 to the PI which is a side of the triangle.", null, "The distance between PI1 and PI2 is the sum of the curve tangents. Using the Law of Sines and the known T1, we can compute T2.", null, "Using T2 and Δ2, R2 can be determined.", null, "Finally, compute each curve's length.", null, "##### (2) Example 2\n\nA PI is located at 55+69.23 with a deflection angle between tangents of 85°00'00\"R. The compound curve must begin 463.64 ft before the PI and end 405.60 ft after. Central angles of the two curves are 30°00'00\" and 55°00'00\", respectively.\n\nDetermine the radius of each curve,\n\nSketch:", null, "Because we only have angles of the vertex triangle with no distances (and insufficient given data to compute any), the curve system can't be solved that way. Instead, we will use the traverse method.\n\nTo start, the curve system is rotated to make the initial radial line run North. Then using right angles and Δs, bearings (or azimuths) of the other lines can be determined.\n\nUpdated sketch with bearings:", null, "Compute Latitudes and Departures:\n\n Line Bearing Length Latitude Departure O1-BC1 North R1 R1 0.00 BC1-PI East 463.64 0.00 463.64 PI-EC2 S 5°00'00\"E 405.60 -405.60 x cos(5°00'00\") +405.60 x sin(5°00'00\") EC2-O2 S 85°00'00\"W R2 -R2 x cos(85°00'00\") -R2 x sin(85°00'00\") O2-O1 S 30°00'00\"W R1-R2 -(R1-R2) x cos(30°00'00\") -(R1-R2) x sin(30°00'00\")\n\nSum and reduce the Latitudes:", null, "Sum and reduce the Departures:", null, "We have two equations with unknowns R1 and R2. Solve them simultaneously. We'll use substitution.\n\nSolve the Latitude summation for R1", null, "Substitute the equation for R1 in the Departures summation and solve R2:", null, "Solve R1:", null, "With R1 and R2 computed, we have two geometric attributes for each curve. Using Chapter C. Horizontal Curves equations, their remaining attributes can be determined.\n\n#### c. Stationing\n\nAs with a single curve, there are two paths to the EC from the BC. One goes up and down the original tangents, the other along the curves. Whether a station equation is used at the EC depends on how the entire alignment is stationed. Refer to the Station Equation discussion in Chapter C. Horizontal Curves.\n\nEquations to compute the curves' endpoints are:", null, "Equation D-2", null, "Equation D-3", null, "Equation D-4", null, "Equation D-5\n\nThe stationing for Example (2)  is:", null, "Each curve's table is computed as described in Chapter C. Horizontal Curves.\n\nFollowing are radial chord tables for each curve in Example (2) (computations aren't shown).\n\n Curve 1 Sta Defl ang Chord EC 54+10.64 15º00'00\" 301.58 54+00 14º28'36\" 291.29 53+00 9º33'34\" 193.51 52+00 4º38'32\" 94.31 BC 51+05.59 0º00'00\" 0.00\n\n Curve 2 Sta Defl ang Chord EC 58+12.43 27º30'00\" 386.54 58+00 26º38'57\" 375.47 57+00 19º48'17\" 283.63 56+00 12º57'37\" 187.77 55+00 6º06'57\" 89.109 BC 54+10.64 0º00'00\" 0.00\n\nStakeout procedure\n\nMeasure Ti and To from the PI along the tangents to set the BC1 and EC2.\n\nStake Curve 1\n\nSetup instrument on BC1.\n\nSight PI with 0º00'00\" on the horizontal circle.\n\nStake curve points using the curve table.\n\nStake Curve 2\n\nSetup on PCC\n\nSight BC1 with -(Δ1/2) = -15º00'00\"=345°00'00\" on the horizontal circle, Figure D-14a.\n\nInverse the scope to sight away from the BC1, Figure D-14b.\n\nRotating right to a horizontal angle of 0º00'00\" orients instrument tangent to the curves at the PCC, Figure D14c.\n\nStake curve points using the curve table.\n\nLast point should match the previously set EC2.", null, "a. Sight BC1 with  -(Δ1/2) reading", null, "b. Inverse telescope", null, "c. Rotate to 00°00'00\" Figure D-14 Orienting at PCC" ]
[ null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc22.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc23.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc24.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc26a.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc27a.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_1.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_2.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_4.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_3.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_5.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_6.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_7.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc30_8.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_1.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_2.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_3.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_5.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_4.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_6.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/mc35_7.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC40.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC41.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC42.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC43.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC49.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC50.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC51.png", null, "https://jerrymahun.com/images/open_access/curves/horizontal/multi/MC52.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87324333,"math_prob":0.9800784,"size":5859,"snap":"2021-31-2021-39","text_gpt3_token_len":1715,"char_repetition_ratio":0.14517507,"word_repetition_ratio":0.018518519,"special_character_ratio":0.31524152,"punctuation_ratio":0.11730927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99777853,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T12:58:31Z\",\"WARC-Record-ID\":\"<urn:uuid:bc051b78-a8ee-48d0-9327-e576b78bc64c>\",\"Content-Length\":\"42843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9d05e52-a5c5-42be-b1cf-af732d301e40>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7f256f1-361c-44b4-ac06-2f0a5f649240>\",\"WARC-IP-Address\":\"198.46.82.242\",\"WARC-Target-URI\":\"https://jerrymahun.com/index.php/home/open-access/viii-curves/57-chapter-d-multi-center-horizontal-curves?start=1\",\"WARC-Payload-Digest\":\"sha1:7NSWCP7525WWCRSA5L2XNKEZ4UYXM3SP\",\"WARC-Block-Digest\":\"sha1:7I7HS6Q4Q36RJNWXW44YTKC7XWLXRTOI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153391.5_warc_CC-MAIN-20210727103626-20210727133626-00315.warc.gz\"}"}
https://math.stackexchange.com/questions/3256090/calculate-sum-n-infty-infty-frac1-cosanan2
[ "# Calculate $\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(an)}{(an)^2}$\n\nAfter playing with some series in a numerical math website, it seems to me like the following identity holds:\n\n$$\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(an)}{(an)^2}=\\frac{\\pi}{a}$$\n\nIt seems a little bit surprising to me, and I was wondering if there is an elementary way to see it. Convergence is trivial due to comparison with $$\\frac{1}{n^2}$$, but the specific value is interesting. I would think that some Fourier analysis might be applicable, mainly because $$\\pi$$ appeared here, but couldn't make it work.\n\nP.S: Even a way to see the behavior $$\\sum_{k=-\\infty}^{\\infty}\\frac{1-\\cos(an)}{(an)^2}\\propto \\frac{1}{a}$$ is interesting to me, and presumably simpler.\n\nP.S2: It seems that for large $$a$$ (possibly just $$a>2\\pi$$) the claim is incorrect, see comment. Still, it is interesting to calculate, even if only for $$|a|<2\\pi$$.\n\n• Is $n$ allowed to be $0$? – xbh Jun 9 '19 at 10:50\n• @xbh Thanks for asking. At $n=0$, using Taylor, $\\frac{|1-\\cos(an)|}{(an)^2}=\\frac{1}{2}$ is reasonable. Yet, this shows that my hypothesis isn't true for $a>2\\pi$. Indeed, going back to my numerical website, I see that for large $a$ the answer is different. For instance, for $a=100$, it is $\\frac{\\pi}{500}(155-24\\pi)$. Still interesting to calculate this. It seems like the answer relates to the polylogarithm function. – The way of life Jun 9 '19 at 10:59\n• Well… according to WolframAlpha, the $\\sum_1^\\infty \\cos (an)/(an)^2$ doesn't have some nice form, like this. – xbh Jun 9 '19 at 11:27\n\nDefine\n\n$$f(x)=\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(nx)}{(nx)^2}$$\n\nfor $$x\\in (0,2\\pi]$$. We can differentiate this function to get\n\n$$f'(x)=\\frac{d}{dx}\\left(\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(nx)}{(nx)^2}\\right)=\\sum_{n=-\\infty}^{\\infty}\\frac{d}{dx}\\left(\\frac{1-\\cos(nx)}{(nx)^2}\\right)$$\n\n$$=\\sum_{n=-\\infty}^{\\infty}\\left(\\frac{\\sin (n x)}{n x^2}-\\frac{2 (1-\\cos (n x))}{n^2 x^3}\\right)=\\frac{1}{x^2}\\sum_{n=-\\infty}^{\\infty}\\frac{\\sin(nx)}{n}-\\frac{2}{x}f(x).$$\n\nFrom the answer to this question, we know that for $$x\\in (0,2\\pi)$$\n\n$$\\sum_{n=1}^\\infty \\frac{\\sin(nx)}{n}=\\frac{\\pi-x}{2}.$$\n\nSince $$\\lim_{n\\to 0}{\\frac{\\sin(nx)}{n}}=x$$ and $$\\frac{\\sin(nx)}{n}$$ is even, this implies\n\n$$\\frac{1}{x^2}\\sum_{n=-\\infty}^\\infty \\frac{\\sin(nx)}{n}=\\frac{1}{x^2}\\left(x+2\\sum_{n=1}^\\infty \\frac{\\sin(nx)}{n}\\right)=\\frac{1}{x^2}(x+\\pi-x)=\\frac{\\pi}{x^2}.$$\n\nThis then gives us the ODE\n\n$$f'(x)=\\frac{\\pi}{x^2}-\\frac{2}{x}f(x).$$\n\nSolving this, we get that\n\n$$f(x)=\\frac{\\pi}{x}+\\frac{C}{x^2}$$\n\nfor some constant $$C$$. We can find this by noting that\n\n$$f(\\pi)=\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(nx)}{(nx)^2}=\\frac{1}{2}+\\frac{2}{\\pi^2}\\sum_{n=1}^\\infty \\frac{2}{(2n-1)^2}=\\frac{1}{2}+\\frac{2}{\\pi^2}\\frac{\\pi^2}{4}=1.$$\n\nThen $$C=0$$, and we get $$f(x)=\\frac{\\pi}{x}$$ for $$x\\in (0,2\\pi)$$. To finish the proof, note that\n\n$$f(2\\pi)=\\sum_{n=-\\infty}^{\\infty}\\frac{1-\\cos(n2\\pi)}{(n2\\pi)^2}=...+0+0+\\frac{1}{2}+0+0+...=\\frac{1}{2}$$\n\nas expected.\n\n• Well… the differentiation is suspicious… $f(x)$ does not converge uniformly, since the general term does not converge to $0$ uniformly. – xbh Jun 9 '19 at 14:54\n• True, but it converges uniformly for any set $[\\epsilon,2\\pi]$, so for any $x$ just take $\\epsilon$ to be $x/2$. – QC_QAOA Jun 9 '19 at 14:56\n• Thanks! I forgot this! +1 for this answer. – xbh Jun 9 '19 at 15:02" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9475805,"math_prob":0.99999535,"size":824,"snap":"2020-45-2020-50","text_gpt3_token_len":237,"char_repetition_ratio":0.108536586,"word_repetition_ratio":0.0,"special_character_ratio":0.29004854,"punctuation_ratio":0.11627907,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T07:32:08Z\",\"WARC-Record-ID\":\"<urn:uuid:3dbe9325-5da2-4d6c-a654-9679562b5caa>\",\"Content-Length\":\"154868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d5b58b15-b461-4d0b-89da-fe4654a2b3eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:03c7de36-15dc-48da-a8f8-287582880051>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3256090/calculate-sum-n-infty-infty-frac1-cosanan2\",\"WARC-Payload-Digest\":\"sha1:MT2IJKA6JCCAQP4LAYHFEGXB6YOISO6L\",\"WARC-Block-Digest\":\"sha1:I25YOAX25L7IW2T6EF2YGZRF4RJ3PPY5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890586.57_warc_CC-MAIN-20201026061044-20201026091044-00290.warc.gz\"}"}
https://www.colorhexa.com/00e3b3
[ "# #00e3b3 Color Information\n\nIn a RGB color space, hex #00e3b3 is composed of 0% red, 89% green and 70.2% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 21.1% yellow and 11% black. It has a hue angle of 167.3 degrees, a saturation of 100% and a lightness of 44.5%. #00e3b3 color hex could be obtained by blending #00ffff with #00c767. Closest websafe color is: #00cccc.\n\n• R 0\n• G 89\n• B 70\nRGB color chart\n• C 100\n• M 0\n• Y 21\n• K 11\nCMYK color chart\n\n#00e3b3 color description : Pure (or mostly pure) cyan.\n\n# #00e3b3 Color Conversion\n\nThe hexadecimal color #00e3b3 has RGB values of R:0, G:227, B:179 and CMYK values of C:1, M:0, Y:0.21, K:0.11. Its decimal value is 58291.\n\nHex triplet RGB Decimal 00e3b3 `#00e3b3` 0, 227, 179 `rgb(0,227,179)` 0, 89, 70.2 `rgb(0%,89%,70.2%)` 100, 0, 21, 11 167.3°, 100, 44.5 `hsl(167.3,100%,44.5%)` 167.3°, 100, 89 00cccc `#00cccc`\nCIE-LAB 80.844, -57.003, 10.641 35.603, 58.189, 52 0.244, 0.399, 58.189 80.844, 57.987, 169.426 80.844, -67.315, 24.867 76.282, -50.183, 12.98 00000000, 11100011, 10110011\n\n# Color Schemes with #00e3b3\n\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #e30030\n``#e30030` `rgb(227,0,48)``\nComplementary Color\n• #00e341\n``#00e341` `rgb(0,227,65)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #00a2e3\n``#00a2e3` `rgb(0,162,227)``\nAnalogous Color\n• #e34100\n``#e34100` `rgb(227,65,0)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #e300a2\n``#e300a2` `rgb(227,0,162)``\nSplit Complementary Color\n• #e3b300\n``#e3b300` `rgb(227,179,0)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #b300e3\n``#b300e3` `rgb(179,0,227)``\n• #30e300\n``#30e300` `rgb(48,227,0)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #b300e3\n``#b300e3` `rgb(179,0,227)``\n• #e30030\n``#e30030` `rgb(227,0,48)``\n• #009777\n``#009777` `rgb(0,151,119)``\n• #00b08b\n``#00b08b` `rgb(0,176,139)``\n• #00ca9f\n``#00ca9f` `rgb(0,202,159)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #00fdc7\n``#00fdc7` `rgb(0,253,199)``\n• #17ffce\n``#17ffce` `rgb(23,255,206)``\n• #31ffd3\n``#31ffd3` `rgb(49,255,211)``\nMonochromatic Color\n\n# Alternatives to #00e3b3\n\nBelow, you can see some colors close to #00e3b3. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00e37a\n``#00e37a` `rgb(0,227,122)``\n• #00e38d\n``#00e38d` `rgb(0,227,141)``\n• #00e3a0\n``#00e3a0` `rgb(0,227,160)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #00e3c6\n``#00e3c6` `rgb(0,227,198)``\n• #00e3d9\n``#00e3d9` `rgb(0,227,217)``\n• #00dae3\n``#00dae3` `rgb(0,218,227)``\nSimilar Colors\n\n# #00e3b3 Preview\n\nThis text has a font color of #00e3b3.\n\n``<span style=\"color:#00e3b3;\">Text here</span>``\n#00e3b3 background color\n\nThis paragraph has a background color of #00e3b3.\n\n``<p style=\"background-color:#00e3b3;\">Content here</p>``\n#00e3b3 border color\n\nThis element has a border color of #00e3b3.\n\n``<div style=\"border:1px solid #00e3b3;\">Content here</div>``\nCSS codes\n``.text {color:#00e3b3;}``\n``.background {background-color:#00e3b3;}``\n``.border {border:1px solid #00e3b3;}``\n\n# Shades and Tints of #00e3b3\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000b09 is the darkest color, while #f7fffd is the lightest one.\n\n• #000b09\n``#000b09` `rgb(0,11,9)``\n• #001f18\n``#001f18` `rgb(0,31,24)``\n• #003228\n``#003228` `rgb(0,50,40)``\n• #004637\n``#004637` `rgb(0,70,55)``\n• #005a47\n``#005a47` `rgb(0,90,71)``\n• #006d56\n``#006d56` `rgb(0,109,86)``\n• #008166\n``#008166` `rgb(0,129,102)``\n• #009575\n``#009575` `rgb(0,149,117)``\n• #00a885\n``#00a885` `rgb(0,168,133)``\n• #00bc94\n``#00bc94` `rgb(0,188,148)``\n• #00cfa4\n``#00cfa4` `rgb(0,207,164)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\n• #00f7c2\n``#00f7c2` `rgb(0,247,194)``\n• #0bffcb\n``#0bffcb` `rgb(11,255,203)``\n• #1fffd0\n``#1fffd0` `rgb(31,255,208)``\n• #32ffd4\n``#32ffd4` `rgb(50,255,212)``\n• #46ffd8\n``#46ffd8` `rgb(70,255,216)``\n• #5affdc\n``#5affdc` `rgb(90,255,220)``\n• #6dffe0\n``#6dffe0` `rgb(109,255,224)``\n• #81ffe4\n``#81ffe4` `rgb(129,255,228)``\n• #95ffe8\n``#95ffe8` `rgb(149,255,232)``\n• #a8ffed\n``#a8ffed` `rgb(168,255,237)``\n• #bcfff1\n``#bcfff1` `rgb(188,255,241)``\n• #cffff5\n``#cffff5` `rgb(207,255,245)``\n• #e3fff9\n``#e3fff9` `rgb(227,255,249)``\n• #f7fffd\n``#f7fffd` `rgb(247,255,253)``\nTint Color Variation\n\n# Tones of #00e3b3\n\nA tone is produced by adding gray to any pure hue. In this case, #697a77 is the less saturated color, while #00e3b3 is the most saturated one.\n\n• #697a77\n``#697a77` `rgb(105,122,119)``\n• #60837c\n``#60837c` `rgb(96,131,124)``\n• #578c81\n``#578c81` `rgb(87,140,129)``\n• #4f9486\n``#4f9486` `rgb(79,148,134)``\n• #469d8b\n``#469d8b` `rgb(70,157,139)``\n• #3da690\n``#3da690` `rgb(61,166,144)``\n• #34af95\n``#34af95` `rgb(52,175,149)``\n• #2cb79a\n``#2cb79a` `rgb(44,183,154)``\n• #23c09f\n``#23c09f` `rgb(35,192,159)``\n• #1ac9a4\n``#1ac9a4` `rgb(26,201,164)``\n• #11d2a9\n``#11d2a9` `rgb(17,210,169)``\n• #09daae\n``#09daae` `rgb(9,218,174)``\n• #00e3b3\n``#00e3b3` `rgb(0,227,179)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00e3b3 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5295066,"math_prob":0.83119994,"size":3684,"snap":"2021-31-2021-39","text_gpt3_token_len":1645,"char_repetition_ratio":0.1375,"word_repetition_ratio":0.011049724,"special_character_ratio":0.5461455,"punctuation_ratio":0.2283105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892874,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T11:57:14Z\",\"WARC-Record-ID\":\"<urn:uuid:ef5ae131-a81a-4510-8c60-44deca4dfbd8>\",\"Content-Length\":\"36143\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4e99b13-f84f-4dc5-ba2d-401354471b92>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce81c04f-70a1-483f-9a71-ccd2baaecb03>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00e3b3\",\"WARC-Payload-Digest\":\"sha1:JSFAJIWI4XQG3SUEKBS3QWTYX2WMUPOR\",\"WARC-Block-Digest\":\"sha1:MUUMANSRIGAZIOUXD2VYJD7P3RPJLTPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154175.76_warc_CC-MAIN-20210801092716-20210801122716-00601.warc.gz\"}"}
https://marketplace.yoyogames.com/assets/5244/extra-functions
[ "", null, "", null, "# Extra Functions\n\n#### Black Smite\n\nYou must be logged in to obtain assets\n\n### Description\n\nA few extra functions that are useful,a few of them are-\n\n-percent_chance(chance) eg. if(percent_chance(50)); { //Do something } there is a 50% chance this will return true.\n\n-approach(variable, target, increment/decrement) eg. Imagine that x = 10 and y = 20 Step Event: x = approach(x,y,1); This will add 1 to x each step until it is equal to y. or y = approach(y,x,1); this will subtract 1 from y each step until it is equal to x\n\n-rotate_towards_point(currentangle,targetangle,rotationspeed) eg. var mangle = point_direction(x,y,mouse_x,mouse_y); image_angle = rotate_towards_point(image_angle, mangle,5); This will smoothly rotate the image to the mouse with a rotation speed of 5.\n\n-move_flee(x,y) The opposite of move_towards_point(x,y) moves away from the given x & y\n\n+More!" ]
[ null, "https://marketplace.yoyogames.com/static_assets/core/studio2_g-1c530cde87af4c91c76d55115f8a9790106ef3529fff60d53b6dd649ed3f37c8.png", null, "https://marketplacecdn.yoyogames.com/images/assets/5244/icon/1491300330_large.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7904677,"math_prob":0.9873473,"size":812,"snap":"2022-27-2022-33","text_gpt3_token_len":218,"char_repetition_ratio":0.11014851,"word_repetition_ratio":0.051282052,"special_character_ratio":0.27216747,"punctuation_ratio":0.17964073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900914,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T17:14:55Z\",\"WARC-Record-ID\":\"<urn:uuid:85bb4b3f-97a3-46d9-9942-eb55157ac233>\",\"Content-Length\":\"37641\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1213d933-f036-4792-8d08-ae0533bd8083>\",\"WARC-Concurrent-To\":\"<urn:uuid:adf0d10c-2ee5-4c82-99e6-5da70d9a1c84>\",\"WARC-IP-Address\":\"104.18.245.218\",\"WARC-Target-URI\":\"https://marketplace.yoyogames.com/assets/5244/extra-functions\",\"WARC-Payload-Digest\":\"sha1:OI4AXEIFVOLT4HYXFP7XVUCR4RFWL2IQ\",\"WARC-Block-Digest\":\"sha1:WIPGATAL7AOGMG5PUSMYP7BURLKHNS24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103271763.15_warc_CC-MAIN-20220626161834-20220626191834-00670.warc.gz\"}"}
https://studyres.com/doc/10253824/unit-3--algebraic-connections
[ "• Study Resource\n• Explore\n\nSurvey\n\n* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project\n\nDocument related concepts\n\nProofs of Fermat's little theorem wikipedia, lookup\n\nElementary mathematics wikipedia, lookup\n\nArithmetic wikipedia, lookup\n\nMathematics of radio engineering wikipedia, lookup\n\nLine (geometry) wikipedia, lookup\n\nEthnomathematics wikipedia, lookup\n\nSecondary School Mathematics Curriculum Improvement Study wikipedia, lookup\n\nCartesian coordinate system wikipedia, lookup\n\nMinkowski diagram wikipedia, lookup\n\nBracket wikipedia, lookup\n\nPatterns in nature wikipedia, lookup\n\nTranscript\n```Middletown Public Schools\nMathematics Unit Planning Organizer\nMathematics - Operations and Algebraic Thinking Geometry\nAlgebraic Connections\nDuration\nSubject\nUnit 3\nBig Idea\nEssential\nQuestion\n5\n15 Instructional Days (+ 5 Reteaching/Extension Days)\nNumerical expressions are interpreted through an understanding of operations and grouping of symbols.\nHow do I use numerical symbols to solve equations?\nHow do I read and interpret numerical expressions?\nMathematical Practices\nPractices in bold are to be emphasized in the unit.\n1. Make sense of problems and persevere in solving them.\n2. Reason abstractly and quantitatively.\n3. Construct viable arguments and critique the reasoning of others.\n4. Model with mathematics.\n5. Use appropriate tools strategically.\n6. Attend to precision.\n7. Look for and make use of structure.\n8. Look for and express regularity in repeated reasoning.\nDomain and Standards Overview\nOperations and Algebraic Thinking\n• Write and interpret numerical expressions.\n• Analyze patterns and relationships.\nCC.5.OA.1 Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols\nCC.5.OA.2 Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them\nCC.5.OA.3 Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and graph the\nordered pairs on a coordinate plane.\nGeometry\n\nGraph points on the coordinate plane to solve real-world and mathematical problems.\nCC.5.G.1 Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given\npoint in the plane located by using an ordered pair of numbers, , called its coordinates. Understand that the first number indicates how far to travel from the origin in the direction of one axis, and the\nsecond number indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond.\nCC.5.G.2 Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of\nthe situation.\nPriority and Supporting Common Core State Standards\nBold Standards are Priority\n5.OA.1 Use parentheses, brackets, or braces in numerical expressions, and\nevaluate expressions with these symbols.\nGrade 5 Unit 3 Algebraic Connections\nExplanations and Examples\n5.OA.1 This standard builds on the expectations of third grade where students are expected to start learning\nthe conventional order. Students need experiences with multiple expressions that use grouping symbols\nthroughout the year to develop understanding of when and how to use parentheses, brackets, and braces. First,\nstudents use these symbols with whole numbers. Then the symbols can be used as students add, subtract,\nmultiply and divide decimals and fractions.\nMarch 2013\n5.OA.3 Generate two numerical patterns using two given rules. Identify apparent\nrelationships between corresponding terms. Form ordered pairs consisting of\ncorresponding terms from the two patterns, and graph the ordered pairs on a coordinate\nplane. For example, given the rule “Add 3” and the starting number 0, and given the\nrule “Add 6” and the starting number 0, generate terms in the resulting sequences, and\nobserve that the terms in one sequence are twice the corresponding terms in the other\nsequence. Explain informally why this is so.\nExamples:\n• (26 + 18) 4\n• {[2 x (3+5)] – 9} + [5 x (23-18)]\n• 12 – (0.4 x 2)\n• (2 + 3) x (1.5 – 0.5)\n• 6 (- + )\n• { 80 [ 2 x (3 ½ + 1 ½ ) ] }+ 100\nTo further develop students’ understanding of grouping symbols and facility with operations, students place\ngrouping symbols in equations to make the equations true or they compare expressions that are grouped\ndifferently.\nExamples:\n• 15 + 7 – 2 = 10 → 15 + (7 – 2) = 10\n• 3 x 125 ÷ 25 + 7 = 22 → [3 x (125 ÷ 25)] + 7 = 22\n• 24 ÷ 12 ÷ 6 ÷ 2 = 2 x 9 + 3 ÷ ½ → 24 ÷ [(12 ÷ 6) ÷ 2] = (2 x 9) + (3 ÷ ½)\n• Compare 3 x 2 + 5 and 3 x (2 + 5)\n• Compare 15 – 6 + 7 and 15 – (6 + 7)\n5.OA.3 Example:\nUse the rule “add 3” to write a sequence of numbers. Starting with a 0, students write 0, 3, 6, 9, 12, . . .\nUse the rule “add 6” to write a sequence of numbers. Starting with 0, students write 0, 6, 12, 18, 24, . . .\nAfter comparing these two sequences, the students notice that each term in the second sequence is twice the\ncorresponding terms of the first sequence. One way they justify this is by describing the patterns of the terms.\nTheir justification may include some mathematical notation (See example below). A student may explain that\nboth sequences start with zero and to generate each term of the second sequence he/she added 6, which is\ntwice as much as was added to produce the terms in the first sequence. Students may also use the distributive\nproperty to describe the relationship between the two numerical patterns by reasoning that 6 + 6 + 6 = 2 (3 + 3\n+ 3).\n0, +3 3, +3 6, +3 9, +312, . . .\n0,\n+6\n6,\n+6\n12,\n+6\n18,\n+6\n24, . . .\nOnce students can describe that the second sequence of numbers is twice the corresponding terms of the first\nsequence, the terms can be written in ordered pairs and then graphed on a coordinate grid. (Cont.)\n5.G.2 Represent real world and mathematical problems by graphing points in the\nfirst quadrant of the coordinate plane, and interpret coordinate values of points in\nthe context of the situation.\n5.G.2 Examples:\n• Sara has saved \\$20. She earns \\$8 for each hour she works.\no If Sara saves all of her money, how much will she have after working 3 hours? 5 hours? 10 hours?\no Create a graph that shows the relationship between the hours Sara worked and the amount of\nmoney she has saved.\no What other information do you know from analyzing the graph?\n• Use the graph below to determine how much money Jack makes after working exactly 9 hours.\n5.G.1 Use a pair of perpendicular number lines, called axes, to define a coordinate\nsystem, with the intersection of the lines (the origin) arranged to coincide with the 0 on\neach line and a given point in the plane located by using an ordered pair of numbers,\ncalled its coordinates. Understand that the first number indicates how far to travel from\n5.G.1 Examples:\n• Students can use a classroom size coordinate grid to physically locate the coordinate point (5, 3) by starting\nat the origin point (0,0), walking 5 units along the x axis to find the first number in the pair (5), and then\nwalking up 3 units for the second number in the pair (3). The ordered pair names a point on the grid.\nGrade 5 Unit 3 Algebraic Connections\nMarch 2013\nthe origin in the direction of one axis, and the second number indicates how far to travel\nin the direction of the second axis, with the convention that the names of the two axes\nand the coordinates correspond (e.g., x-axis and x-coordinate, y-axis and y-coordinate).\n• Graph and label the points below on a coordinate plane.\no A (0, 0)\no D (-4, 1)\no B (2, -4)\no E (2.5, -6)\no C (5, 5)\no F (-3, -2)\n5.G.1 Examples:\nConcepts\n\n\n\n\nWhat Students Need to Know\nNumerical expressions with grouping symbols\no Parentheses\no Brackets\no Braces\nNumerical patterns\no Rules\no Terms\nOrdered pairs\no x-coordinate\no y-coordinate\nCoordinate system\no Coordinate plane\no Axes (x-axis and y-axis)\no Origin\nSkills\n\n\n\n\n\n\n\nWhat Students Need to Be Able to Do\nUSE ( grouping symbols)\nO EVALUATE (numerical expressions)\nO WRITE ( numerical expressions)\nINTERPRET (numerical expressions without\nevaluating)\nWRITE (numerical patterns using rules)\nIDENTIFY (relationships between corresponding\nterms)\nGRAPH (ordered pairs on a coordinate plane)\nO UNDERSTAND\n x-coordinate indicates distance from the\norigin in the direction of the x-axis\n y-coordinate indicates distance from the\norigin in the direction of the y-axis\nINTERPRET (coordinate values in context)\nDEFINE (coordinate system)\nO Coordinate plane\nO X-axis and y-axis\nO Origin\nBloom’s Taxonomy Levels\n2\n3\n3\n4\n3\n3\n3\n2\n3\n1\nLearning Progressions\nStandard\nCC.5.OA.1 Use parentheses, brackets, or braces\nin numerical expressions, and evaluate\nexpressions with these symbols.\nCC.5.OA.2 Write simple expressions that\nGrade 5 Unit 3 Algebraic Connections\nPrerequisite Skills\nCC.4.OA.1 Interpret a multiplication equation\nas a comparison, e.g., interpret 35 = 5 x 7 as a\nstatement that 35 is 5 times as many as 7 and 7\ntimes as many as 5.\nAcceleration\nCC.6.EE.1 Write and evaluate numerical\nexpressions involving whole-number\nexponents.\nMarch 2013\nrecord calculations with numbers, and interpret\nnumerical expressions without evaluating them.\nCC.5.OA.3 Generate two numerical patterns\nusing given rules. Identify apparent\nrelationships between corresponding terms.\nForm ordered pairs consisting of corresponding\nterms from the two patterns, and graph the\nordered pairs on a coordinate plane.\nCC.5.G.1 Use a pair of perpendicular number\nlines, called axis, to define a coordinate system,\nwith the intersection of the lines arranged to\ncoincide with the 0 on each line and a given\npoint in the plane located by using an ordered\npair of numbers, called its coordinates.\nUnderstand that the first number indicates how\nfar to travel from the origin in the direction of\none axis, and the second number indicates how\nfar to travel in the direction of the second axis\nwith the convention that the names of the two\naxis and the coordinates correspond..\nCC.5.G.2 Represent real world and\nmathematical problems by graphing points in\nthe first quadrant of the coordinate plane and\ninterpret coordinate values of points in the\ncontext of the situation.\nCC.4.OA.5 Generate a number or shape pattern\nthat follows a given rule. Identify apparent\nfeatures of the pattern that were not explicit in\nthe rule itself.\nexpressions in which letters stand for numbers.\nA. Write expressions that record operations\nwith numbers and with letters standing for\nnumbers. B. Identify parts of an expression\nusing mathematical terms; view one or more\nparts of an expression as a single entity. C.\nEvaluate expressions at specific values of their\nvariable. Include expressions that arise from\nformulas used in real word problems. Perform\narithmetic operations, including those involving\nwhole-number exponents in the conventional\norder when there are no parentheses to specify\nparticular order.\nUnit Assessments\nAdminister Pre and Post Assessments for Unit 3 in the Fifth Grade Share Point folder\nGrade 5 Unit 3 Algebraic Connections\nMarch 2013\n```\nRelated documents" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86880195,"math_prob":0.9703433,"size":11313,"snap":"2019-26-2019-30","text_gpt3_token_len":2588,"char_repetition_ratio":0.1497922,"word_repetition_ratio":0.26402116,"special_character_ratio":0.23777954,"punctuation_ratio":0.12707183,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99531436,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-16T06:17:03Z\",\"WARC-Record-ID\":\"<urn:uuid:a7a211f2-bcec-476e-a65f-ee47fcce9756>\",\"Content-Length\":\"76470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03d34fe2-2712-4419-9ae6-4e48bc6c0123>\",\"WARC-Concurrent-To\":\"<urn:uuid:183fd18e-3ba2-4fce-ba72-e8e1fe6cee2b>\",\"WARC-IP-Address\":\"54.209.112.17\",\"WARC-Target-URI\":\"https://studyres.com/doc/10253824/unit-3--algebraic-connections\",\"WARC-Payload-Digest\":\"sha1:SSOPVA6Q365OVU7OVHYRK6YJ4NW6YYQY\",\"WARC-Block-Digest\":\"sha1:ZB65AI3CF73ZU3P67Y2P2MDQGM6KI7LR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524503.7_warc_CC-MAIN-20190716055158-20190716081158-00390.warc.gz\"}"}
http://www.hear.org/Pier/wra/pacific/schinus_molle_htmlwra.htm
[ "Pacific Island Ecosystems at Risk (PIER)\n\nRISK ASSESSMENT RESULTS: High risk, score: 10", null, "Australian/New Zealand Weed Risk Assessment adapted for Hawai‘i. Research directed by C. Daehler (UH Botany) with funding from the Kaulunani Urban Forestry Program and US Forest Service Information on Risk Assessments Original risk assessment\n Schinus molle (Peruvian peppertree; California peppertree) Answer 1.01 Is the species highly domesticated? y=-3, n=0 n 1.02 Has the species become naturalized where grown? y=-1, n=-1 y 1.03 Does the species have weedy races? y=-1, n=-1 n 2.01 Species suited to tropical or subtropical climate(s) (0-low; 1-intermediate; 2-high) – If island is primarily wet habitat, then substitute “wet tropical” for “tropical or subtropical” See Append 2 2 2.02 Quality of climate match data (0-low; 1-intermediate; 2-high) see appendix 2 2 2.03 Broad climate suitability (environmental versatility) y=1, n=0 n 2.04 Native or naturalized in regions with tropical or subtropical climates y=1, n=0 y 2.05 Does the species have a history of repeated introductions outside its natural range? y=-2 ?=-1, n=0 y 3.01 Naturalized beyond native range y = 1*multiplier (see Append 2), n= question 2.05 y 3.02 Garden/amenity/disturbance weed y = 1*multiplier (see Append 2) n=0 n 3.03 Agricultural/forestry/horticultural weed y = 2*multiplier (see Append 2) n=0 n 3.04 Environmental weed y = 2*multiplier (see Append 2) n=0 y 3.05 Congeneric weed y = 1*multiplier (see Append 2) n=0 y 4.01 Produces spines, thorns or burrs y=1, n=0 n 4.02 Allelopathic y=1, n=0 y 4.03 Parasitic y=1, n=0 n 4.04 Unpalatable to grazing animals y=1, n=-1 4.05 Toxic to animals y=1, n=0 n 4.06 Host for recognized pests and pathogens y=1, n=0 4.07 Causes allergies or is otherwise toxic to humans y=1, n=0 n 4.08 Creates a fire hazard in natural ecosystems y=1, n=0 4.09 Is a shade tolerant plant at some stage of its life cycle y=1, n=0 4.1 Tolerates a wide range of soil conditions (or limestone conditions if not a volcanic island) y=1, n=0 y 4.11 Climbing or smothering growth habit y=1, n=0 n 4.12 Forms dense thickets y=1, n=0 n 5.01 Aquatic y=5, n=0 n 5.02 Grass y=1, n=0 n 5.03 Nitrogen fixing woody plant y=1, n=0 5.04 Geophyte (herbaceous with underground storage organs -- bulbs, corms, or tubers) y=1, n=0 n 6.01 Evidence of substantial reproductive failure in native habitat y=1, n=0 n 6.02 Produces viable seed. y=1, n=-1 y 6.03 Hybridizes naturally y=1, n=-1 6.04 Self-compatible or apomictic y=1, n=-1 6.05 Requires specialist pollinators y=-1, n=0 n 6.06 Reproduction by vegetative fragmentation y=1, n=-1 n 6.07 Minimum generative time (years) 1 year = 1, 2 or 3 years = 0, 4+ years = -1 See left 3 7.01 Propagules likely to be dispersed unintentionally (plants growing in heavily trafficked areas) y=1, n=-1 n 7.02 Propagules dispersed intentionally by people y=1, n=-1 y 7.03 Propagules likely to disperse as a produce contaminant y=1, n=-1 n 7.04 Propagules adapted to wind dispersal y=1, n=-1 n 7.05 Propagules water dispersed y=1, n=-1 n 7.06 Propagules bird dispersed y=1, n=-1 y 7.07 Propagules dispersed by other animals (externally) y=1, n=-1 n 7.08 Propagules survive passage through the gut y=1, n=-1 y 8.01 Prolific seed production (>1000/m2) y=1, n=-1 8.02 Evidence that a persistent propagule bank is formed (>1 yr) y=1, n=-1 8.03 Well controlled by herbicides y=-1, n=1 8.04 Tolerates, or benefits from, mutilation, cultivation, or fire y=1, n=-1 y 8.05 Effective natural enemies present locally (e.g. introduced biocontrol agents) y=-1, n=1 Total score: 10\n\nSupporting data:" ]
[ null, "http://www.hear.org/Pier/images/wralogo.bmp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7298819,"math_prob":0.8865486,"size":12426,"snap":"2019-26-2019-30","text_gpt3_token_len":4080,"char_repetition_ratio":0.12606665,"word_repetition_ratio":0.18815166,"special_character_ratio":0.33051667,"punctuation_ratio":0.20532319,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95671296,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T04:58:28Z\",\"WARC-Record-ID\":\"<urn:uuid:6e9c2441-a8d9-41e8-b8b6-0c36b7079055>\",\"Content-Length\":\"46487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18c7829e-20bd-4800-b49a-5e76500963a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:150a100e-c187-4248-b12f-690c02d80cd8>\",\"WARC-IP-Address\":\"128.171.35.76\",\"WARC-Target-URI\":\"http://www.hear.org/Pier/wra/pacific/schinus_molle_htmlwra.htm\",\"WARC-Payload-Digest\":\"sha1:RFMTE6UDSUJ2MEHCEDZJL3K6VZMPNJ22\",\"WARC-Block-Digest\":\"sha1:WFUHHTZOD5QDHEPG6C63K5523QPMDTG6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525046.5_warc_CC-MAIN-20190717041500-20190717063500-00409.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/4702/how-can-i-solve-a-difference-differential-equation
[ "# How can I solve a difference-differential equation?\n\nHow do I ask Mathematica to try to solve a recursive relation that defines a sequence of functions? For example, suppose I know that $g_n(x) = g_{n-1}'(x)$ for $n > 0$ and that $g_0(x) = e^{2x}$. How can I ask Mathematica to find a closed form for $g_n(x)$? (This is just a placeholder equation to highlight my question; I know the answer of $g_n(x) = 2^n e^{2x}$, $n\\geq 0$.)\n\nA less trivial instance of the problem would be the Hermite polynomial recursion, $$H_{n+1}(x) = 2xH_n(x) - H_n'(x)$$\n\nI don't see how to convince either DSolve or RSolve to solve it for me. DSolve is unhappy because $n-1$ is used on the RHS:\n\nDSolve[{g[n, x] == 2*D[g[n - 1, x], x]}, g, {n, x}]\n\n\nRSolve just echoes my input:\n\nRSolve[{g[n, x] == 2*D[g[n - 1, x], x], g[0, x] == Exp[2*x]}, g, {n, x}]\n\n\nI know finding a closed-form solution is going to be hopeless in most instances, but it seems like some cases like the above $g_n(x)$ should be doable. I have been unable to find any examples in the Mathematica documentation addressing this type of problem.\n\n• Are you looking for a closed form solution in terms of $n$, or you just want to compute the result for each $n$ successively? – Szabolcs Apr 24 '12 at 15:38\n• You may look here, for recursive definitions of Hermite polynomials : mathematica.stackexchange.com/questions/4652/… – Artes Apr 24 '12 at 15:42\n• I tried Solve[{Series[g[n, x], {x, 0, 5}, {n, 0, 5}] == Series[2*D[g[n - 1, x], x], {x, 0, 5}, {n, 0, 5}]}, g, {n, x}] but didn't found the solution. I think you will try to search a closed form asymptotically. – GarouDan Apr 24 '12 at 16:33\n• @Szabolcs I'm looking for closed forms. I know that g=Table[0,{t,0,10}]; g[]=Exp[2*x]; Table[g[[i+1]]=D[g[[i]],x],{i,1,9}] will compute the result (I'm sure there are more elegant ways). Artes Thanks. I'm more interested in how to deal with this paradigm rather than Hermite polynomials in particular. – UVW Apr 24 '12 at 17:01\n\nFor the simple example in the question, FindSequenceFunction can be used to infer the general form:\n\ng=Exp[2x];\ng[n_]:=g[n]=Expand[D[g[n-1],x]]\n\nFindSequenceFunction[g/@Range,n]\nOut= 2^n E^(2 x)\n\n• +1 Really nice. The sentence For the simple example in the question should not be taken lightly, as FindInstance is not able to find many \"easy\" sequences. – Dr. belisarius Apr 24 '12 at 21:53\n• Thanks, @Simon. It was probably unreasonable of me to hope for more. This looks like a reasonable approach. – UVW Apr 25 '12 at 16:35\n\nThis is recursion, not solution-finding. That makes it fast and straightforward. For instance, the Hermite polynomial example, with memoization of the function (not just of its values at previous arguments of $x$), might look like this (although I'm sure the real experts can find a more elegant way to accomplish the same thing):\n\nClearAll[g];\ng[n_Integer, x_] := g[n][x];\ng = Function[{x}, 0];\ng = Function[{x}, 1];\ng[n_Integer][x_] := With[{},\ng[n] = Function[{y}, Evaluate@ Expand[2 y g[n - 1, y] - D[g[n - 1, y], y]]];\ng[n][x]\n]\n\n\nAfter executing, say,\n\nIn:= g[5,x]\nOut= 12 - 48 x^2 + 16 x^4\n\n\nthe definition of g will be\n\n? g\n\ng[n_Integer][x_]:=With[{},g[n]=Function[{y},Evaluate[Expand[2 y g[n-1,y]-\\!$$\\*SubscriptBox[\\(\\[PartialD]$$, $$y$$]$$g[n - 1, y]$$\\)]]];g[n][x]]\n\ng=Function[{x},0]\ng=Function[{x},1]\ng=Function[{y$},2 y$]\ng=Function[{y$},-2+4 y$^2]\ng=Function[{y$},-12 y$+8 y$^3] g=Function[{y$},12-48 y$^2+16 y$^4]\ng[n_Integer,x_]:=g[n][x]\n\n\n• Those are just closed-forms for particular values of $n$. I thought the O.P. wanted a closed-form expression as a function of $n$ (and $x$, of course). – murray Apr 24 '12 at 19:34\n• OK, thanks for the clarification @UVW. Nevertheless, often a list of solutions can be a good start: you can then pick out the coefficients of the components of the solutions and search for closed forms using FindSequenceFunction and their ilk. Sometimes it works! – whuber Apr 24 '12 at 20:14" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89277333,"math_prob":0.99557525,"size":1275,"snap":"2020-10-2020-16","text_gpt3_token_len":440,"char_repetition_ratio":0.09756097,"word_repetition_ratio":0.0349345,"special_character_ratio":0.36235294,"punctuation_ratio":0.15806451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99955165,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T00:16:21Z\",\"WARC-Record-ID\":\"<urn:uuid:63024e6b-5d0f-40a7-a0e0-dfd592492c4f>\",\"Content-Length\":\"159318\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b4ea8e8-64ac-4e07-8dae-81b083d2fa93>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab4429a1-fdfa-4fc0-bbb6-4c8b0b5a6dbc>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/4702/how-can-i-solve-a-difference-differential-equation\",\"WARC-Payload-Digest\":\"sha1:NML2NYECLOX2FGJUVI5O7Q6BGAO6GHD4\",\"WARC-Block-Digest\":\"sha1:VCIQPUCJLPXAQQ7RSZ2ZIMASCDBZ5RG7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143963.79_warc_CC-MAIN-20200219000604-20200219030604-00355.warc.gz\"}"}
https://betterlesson.com/lesson/640252/prove-triangle-midsegment-theorem-using-analytic-geometry?from=cc_lesson
[ "# Prove Triangle Midsegment Theorem using Analytic Geometry\n\n## Objective\n\nSWBAT use analytic geometry to prove the triangle midsegment theorem.\n\n#### Big Idea\n\nAs they say in real estate...\"Location, Location, Location.\" In this lesson, students experience the impact of positioning when writing coordinate proofs.\n\n## Understanding the Theorem\n\n10 minutes\n\nIn this section of the lesson, my goal is for students to understand what the Triangle Midsegment Theorem says and to understand what we will need to do in order to prove it (MP1).\n\nTo achieve this goal, I guide students through the first page of Part 1_Proving Triangle Midsegment Theorem. We start with the generalized statement of the theorem. Then I introduce a diagram of a triangle. Next I ask students to restate the theorem in the context of the triangle in the diagram. I provide a sentence frame for this restatement and students must fill in the blanks to complete it. I give the students 1 or 2 minutes to fill in the blanks on their own before consulting with a partner. After that, I reveal the correct re-statement while carefully explaining each part of the statement and how it follows from the original statement of the theorem.\n\nNext students translate the restated theorem into precise mathematical notation (MP6). In order to do this, they will need to introduce the midpoints of the sides of the triangle and name them. I give students time to figure this out as this has been one of my overarching goals for the course: getting students to realize that we introduce and name figures in diagrams so that we can introduce them into language using mathematical notation.\n\nWhen students have had time to write their statements, I have them compare with a neighbor to do some initial quality control. Then I show the correct statement under the document camera.\n\nSo at this point, students have basically understood what the theorem says and what they will need to prove for this particular triangle. Now they are ready to start proving.\n\n## Part 1: School of Hard Knocks\n\n25 minutes\n\nThis section is called 'School of Hard Knocks' because, in it, students will have to pay their dues by grappling with some messy abstract coordinates. This is all because of my intentionally horrible choice of position/orientation for the triangle on the coordinate plane. SEE THE REFLECTION IN THIS SECTION FOR MY THINKING ON DESIGNING THE LESSON THIS WAY.\n\nOn page 2 of Part 1_Proving Triangle Midsegment Theorem Students have three tasks:\n\n1. Use the midpoint formula to find the coordinates of the midpoints.\n\n2.  Use the slope formula to show that the midsegment and third side are parallel.\n\n3. Use the distance formula to show that the length of the midsegment is half that of the third side.\n\nRecognizing that students can easily get stuck or off track  I structure this portion of the lesson pretty tightly so that students get immediate feedback after they've taken a risk to attempt a problem.\n\nFirst, I give them 2 minutes to find the coordinates of midpoints D and E. Then I model the process. I show, for example, how I like to leave leading terms positive when possible (e.g. b-a as opposed to -a+b) SEE REFLECTION IN THIS SECTION FOR OTHER OPPORTUNITIES TO MODEL SEEING STRUCTURE IN EXPRESSIONS.\n\nNext we move on to #2, which asks students to show that the midsegment is parallel to the third side of the triangle. I begin with a pair-share, asking students how we might use coordinate (analytic) geometry to show that two segments are parallel. After that, students try their hand at finding the slopes of the two segments. I caution my students to be careful with signs, use parentheses where appropriate, and remember to \"distribute the negative\". When students have had enough time, I show the process for obtaining the answers. Again, there is lots to model with regard to seeing structure and performing mindful algebraic manipulations so I definitely take advantage of these at this point in the lesson.\n\nFinally, we go through a similar process with #3 as we did with #2. There are good opportunities here to model mindful algebraic manipulation as well. For example factoring the 4 out of the radicand to reveal that the third side is twice as long as the midsegment.\n\nAt this point, we've proven the midsegment theorem. However, since the activity was scaffolded, it's possible that some students have just been going through the motions without understanding what happened. For that reason, I have students take turns explaining what happened in #2 and #3. The student explaining will explain in their own words the work that was done, why it was done, and what it establishes. Then the non-explaining partner will re-voice what they have heard. Then the roles reverse.\n\n## Part 2: The Power of Choices\n\n25 minutes\n\nAfter the last section, this section should be a welcome relief for students. I give each student Part 2_Proving Triangle Midsegment Theorem. As they read, they learn (if they didn't realize already) that we made our lives more difficult than we needed to in Part 1. Now they have a chance to make a better choice.\n\nOn the first page of the handout, students need to re-sketch the triangle from Part 1 with position and orientation that will facilitate the proof writing process. Then they need to explain how their choice will make things more convenient. This is an independent process because I want each student to think on their own and make their own choices. They have experience with this type of thinking from a previous lesson on proving the medians of a triangle are concurrent.\n\nAfter students have completed their sketches and responded to the prompt at the bottom of the first page, we move on the next page to see how the triangle should be positioned and oriented.\n\nThen students will be left alone to prove the Triangle Midsegment Theorem using this diagram. Although easier than the Part 1 task, it is a good way for me to know if students have understood the work from part 1. One thing I tend to see is that students use the distance formula to find the length of horizontal segments. This is a good opportunity to talk about staying present current scenario and not just repeating the exact same procedures from Part 1. It's also a good time to reinforce (MP8)...when we want the length of horizontal segments, we simply subtract their x coordinates and take the absolute value.\n\nWhen students have had enough time to finish working, I have them get together with their A-B partners and take turns rehearsing what they would say if they were called up to present their proof.\n\nWhen they have had time to do this, I call on two or three randomly chosen non-volunteers to come to the front of the class and present their proofs.\n\n## Follow-Up\n\n20 minutes\n\nIn this final section, students work on the last page of Part 2_Proving Triangle Midsegment Theorem. The first item gets at the idea that we do not need to prove the theorem for all three midsegments because we can perform rigid transformations (preserving all lengths and angles/slopes) such that a any vertex can be brought to the origin and any other vertex brought to the x-axis.\n\nItems 2 and 3 require students to apply their knowledge of the Triangle Midsegment Theorem in order to classify quadrilaterals.\n\nFinally, items 4 and 5 deal with the fact that the midsegment triangle is similar to the original triangle with scale factor 1:2.\n\nWhen students are finished, I collect the papers and post model responses on the internet for students to see and learn from." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9532019,"math_prob":0.73771,"size":7708,"snap":"2019-43-2019-47","text_gpt3_token_len":1592,"char_repetition_ratio":0.15719107,"word_repetition_ratio":0.010542168,"special_character_ratio":0.19810586,"punctuation_ratio":0.07986348,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97475845,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T21:41:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a4a17c7b-2e7a-4436-89fd-7c6187197cd6>\",\"Content-Length\":\"132175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9685eec-30cf-4fba-8546-c4b11533e273>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ae78ab1-197e-45d9-bd0e-f809b6060d5d>\",\"WARC-IP-Address\":\"107.21.10.124\",\"WARC-Target-URI\":\"https://betterlesson.com/lesson/640252/prove-triangle-midsegment-theorem-using-analytic-geometry?from=cc_lesson\",\"WARC-Payload-Digest\":\"sha1:XLD23GA2B53IWCJZPRR3AO7PIY6XPQNM\",\"WARC-Block-Digest\":\"sha1:O3CQBFFY5CORVC2SULMU2BT5DMMFGLEN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668712.57_warc_CC-MAIN-20191115195132-20191115223132-00552.warc.gz\"}"}
https://fromthebottomoftheheap.net/2017/05/01/glm-prediction-intervals-i/
[ "One of my more popular answers on StackOverflow concerns the issue of prediction intervals for a generalized linear model (GLM). My answer really only addresses how to compute confidence intervals for parameters but in the comments I discuss the more substantive points raised by the OP in their question. Lately there’s been a bit of back and forth between Jarrett Byrnes and myself about what a prediction “interval” for a GLM might mean. Comments, even on StackOverflow, aren’t a good place for a discussion so I thought I’d post something here that went into a bit more detail as to why, for some common types of GLMs, prediction intervals aren’t that useful and require a lot more thinking about what they mean and how they should be calculated. For illustration, I thought I’d use some small teaching example data sets, but whilst writing the post it started to get a little on the long side. So, I’ve broken it into two and in this part I look at logistic regression.\n\nThe first example concerns a small experiment on the rare insectivorous pitcher plant Darlingtonia californica (the cobra lily) used as an example in Gotelli and Ellison (2013) and originally reported in Dixon et al. (2005). Darlingtonia grows leaves that are modified to form a pitcher trap, which is filled with nectar that attracts insects, in particular vespulid wasps (Vespula atropilosa). The observations in the data set are on the height of pitcher traps (leafHeight) and whether or not the leaf was visited by a wasp (visited). The code chunk below downloads the data from the book’s website and loads it into R ready for use.\n\nKernel density estimates of the distributions of the leaf heights for visited and unvisited leaves is one way to visualise these data. Here we use ggplot2\n\nWe’re interested in modelling the probability of leaf visitation as a function of leaf height. For this a binomial GLM is a logical choice, with the canonical link function, the logit or logistic function. Such a model is fitted using glm() as follows\n\nThe model summary suggests an effect of leaf height that is unlikely to be observed if there were no effect. For a unit increase in leaf height, the odds of visitation increase by 1.12 times (given by exp(coef(m))).\n\nHow the probability of visitation varies as a function of leaf height, as estimated by the binomial GLM, can be visualised by predicting for a grid of values over the observed range of leaf heights. An approximate 95% point-wise confidence interval can also be created for the fitted function. In this case, we should create the confidence interval on the scale of the linear predictor where we assume things behave in a more Gaussian-like manner, and then backtransform the calculated interval on to the probability scale using the invers of the link function. The code below shows a general solution for this, where the inverse link function is obtained from the family() object contained within the fitted GLM object\n\nSo far, so standard; the confidence interval is just that, a Wald confidence interval on the fitted function based on the standard errors of the estimates of the model coefficients. It is not a prediction interval, however.\n\nThe fitted model can be interpreted as describing the binomial distribution for any given value of leafHeight. The binomial distribution is specified by two parameters; n the number of trials (specified via argument size in R’s dbinom() and related functions), and p the probability of success. In the Darlingtonia example, n is 1 because each leaf was the result of 1 trial; was the leaf visited or not during the experiment? p is given by $$g(\\eta)^{-1} = g(\\beta_0 + \\beta_1 \\text{leaf height})^{-1}$$, where $$g$$ is the logit link function and $$g^{-1}$$ is its inverse. In other words, the probability parameter of the binomial distribution is a function of leafHeight.\n\nTo create a prediction interval for a value of leafHeight, we could look at the probability quantiles of the binomial distribution with size = 1 and prob = Fitted[leafHeight]. For example, for the minimum and maximum observed leaf heights the extreme 2.5% and 97.5% probability quantiles are\n\nIn the first instance, for the minimum observed leaf height, the prediction interval is 0. Yes, just 0. For the maximum observed leaf height the 95% prediction interval is 0–1. Neither of these is very useful; one isn’t even an interval in the usual sense of the word, and the other is so wide as to encompass both 0 and 1, which is no more information than we had before we started the whole exercise — a leaf can only be visited or not.\n\nBut this isn’t quite what we want; we’ve only explore the quantiles of the distributions conditional upon the estimated probability. A real prediction interval would account for the uncertainty in this estimate. For that, we need the upper and lower confidence limits for the estimated probability.\n\nI think we can all agree that these intervals aren’t really that useful…\n\nAnother way to use the fitted model is via what it says about the posterior density of the two possible predicted values, visited or unvisited. This can be computed using dbinom() using the code below, again for the minimum and maximum observed leaf heights\n\nWe see almost all the probability density on the unvisited option for leaves 14cm in height (which is also why the 95% interval we calculated earlier was all on unvisted (0), we’d need to go beyond a 99.7% interval to get the visited alernative (1) included in the interval). For leaves of 84cm, most of the density is on the visited outcome, but with approximately 8% on the unvisited outcome.\n\nHowever, these values are exactly what we get if we just take the fitted probabilities for these leaf heights, which are given by the solid line in the plot we made earlier\n\nThese values are for the visited outcome, but subtract them from 1 and you have the values for the unvisited outcome\n\nAs before, this ignores the uncertainty in the estimated probability of visitation. The densities incorporating this uncertainty are shown in the table below\n\nEstimated probability of the visited and not-visited outcomes based on the upper (upr) and lower (lwr) 95% interval of the model-estimated probability of visitation for two leaf heights.\nNot Visited (lwr) Not Visited (upr) Visited (lwr) Visited (upr)\nleafHeight = 14 0.9999 0.9125 0.0001 0.0875\nleafHeight = 84 0.4415 0.0103 0.5585 0.9897\n\nOne more thing we can do with the fitted model is simulate random outcomes from it. Again we do this for the minimum and maximum observed leaf heights, first for the lowest leaf height\n\nand then for the largest observed leaf height\n\nThe numbers should look pretty familiar — they are very close to both the posterior densities returned using dbinom() and the the fitted probabilities we just looked at. In fact, as nrand tends to infinity, the proportions of the two outcomes will also approach those given by dbinom(). As before, though I won’t show it, a complete interval would also include the uncertainty in the estimated probability.\n\nIn this example, the most useful outputs from the model are all based on the binomial distributions given values of leaf height. The interval given by the extreme 2.5th and 97.5th probability quantiles isn’t of much use at all; for the two values of leaf height we looked at the interval either wasn’t an interval or it told us no more information than we already possessed, that leaves either were or were not visited.\n\nThat said, this binomial GLM example is pretty extreme; the observed data only take values 0 or 1 and nothing else. However, this has been a useful exercise to think about what the fitted model represents.\n\nIn the second part of this post I’ll look at a model for a count response, which will start to look a little more interval-like than the one here.\n\nDixon, P. M., Ellison, A. M., and Gotelli, N. J. (2005). Improving the precision of estimates of the frequency of rare events. Ecology 86, 1114–1123. doi:10.1890/04-0601.\n\nGotelli, N. J., and Ellison, A. M. (2013). A primer of ecological statistics. second. Sinauer Associates Inc." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91347885,"math_prob":0.9666521,"size":7988,"snap":"2020-34-2020-40","text_gpt3_token_len":1803,"char_repetition_ratio":0.14441383,"word_repetition_ratio":0.012546126,"special_character_ratio":0.2234602,"punctuation_ratio":0.09597925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99147505,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T16:59:44Z\",\"WARC-Record-ID\":\"<urn:uuid:79bff450-2d81-4f1d-9a7f-1086177aef97>\",\"Content-Length\":\"56208\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39416992-17eb-4e9a-9329-4b9f9dbd535f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0125dff4-7ef6-479c-b984-a581d5131e29>\",\"WARC-IP-Address\":\"104.248.63.248\",\"WARC-Target-URI\":\"https://fromthebottomoftheheap.net/2017/05/01/glm-prediction-intervals-i/\",\"WARC-Payload-Digest\":\"sha1:EUZDL4KUVRAMTESPNLFV5QOH5BPFDMCS\",\"WARC-Block-Digest\":\"sha1:VR56FPLI5ZJ5ANXN3BA2CMY24Y2AHUUK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738816.7_warc_CC-MAIN-20200811150134-20200811180134-00515.warc.gz\"}"}
https://www.kangya120.com/lists/25.html
[ "• 康雅首页\n• {yzn module=\"cms\" action=\"category\" catid=\"0\" cache=\"3600\" order=\"listorder ASC\" num=\"10\" return=\"data\"}\n• {if(\\$vo['child'])} 党委\n• {if(\\$vo['child'])} 工会\n• {if(\\$vo['child'])} 团委\n• {if(\\$vo['child'])} 医风医德\n• {if(\\$vo['child'])} 意见箱", null, "KANG YA FAMOUS DOCTOR", null, "", null, "", null, "" ]
[ null, "https://www.kangya120.com/static/modules/cms/images/icon_03.png", null, "https://www.kangya120.com/uploads/images/20200619/e10b56801762cefd6253ba3baaf2b52f.jpg", null, "https://www.kangya120.com/uploads/images/20200619/4ac40b21ca6bf2e3776110db20f35fec.jpg", null, "https://www.kangya120.com/uploads/images/20221116/567ee892f1ca98a43a01178d36a24162.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9618658,"math_prob":0.7692545,"size":893,"snap":"2023-14-2023-23","text_gpt3_token_len":1157,"char_repetition_ratio":0.09448819,"word_repetition_ratio":0.0,"special_character_ratio":0.13101904,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9813607,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,5,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T13:14:21Z\",\"WARC-Record-ID\":\"<urn:uuid:48d91243-ad1e-4529-8173-c25ab7fc8f61>\",\"Content-Length\":\"18263\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d43b477a-f14a-4a69-882d-d835ed16429e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8bd536bb-9611-43f9-b272-fedeb96907a7>\",\"WARC-IP-Address\":\"112.126.58.32\",\"WARC-Target-URI\":\"https://www.kangya120.com/lists/25.html\",\"WARC-Payload-Digest\":\"sha1:NWPEZXHJ46AS4NS3TM434EPYTYESD7HW\",\"WARC-Block-Digest\":\"sha1:BZYEDHZ4E37BFJXVFOZOOPMLKLG6K34H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652116.60_warc_CC-MAIN-20230605121635-20230605151635-00209.warc.gz\"}"}
https://www.lizenghai.com/archives/34576.html
[ "# 用Python在Excel里画出蒙娜丽莎\n\n## 前言\n\nPS:如有需要Python学习资料的小伙伴可以加点击下方链接自行获取", null, "## PIL使用\n\nPIL是Python里面做图像处理的时候十分常用的一个库,功能也是十分的强大,这里只需要用到PIL里一小部分的功能。\n\n```1 from PIL import Image\n2 img = Image.open(img_path) # 读取图片\n3 width, height = img.size # 获取图片大小\n4 r, g, b = img.getpixel((w - 1, h - 1)) # 获取像素色彩值```\n\n• `Image.open()`是PIL里面打开一张图片的函数,支持多种图片类型\n\n• `img_path`是图片路径,可以是相对路径,也可以是绝对路径\n\n• `img.size`是获取图片的size属性,包含图片的宽和高\n\n• `img.getpixel()`是获取图片色彩值的函数,需传入一个tuple或list,值为像素坐标xy\n\n## openpyxl使用\n\n`openpyxl`几乎是Python里功能最全的操作excel文件的库了,这里也只需要用到它的一小部分功能。\n\n```1 import openpyxl\n2 from openpyxl.styles import fills\n3 ​\n4 workbook = openpyxl.Workbook()\n5 worksheet = workbook.active\n6 cell.fill = fills.PatternFill(fill_type=\"solid\", fgColor=hex_rgb)\n7 workbook.save(out_file)```\n\n• `openpyxl.Workbook()`新建一个excel文件\n\n• `workbook.active` 激活一个工作表\n\n• `cell.fill = fills.PatternFill(fill_type=\"solid\", fgColor=hex_rgb)`填充一个单元格,`fill_type=\"solid\"`是填充类型,`fgColor=hex_rgb`是填充的颜色\n\n• `workbook.save()`保存文件,需传入要保存的文件名\n\n## 写一段代码\n\n`写这一个画图的需求需要用到的核心就是上面介绍的PIL跟openpyxl的几种用法。但是在实际写的时候,还会有一些其他问题,比如:`\n\n1、getpixel()获取的颜色值是rgb十进制的,但fills.PatternFill 里的fgColor`参数接收到的颜色值是十六进制的值 这个问题其实就是十进制转十六进制,很容易解决\n\n```1 def int_to_16(num):\n2 num1 = hex(num).replace('0x', '')\n3 num2 = num1 if len(num1) > 1 else '0' + num1 # 位数只有一位的时候在前面补零\n4 return num2```\n\n2、excel的单元格默认是长方形,修改为正方形才不会使图片变形\n\n```1 if h == 1:\n2 _w = cell.column\n3 _h = cell.col_idx\n4 # 调整列宽\n5 worksheet.column_dimensions[_w].width = 1\n6 ​\n7 # 调整行高\n8 worksheet.row_dimensions[h].height = 6```\n\n3、excel支持的样式数量有限", null, "``` 1 MAX_WIDTH = 300\n2 MAX_HEIGHT = 300\n3 def resize(img):\n4 w, h = img.size\n5 if w > MAX_WIDTH:\n6 h = MAX_WIDTH / w * h\n7 w = MAX_WIDTH\n8 ​\n9 if h > MAX_HEIGHT:\n10 w = MAX_HEIGHT / h * w\n11 h = MAX_HEIGHT\n12 return img.resize((int(w), int(h)), Image.ANTIALIAS)```\n\n## 全部代码\n\n``` 1 # draw_excel.py\n2 ​\n3 from PIL import Image\n4 import openpyxl\n5 from openpyxl.styles import fills\n6 import os\n7 ​\n8 MAX_WIDTH = 300\n9 MAX_HEIGHT = 300\n10 ​\n11 def resize(img):\n12 w, h = img.size\n13 if w > MAX_WIDTH:\n14 h = MAX_WIDTH / w * h\n15 w = MAX_WIDTH\n16 ​\n17 if h > MAX_HEIGHT:\n18 w = MAX_HEIGHT / h * w\n19 h = MAX_HEIGHT\n20 return img.resize((int(w), int(h)), Image.ANTIALIAS)\n21 ​\n22 ​\n23 def int_to_16(num):\n24 num1 = hex(num).replace('0x', '')\n25 num2 = num1 if len(num1) > 1 else '0' + num1\n26 return num2\n27 ​\n28 ​\n29 def draw_jpg(img_path):\n30 ​\n31 img_pic = resize(Image.open(img_path))\n32 img_name = os.path.basename(img_path)\n33 out_file = './result/' + img_name.split('.') + '.xlsx'\n34 if os.path.exists(out_file):\n35 os.remove(out_file)\n36 ​\n37 workbook = openpyxl.Workbook()\n38 worksheet = workbook.active\n39 ​\n40 width, height = img_pic.size\n41 ​\n42 for w in range(1, width + 1):\n43 ​\n44 for h in range(1, height + 1):\n45 if img_pic.mode == 'RGB':\n46 r, g, b = img_pic.getpixel((w - 1, h - 1))\n47 elif img_pic.mode == 'RGBA':\n48 r, g, b, a = img_pic.getpixel((w - 1, h - 1))\n49 ​\n50 hex_rgb = int_to_16(r) + int_to_16(g) + int_to_16(b)\n51 ​\n52 cell = worksheet.cell(column=w, row=h)\n53 ​\n54 if h == 1:\n55 _w = cell.column\n56 _h = cell.col_idx\n57 # 调整列宽\n58 worksheet.column_dimensions[_w].width = 1\n59 # 调整行高\n60 worksheet.row_dimensions[h].height = 6\n61\n62 cell.fill = fills.PatternFill(fill_type=\"solid\", fgColor=hex_rgb)\n63 ​\n64 print('write in:', w, ' | all:', width + 1)\n65 print('saving...')\n66 workbook.save(out_file)\n67 print('success!')\n68 ​\n69 if __name__ == '__main__':\n70 draw_jpg('mona-lisa.jpg')```\n\nhttps://www.cnblogs.com/Qqun821460695/p/11941700.html\n\n「点点赞赏,手留余香」\n\n还没有人赞赏,快来当第一个赞赏的人吧!\n\nTensorFlow\n0 条回复 A 作者 M 管理员\n所有的伟大,都源于一个勇敢的开始!" ]
[ null, "https://img-blog.csdnimg.cn/20191127133810269.png", null, "https://img-blog.csdnimg.cn/20191127134331902.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.52090204,"math_prob":0.96767944,"size":4056,"snap":"2019-51-2020-05","text_gpt3_token_len":2058,"char_repetition_ratio":0.10340573,"word_repetition_ratio":0.06903765,"special_character_ratio":0.33136094,"punctuation_ratio":0.17541437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614024,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T21:29:46Z\",\"WARC-Record-ID\":\"<urn:uuid:051223dd-6a47-452f-a290-c9d23ab5c151>\",\"Content-Length\":\"64503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:521e7dd6-3500-4235-bceb-43cd5a4e8528>\",\"WARC-Concurrent-To\":\"<urn:uuid:a572fee3-3e65-44ab-963e-0250f50e4da2>\",\"WARC-IP-Address\":\"47.95.226.97\",\"WARC-Target-URI\":\"https://www.lizenghai.com/archives/34576.html\",\"WARC-Payload-Digest\":\"sha1:TTCNS6UF7MDJPL5VSZDHE4BQ4YMH6YLD\",\"WARC-Block-Digest\":\"sha1:TSBFPKGLEHGLZQ75Y2JXYSD4QQT3YFHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540502120.37_warc_CC-MAIN-20191207210620-20191207234620-00080.warc.gz\"}"}
https://www.causeweb.org/cause/statistical-topic/graphical-displays?page=3
[ "Sorry, you need to enable JavaScript to visit this website.\n\n# Graphical Displays\n\n• ### Geometric Models\n\nThis online, interactive lesson on geometric models provides examples, exercises, and applets which include Buffon's Problems, Bertrand's Paradox, and Random Triangles.\n\n• ### Probability, Mathematical Statistics, and Stochastic Processes\n\nThis online, interactive lesson on finite sampling models provides examples, exercises, and applets that include hypergeometric distribution, multivariate hypergeometric distribution, order statistics, the matching problem, the birthday problem, and the coupon collector problem.\n\n• ### Renewal Processes Apps\n\nThis online, interactive lesson on the renewal processes provides examples, exercises, and applets which include renewal equations and renewal limit theorems.\n\n• ### Probability Spaces\n\nThis online, interactive lesson on probability spaces provides examples, exercises, and applets that cover conditional probability, independence, and several modes of convergence that are appropriate for random variables. This section also covers probability space, the paradigm of a random experiment and its mathematical model as well as sample spaces, events, random variables, and probability measures.\n\n• ### Markov Chains\n\nThis online, interactive lesson on Markov chains provides examples, exercises, and applets that cover recurrence, transience, periodicity, time reversal, as well as invariant and limiting distributions.\n• ### Free Statistics Software\n\nThis site is a collection of Web-enabled scientific services & applications including Equation Plotter Software, Scientific Forecasting Software, Multiple Regression Software, Descriptive Statistics Software, Statistical Hypothesis Testing Software, Sample Size Software, and XML-RPC PHP client.\n\n• ### Theoretical Underpinnings of the Bootstrap\n\nThis page discusses the theory behind the bootstrap. It discusses the empirical distribution function as an approximation of the distribution function. It also introduces the parametric bootstrap.\n• ### Chi-Square Sampling Distribution Generator\n\nThis page generates a graph of the Chi-Square distribution and displays the associated probabilities. Users enter the degrees of freedom (between 1 and 20, inclusive) upon opening the page." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85482705,"math_prob":0.8817845,"size":1735,"snap":"2019-51-2020-05","text_gpt3_token_len":320,"char_repetition_ratio":0.118428655,"word_repetition_ratio":0.20089285,"special_character_ratio":0.1682997,"punctuation_ratio":0.17037037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.976384,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T22:56:23Z\",\"WARC-Record-ID\":\"<urn:uuid:6abaa8bc-61a9-4ad0-baed-68f31e4ac785>\",\"Content-Length\":\"55628\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3be5a031-0f55-4271-ae17-dd85a3faba66>\",\"WARC-Concurrent-To\":\"<urn:uuid:76cb6a89-7029-4c90-b676-232942a16f2f>\",\"WARC-IP-Address\":\"128.118.3.115\",\"WARC-Target-URI\":\"https://www.causeweb.org/cause/statistical-topic/graphical-displays?page=3\",\"WARC-Payload-Digest\":\"sha1:2L3CEZY4MLAHGCB4CJVP2FF2ZUHNL2BX\",\"WARC-Block-Digest\":\"sha1:GTGEFOE3GMUORYKTFBTWQSS66JV6W2LC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250595282.35_warc_CC-MAIN-20200119205448-20200119233448-00490.warc.gz\"}"}
https://portrait.gitee.com/ryvius_key/hand-keras-yolo3-recognize/blob/master/predict_beyes.py
[ "## ryvius_key / hand-keras-yolo3-recognize\n\nExplore and code with more than 6 million developers,Free private repositories !:)\npredict_beyes.py 1022 Bytes\ncungudafa authored 2020-08-08 09:50 . sign\n# -*- coding: utf-8 -*-\nimport os\nfrom cv2 import cv2\nimport time\nimport numpy as np\n\nfrom sklearn.externals import joblib\n\nfrom pose_hand import getImgInfo\nfrom yolo import YOLO\nfrom pose.coco import general_coco_model\n\n# coco\nmodelpath = \"model/\"\nstart = time.time()\npose_model = general_coco_model(modelpath) # 1.加载模型\nprint(\"[INFO]Pose Model loads time: \", time.time() - start)\n# yolo\nstart = time.time()\n_yolo = YOLO() # 1.加载模型\nprint(\"[INFO]yolo Model loads time: \", time.time() - start)\n\npath = \"D:/myworkspace/JupyterNotebook/hand-keras-yolo3-recognize/docs/wangyu_hand_img/\"\n\nX_test = [path+\"movehouse_37.jpg\",path+\"movehouse_65.jpg\"]\n\n# 测试集\nXX_test = []\nfor i in X_test:\nhist,_ = getImgInfo(image, pose_model, _yolo)\n\nXX_test.append(hist)\n\npredictions_labels = clf.predict(XX_test)\n\n# 使用测试集预测结果\nprint(u'预测结果:')\nprint(predictions_labels)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58717424,"math_prob":0.71367145,"size":1279,"snap":"2021-31-2021-39","text_gpt3_token_len":357,"char_repetition_ratio":0.11372549,"word_repetition_ratio":0.03846154,"special_character_ratio":0.2556685,"punctuation_ratio":0.1627907,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97775954,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-16T22:57:16Z\",\"WARC-Record-ID\":\"<urn:uuid:3122872d-9060-461a-84fc-8c19d68c72ca>\",\"Content-Length\":\"87283\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4763dbbf-e1be-40ef-9410-f632dd2ad553>\",\"WARC-Concurrent-To\":\"<urn:uuid:15e1460d-a74e-46f0-9fc8-d7dccb9f05da>\",\"WARC-IP-Address\":\"69.28.62.188\",\"WARC-Target-URI\":\"https://portrait.gitee.com/ryvius_key/hand-keras-yolo3-recognize/blob/master/predict_beyes.py\",\"WARC-Payload-Digest\":\"sha1:QGXLWP42ZHD72OWX6ESLNQPVEPPQOUIT\",\"WARC-Block-Digest\":\"sha1:Z2OOOOCPBMHFM42KCBJBBHTRHCKLOHYG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780053759.24_warc_CC-MAIN-20210916204111-20210916234111-00431.warc.gz\"}"}
https://topbitzlej.web.app/mclendon67189lifa/how-to-determine-average-annual-rate-of-return-818.html
[ "How to determine average annual rate of return\n\nSince most investments' annual returns vary from year to year, the CAGR calculation averages the good years' and bad years' returns into one return percentage\n\nHow to Calculate the Average Annual Rate of Return in Excel. What is Annual Rate of Return? The annual rate of return for an investment is the percentage change of the total dollar amount from one year to Average Annual Rate of Return. Example Average Annual Rate of Return. Annual Rate of Return To calculate the compound average return, we first add 1 to each annual return, which gives us 1.15, 0.9 and 1.05, respectively. We then multiply those figures together and raise the product to the power of one-third to adjust for the fact that we have combined returns from three periods. Calculating an average annual return is much simpler than the average annual rate of return, which uses a  geometric average instead of a regular mean.  The formula is: [(1+r 1) x (1+r 2) x (1+r 3) But that 10% is the return for the whole 3 years. You want to know what equivalent rate you compounded at annually, in order to end up making 10% total. Annualized Return = (End / Beginning)(1 / Num Years) - 1 Or, continuing our example above: Determine the number of years the investor kept the investment. In our example, the investor held the stock for five years. Divide the rate of return by the number of years the investor held the shares to calculate the average rate of return. In our example, 37.5 percent divided by 5 years equals 7.5 percent per year. Use KeyBank’s annual rate of return calculator to determine the annual return of a known initial amount, a stream of deposits, plus a known final future value. How Do You Calculate Annual Rate of Return? Divide the ending value by the beginning value. Start with the total return, and divide it by the amount that was initially invested. For example Take the quotient to the power of one over the number of years the investment was held. For example, take\n\nDec 19, 2014 Compounded annual growth rates versus mean annual growth rate. I know if you have a 37 percent loss in your first year of analysis vs. a 37 percent If you take the simple average of those four years individual returns,\n\nThis not only includes your investment capital and rate of return, but inflation, 1970 to December 31st 2019, the average annual compounded rate of return for   Annualized Return Rate. While the percentage is a better at giving you context for the return, it still doesn't include the time element. Obviously, the longer  One way to measure your 401(k) plan's performance is to calculate the compound annual growth rate, which measures your average annual return. Using the  May 22, 2019 Annual return for the first 3 years was 15%, -5% and 10%. Suppose all the return results from capital gain. The arithmetic average return in the  Mar 11, 2020 Whenever I talk about investing in stocks, I usually suggest that you can earn a 7 % annual return on average. That percentage is based on a  There's no CAGR function in Excel. However, simply use the RRI function in Excel to calculate the compound annual growth rate (CAGR) of an investment over a\n\nTo determine the rate of return, first calculate the amount of dividends he received over the two-year period: 10 shares x (\\$1 annual dividend x 2) = \\$20 in dividends from 10 shares Next, calculate how much he sold the shares for:\n\nThe method of calculation can make a significant difference in your true rate of return. To calculate the compound average return, we first add 1 to each annual  Jan 31, 2020 As a measure of return, the yearly rate of return is rather limiting because it delivers only a percentage increase over a single, one-year period. By\n\nHow to understand, measure and compare the rate of return on different investments. People refer to it as the Compound Annual Growth rate (CAGR), Effective AVERAGE returns (arithmetic vs geometric) : You know how to calculate an\n\nThe Accounting Rate of Return formula is as follows: ARR = average annual profit / average investment. Of course, that doesn't mean too much on its own,  This not only includes your investment capital and rate of return, but inflation, 1970 to December 31st 2019, the average annual compounded rate of return for   Annualized Return Rate. While the percentage is a better at giving you context for the return, it still doesn't include the time element. Obviously, the longer  One way to measure your 401(k) plan's performance is to calculate the compound annual growth rate, which measures your average annual return. Using the\n\nJan 18, 2013 But if 12% isn't a reasonable rate of return on the money you invest, then what is? You need to know how/why an investment actually rises in value. an average annual return of 9.70% and the 20-year average is 5.98%.\n\nJan 31, 2020 As a measure of return, the yearly rate of return is rather limiting because it delivers only a percentage increase over a single, one-year period. By  Apr 22, 2019 Calculating an average annual return is much simpler than the average annual rate of return, which uses a geometric average instead of a\n\nDec 3, 2018 Just be aware that average annual return is not the same as average annual rate of return. In its most basic mathematical formula, you take the  Jan 18, 2013 But if 12% isn't a reasonable rate of return on the money you invest, then what is? You need to know how/why an investment actually rises in value. an average annual return of 9.70% and the 20-year average is 5.98%. The dividend rate specifies what percentage of an invested amount is paid to the creditor at regular time intervals. In this Annualized Rate of Return Calculator,  The annual percentage rate (APR) that you are charged on a loan may not be the In this video, we calculate the effective APR based on compounding the APR daily. APY is the actual return you are getting once you factor in compounding. Using your idea of an average, to find the average velocity we'd want to measure the velocity at a bunch of (evenly spaced) points in that interval, and find the" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9481181,"math_prob":0.9816998,"size":5917,"snap":"2022-05-2022-21","text_gpt3_token_len":1325,"char_repetition_ratio":0.19414848,"word_repetition_ratio":0.32627526,"special_character_ratio":0.23255028,"punctuation_ratio":0.09957447,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981233,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T08:04:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d65b42a5-ac7d-4fbd-bb48-a3182599348a>\",\"Content-Length\":\"18648\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0de180cd-1f59-4b02-95bb-29e92e1b9bbe>\",\"WARC-Concurrent-To\":\"<urn:uuid:b64a1711-2df4-416c-9063-b005cfdb866d>\",\"WARC-IP-Address\":\"199.36.158.100\",\"WARC-Target-URI\":\"https://topbitzlej.web.app/mclendon67189lifa/how-to-determine-average-annual-rate-of-return-818.html\",\"WARC-Payload-Digest\":\"sha1:KXDX3KK6EQDB45XO3WOIEWI7CMCTRR3Q\",\"WARC-Block-Digest\":\"sha1:RUYHNL6RBLPG7NA4VAQQ26JRJBCS446P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301730.31_warc_CC-MAIN-20220120065949-20220120095949-00334.warc.gz\"}"}
https://ilja-schmelzer.de/hidden-variables/printthread.php?tid=45
[ "", null, "About surrealistic trajectories in dBB theory - Printable Version +- Hidden Variables (https://ilja-schmelzer.de/hidden-variables) +-- Forum: Foundations of Quantum Theory (https://ilja-schmelzer.de/hidden-variables/forumdisplay.php?fid=3) +--- Forum: de Broglie-Bohm theory (Bohmian mechanics) and other Hidden Variable Theories (https://ilja-schmelzer.de/hidden-variables/forumdisplay.php?fid=6) +--- Thread: About surrealistic trajectories in dBB theory (/showthread.php?tid=45) About surrealistic trajectories in dBB theory - secur - 05-22-2016 This is interesting: new paper that disputes 1992's \"ESSW\", Englert et al, http://www.degruyter.com/view/j/zna.1992.47.issue-12/zna-1992-1201/zna-1992-1201.xml. The paper from Mahler et al (Aephraim Steinberg is the only author I've heard of), \"Experimental nonlocal and surreal Bohmian trajectories\", http://advances.sciencemag.org/content/2/2/e1501466 came out in February but I noticed it in a popular write-up from 5 days ago at https://www.quantamagazine.org/20160517-pilot-wave-theory-gains-experimental-support/. Of course it tells us nothing we didn't already know but it's worth checking out. Good to see dBB treated fairly in popular press. RE: About surrealistic trajectories in dBB theory - Schmelzer - 05-22-2016 Thanks very much for the links, this is really helpful, I would have to find the links myself to comment yet another attack by Lubos Motl, in this case not directly against dBB theory but against the article in the Quanta Magazine. The article is quite fine, the only inaccuracy I have identified is Quote:... their velocities at any moment fully determined by the pilot wave, which in turn depends on the wave function. where the \"pilot wave\" is the same as the wave function, in fact, it is simply the original name of the wave function, the one used by de Broglie. But even this use can be defended, given that the pilot wave is the wave function on configuration space, but \"wave function\" is defined also on momentum space, thus, is a more general concept. What is Motl's objection? The article is about a paper Mahler et al., Experimental nonlocal and surreal Bohmian trajectories, Sci. Adv. 2016; 2 : e1501466 19 February 2016. Motl quotes the abstract of this paper, which says Quote:We have verified the effect pointed out by ESSW that for a WWM with a delayed readout, Bohmian trajectories originating at the lower slit may be accompanied by WWM results associated with either the upper or the lower slit. However, this surreal behavior is merely the flip side of the nonlocality we also demonstrated. and comments this with Quote:So an experiment by ESSW was done and the predictions were confirmed. and, given that the ESSW paper can be considered as being somehow against dBB theory, it follows that the result is a refutation of dBB theory. But in the Mahler et al. paper itself we read the following: Quote:Englert, Scully, Süssmann, and Walther (ESSW) (12) asserted that in the presence of such a Welcher Weg measurement (WWM) device, the particle’s Bohmian trajectories can display seemingly contradictory behavior: There are instances when the particle’s Bohmian trajectory goes through one slit, and yet the WWM result indicates that it had gone through the other slit. ESSW concluded that these trajectories predicted by Bohmian mechanics could not correspond to reality and they dubbed them “surreal trajectories.” This serious assertion was discussed at length in the literature (13–17), after which a resolution of this seeming inconsistency was proposed by Hiley et al. (18). Here, we present an experimental validation of this resolution, in which the nonlocality of Bohmian mechanics comes to the fore. Here, (18) refers to B. J. Hiley, R. Callaghan, O. Maroney, Quantum trajectories, real, surreal or an approximation to a deeper process? ArXiv:quant-ph/0010020 (2000). Which is, as one can easily see (if Hiley as one of the authors is not sufficient for this) proposes a resolution which supports dBB theory. And it is this pro-dBB resolution which is supported by the observation. So, the Bohmians, who, I suppose, tend to read important papers and not only the abstracts (as string theorists, with their thousands of papers, are essentially forced to do), will know that it is their beloved theory which is supported. Whatever, Lubos Motl explicitly asks for help: Quote:So how can anyone ever say that this experiment brings \"new support\" for Bohmian theory (there has never been any old support, let alone new support)? It's probably meant to be justified by the sentence (and related comments): Quote:However, this surreal behavior is merely the flip side of the nonlocality we also demonstrated. What? ;-) The ESSW paper and the serious observations in it don't depend on any \"nonlocality\" whatsoever. The word doesn't appear in the ESSW paper at all (the highly problematic term \"weak measurement\" doesn't appear there, either). Two trajectories either agree or disagree. And yes, they disagree. That's the problem ESSW found and Mahler et al. confirmed. What does it have to do with nonlocality? Hm, this looks similar to the problem that one should be able to explain things also to the own grandmother, as a criterion that one has understood the problem oneself. Is one able to explain the problem to Lubos Motl, without going into too many details of the Hiley et al. paper (32 pages) and the Mahler et al. paper (8 pages)? That's difficult, of course, but, whatever, let's try. There is the simple symmetry rule of the simple double slit experiment, everything is symmetric for $$z \\to -z$$, the Bohmian velocity too, thus, $$v^z(z=0)=0$$ and no particle can switch sides. But we do not have here this simple situation. We have a device which measures the \"which path\" information. So, let's describe the result of this \"which path\" measurement by $$\\psi^{which}(x)$$. Then we have to consider the full wave function, which is $\\psi = \\psi_{up}(z) \\psi^{which}_{up}(x) + \\psi_{down}(z) \\psi^{which}_{down}(x).$ And if we now try a symmetry $$z \\to -z$$, we also have to change the measured which path information correspondingly, $$\\psi^{which}_{down}\\leftrightarrow\\psi^{which}_{up}$$. Moreover, we have different Bohmian velocities at $$z=0$$, namely all values of $$v^z(x,0)$$. That they have to sum up to 0 does in no way mean that they have to be zero themselves. So, the \"Bohmian prediction\" that up remain up holds only in the simple case, where no \"which path\" information is measured. If the hole is measured, thus, if $$\\psi^{which}_{up}(x)$$ and $$\\psi^{which}_{down}(x)$$ do not overlap, then the symmetry gives us nothing, and the Bohmian particle follows the same wave function which we have to use if only one hole would have been open. Which easily allows him to cross the z=0 border. But what if $$\\psi^{which}_{up}(x)$$ and $$\\psi^{which}_{down}(x)$$ do overlap? What if they are, say, $$\\delta(x-x_1)\\pm\\delta(x-x_2)$$, so that their support is even identical? Then, from the Bohmian point of view, we have to distinguish the two cases $$x(t)=x_1$$ and $$x(t)=x_2$$. In above cases, we have a an effective superposition, simply with a different amplitude $$\\pm1$$. And in above cases we obtain a symmetry, and the \"up remains up\" rule remains valid. But what if we delay the \"which path\" measurement? That means, we make a measurement immediately, so that we obtain a state $\\psi = \\psi_{up}(z) \\psi^{which}_{up}(x) + \\psi_{down}(z) \\psi^{which}_{down}(x),$ but we do not complete the measurement immediately, so that it becomes a macroscopic, irreversible one, but leave it some time in the $$\\delta(x-x_1)\\pm\\delta(x-x_2)$$ state, and only some time later decide that we measure the \"which path\"? In this case, we may have to apply one part of the considerations before and the other one after the measurement. Which requires some non-locality. The usual weak non-locality, which cannot be used to transfer any information, because it is hidden in the correlations, and one needs the complete information from above parts to see that it is necessary to explain the observations. So, I hope this helps to understand how non-locality becomes relevant for understanding ESSW. How all this brings \"new support\"? In the same way as every experiment which does not falsify a given theory is, sloppily, named \"new support\" to a theory in popular literature. Nothing serious, given that we have an equivalence theorem. Or, maybe, more? At the actual moment, I think, yes, even much more. I have had an idea, today, which, if it really works, would give really much more. I have to write it down, and, given the quite exceptional nature, the probability is yet high enough that it is simply wrong. I have had a lot of such ideas, and rejected them later because they did not work, so this would be nothing new. But at least today it looks so consistent to me that I even risk to announce it in this way. So, what I announce here is the following: Or I will present in some time a paper with a really interesting and unexpected claim, or (more probable) I will find the error in this construction - and, then, promise to explain it, as an illustration of the everyday work of scientists, which, quite often, have nice ideas but, later, find they don't work." ]
[ null, "https://ilja-schmelzer.de/hidden-variables/images/logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9304839,"math_prob":0.9238512,"size":8440,"snap":"2019-35-2019-39","text_gpt3_token_len":2093,"char_repetition_ratio":0.11119014,"word_repetition_ratio":0.018925056,"special_character_ratio":0.24454977,"punctuation_ratio":0.13447867,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97968215,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T23:25:37Z\",\"WARC-Record-ID\":\"<urn:uuid:b23f3250-2647-484c-b83d-8fa6bc89ecf7>\",\"Content-Length\":\"13273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c6500b9-9c02-42ee-b4cb-c7524794a67c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d029ea0-c7f5-468f-a82d-7eb0815990f7>\",\"WARC-IP-Address\":\"46.30.213.216\",\"WARC-Target-URI\":\"https://ilja-schmelzer.de/hidden-variables/printthread.php?tid=45\",\"WARC-Payload-Digest\":\"sha1:SE3O6LKEGQZPO45B7CEDHQLX357P2SBT\",\"WARC-Block-Digest\":\"sha1:GHBIYO6O66C5SDWEUFA3K3KKPKEAWGPM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330907.46_warc_CC-MAIN-20190825215958-20190826001958-00160.warc.gz\"}"}
https://www.got-it.ai/solutions/excel-chat/excel-tutorial/pivot-table/vba-pivot-table
[ "", null, "", null, "Get instant live expert help with Excel or Google Sheets", null, "“My Excelchat expert helped me in less than 20 minutes, saving me what would have been 5 hours of work!”\n\n#### Post your problem and you’ll get Expert help in seconds.\n\nYour message must be at least 40 characters\nOur professional Expert are available now. Your privacy is guaranteed.\n\n# How to Create a Pivot Table with VBA in Excel\n\nWe can use VBA to create a pivot table in less time by applying macro codes. In this guide, we will learn how to automate our pivot table using VBA.", null, "Figure 1- How to create a Pivot table with VBA\n\n## Convert data to a table\n\n• We will convert our data to a table by clicking Insert, and then, Table", null, "Figure 2- Click on Table", null, "Figure 3- Create Table dialog box\n\n• We will click OK\n• Next, we will name our table as SalesPivotTable in the name box below file as shown below", null, "Figure 4- Data Table\n\n• We will rename our sheet as “Data”\n\n## Steps to Write the VBA Code\n\n### Declare Variables\n\nWe need to declare the variables in the code to define different aspects:\n\n• PSheet: This implies that we want to create a sheet for a new pivot table\n• DSheet: To apply a datasheet\n• PChache: Apply as a name for pivot table cache\n• PTable: Apply a name for our pivot table\n• PRange: This defines the source data range (the range of our table, A3:F61)\n• LastRow and LastCol: We use this to acquire the last row and column of our data range.", null, "Figure 5 – Declare all variables\n\n### Insert a New Worksheet\n\nWe will create a code for excel to place our Pivot Table in a blank sheet. With this code, we will insert a new worksheet titled “Pivot table.” Our values will be set as PSheet Variable to Pivot table Worksheet and DSheet to source data Worksheet. Whenever we want to change the name of the worksheet, we can do so from the code. We must note that if there is a worksheet named PivotTable, the code will delete it before creating the Pivot Table with VBA Code.", null, "Figure 6 – Insert a New Worksheet\n\n### Defining Data Range\n\nWe will specify the range of the data from our source worksheet. This code will identify an entire data and not a fixed source range. The code will start from the first cell of the first row. It moves down to the last row, and then, to the last column. It will update itself, not minding how large or small our source data is.", null, "Figure 7 – Define data range\n\n### Create Pivot Cache\n\nExcel automatically creates a Pivot table Cache for us without asking. With VBA, we have to write the code for this by first defining a pivot cache through the data source. Also, we will define the cell address of the current inserted worksheet to create the pivot table.", null, "Figure 8 – Create Pivot Cache\n\n### Create a Blank Pivot Table\n\nThis code will enable us to have a Blank Pivot Table before we select the fields that we want. We can alter it within the code at any time.", null, "Figure 9 – Create a Blank Pivot Table\n\n### Insert Rows and Columns\n\nBecause we normally insert rows and columns, in the same manner, we will write code to do so. We will add years and month (Date) to the rows field and Zone to the column field. We will ensure that there is a position number to identify the sequence of fields, especially when we want to add more than one array to the same field.", null, "Figure 10 – Insert Rows and Columns\n\n### Insert Data Field\n\nWe will define the value field for our pivot table. For example, we may use the xlsum to depict the sum values. We will use the (,) separator to identify values as a number.", null, "Figure 11 – Insert Values Data Field\n\n### Format Pivot table\n\nLastly, we will also need a code to format our pivot table especially when we wish to change the formatting style of the Pivot table within the code. For our example, we will apply rows strips and “Pivot Style Medium 9”.", null, "Figure 12 – Format Pivot Table\n\n### Run the Macro code to Create a Pivot table\n\nNow, we have finished creating the VBA Code, we can run our code to create a pivot table.", null, "Figure 13 – Run the Macro code\n\n• When we click on RUN, we will be instantly presented with the Pivot Table field, here, we will select “More Tables”, then Yes", null, "Figure 14 – Select Worksheet\n\n• Lastly, we will select “SalesPivotTable” and press OK", null, "Figure 15 – Finished Pivot Table with Macro Code\n\n### Full Pivot Table Macro Code\n\n`Sub CreatePivottablewithVBA()`\n\n`'Declare Variables`\n\n`Dim PSheet As Worksheet`\n\n`Dim DSheet As Worksheet`\n\n`Dim PCache As PivotCache`\n\n`Dim PTable As PivotTable`\n\n`Dim PRange As Range`\n\n`Dim LastRow As Long`\n\n`Dim LastCol As Long`\n\n`'Insert a New Blank Worksheet`\n\n`On Error Resume Next`\n\n`Application.DisplayAlerts = False`\n\n`Worksheets(\"PivotTable\").Delete`\n\n`Sheets.Add Before:=ActiveSheet`\n\n`ActiveSheet.Name = \"PivotTable\"`\n\n`Application.DisplayAlerts = True`\n\n`Set PSheet = Worksheets(\"PivotTable\")`\n\n`Set DSheet = Worksheets(\"Data\")`\n\n`'Define Data Range`\n\n`LastRow = DSheet.Cells(Rows.Count, 1).End(xlUp).Row`\n\n`LastCol = DSheet.Cells(1, Columns.Count).End(xlToLeft).Column`\n\n`Set PRange = DSheet.Cells(1, 1).Resize(LastRow, LastCol)`\n\n`'Define Pivot Cache`\n\n`Set PCache = ActiveWorkbook.PivotCaches.Create _`\n\n`(SourceType:=xlDatabase, SourceData:=PRange). _`\n\n`CreatePivotTable(TableDestination:=PSheet.Cells(2, 2), _`\n\n`TableName:=\"SalesPivotTable\")`\n\n`'Insert Blank Pivot Table`\n\n`Set PTable = PCache.CreatePivotTable _`\n\n`(TableDestination:=PSheet.Cells(1, 1), TableName:=\"SalesPivotTable\")`\n\n`'Insert Row Fields`\n\n`With ActiveSheet.PivotTables(\"SalesPivotTable\").PivotFields(\"Year\")`\n\n`.Orientation = xlRowField`\n\n`.Position = 1`\n\n`End With`\n\n`With ActiveSheet.PivotTables(\"SalesPivotTable\").PivotFields(\"Month\")`\n\n`.Orientation = xlRowField`\n\n`.Position = 2`\n\n`End With`\n\n`'Insert Column Fields`\n\n`With ActiveSheet.PivotTables(\"SalesPivotTable\").PivotFields(\"Zone\")`\n\n`.Orientation = xlColumnField`\n\n`.Position = 1`\n\n`End With`\n\n`'Insert Data Field`\n\n`With ActiveSheet.PivotTables(\"SalesPivotTable\").PivotFields(\"Amount\")`\n\n`.Orientation = xlDataField`\n\n`.Position = 1`\n\n`.Function = xlSum`\n\n`.NumberFormat = \"#,##0\"`\n\n`.Name = \"Revenue \"`\n\n`End With`\n\n`'Format Pivot Table`\n\n`ActiveSheet.PivotTables(\"SalesPivotTable\").ShowTableStyleRowStripes = True`\n\n`ActiveSheet.PivotTables(\"SalesPivotTable\").TableStyle2 = \"PivotStyleMedium9\"`\n\n`End Sub`\n\n## Instant Connection to an Expert through our Excelchat Service\n\nMost of the time, the problem you will need to solve will be more complex than a simple application of a formula or function. If you want to save hours of research and frustration, try our live Excelchat service! Our Excel Experts are available 24/7 to answer any Excel question you may have. We guarantee a connection within 30 seconds and a customized solution within 20 minutes.", null, "" ]
[ null, "https://www.facebook.com/tr", null, "https://www.got-it.ai/solutions/excel-chat/wp-content/themes/seocms/assets/images/seo/seo-head-cover-opt.jpg", null, "https://www.got-it.ai/solutions/excel-chat/wp-content/themes/seocms/assets/images/seo-avatars/user-52.jpg", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063403/Figure-1-How-to-create-a-Pivot-table-with-VBA.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063404/Figure-2-Click-on-Table.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063407/Figure-3-Create-Table-dialog-box.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063409/Figure-4-Data-Table.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063412/Figure-5-%E2%80%93-Declare-all-variables.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063414/Figure-6-%E2%80%93-Insert-a-New-Worksheet.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063416/Figure-7-%E2%80%93-Define-data-range.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063417/Figure-8-%E2%80%93-Create-Pivot-Cache.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063419/Figure-9-%E2%80%93-Create-a-Blank-Pivot-Table.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063421/Figure-10-%E2%80%93-Insert-Rows-and-Columns.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063423/Figure-11-%E2%80%93-Insert-Values-Data-Field.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063424/Figure-12-%E2%80%93-Format-Pivot-Table.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063426/Figure-13-%E2%80%93-Run-the-Macro-code.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063428/Figure-14-%E2%80%93-Select-Worksheet.png", null, "https://d295c5dn8dhwru.cloudfront.net/wp-content/uploads/2019/03/14063430/Figure-15-%E2%80%93-Finished-Pivot-Table-with-Macro-Code.png", null, "https://secure.gravatar.com/avatar/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.639814,"math_prob":0.804291,"size":6058,"snap":"2020-34-2020-40","text_gpt3_token_len":1481,"char_repetition_ratio":0.16633631,"word_repetition_ratio":0.006116208,"special_character_ratio":0.22086497,"punctuation_ratio":0.114260405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96027404,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T13:11:14Z\",\"WARC-Record-ID\":\"<urn:uuid:7f14ef73-94e0-4d24-b4e1-8ad4d9f75cb7>\",\"Content-Length\":\"94905\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb28f09b-5952-4d12-86a4-cfc7f20796b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:00215a89-dfec-4fb1-9bb7-8515cdd20138>\",\"WARC-IP-Address\":\"99.84.181.15\",\"WARC-Target-URI\":\"https://www.got-it.ai/solutions/excel-chat/excel-tutorial/pivot-table/vba-pivot-table\",\"WARC-Payload-Digest\":\"sha1:UWODHHAEBSPKIW3VVYPF4BAVOFPRVVAY\",\"WARC-Block-Digest\":\"sha1:IUELCLCAEACAQGGLU66D7CSV53QILS6Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401600771.78_warc_CC-MAIN-20200928104328-20200928134328-00612.warc.gz\"}"}
http://reportz725.web.fc2.com/essay/essays-20164171165/
[ "Date: 15.11.2016 / Article Rating: 5 / Votes: 517\nEasy way to solve word problems\nHome >> Uncategorized >> Easy way to solve word problems\n\nEasy way to solve word problems\n\nDec/Sat/2016 | Uncategorized\n\nTricks for Solving Algebra Word Problems - TakeLessons", null, "Easy Ways to Solve Math Problems (with Pictures) - wikiHow", null, "Tricks for Solving Algebra Word Problems - TakeLessons", null, "What is the best way to solve math word problems? - Quora", null, "How to Become a Word Problem Expert - Shmoop", null, "Solving Math Word Problems: explanation and exercises", null, "How to Easy way to solve word problems � Math", null, "Simple Steps for Solving Word Problems - dummies", null, "Solving Math Word Problems: explanation and exercises", null, "Easy Ways to Solve Math Problems (with Pictures) - wikiHow", null, "How to Become a Word Problem Expert - Shmoop", null, "Tricks for Solving Algebra Word Problems - TakeLessons", null, "How to Become a Word Problem Expert - Shmoop", null, "What is the best way to solve math word problems? - Quora", null, "What is the best way to solve math word problems? - Quora", null, "How to Become a Word Problem Expert - Shmoop", null, "Easy system to solve word problems wmv - YouTube", null, "How to solve math word problems - without giving yourself a headache", null, "Five Proven Steps on How to Solve Math Word Problems Quickly", null, "How to Become a Word Problem Expert - Shmoop", null, "", null, "" ]
[ null, "https://s-media-cache-ak0.pinimg.com/736x/3d/cc/ae/3dccae814a52a39fca0a1800723a48ce.jpg", null, "https://s-media-cache-ak0.pinimg.com/236x/8f/8f/4b/8f8f4b00023b527e4cba1c2a92e9426b.jpg", null, "http://www.shelovesmath.com/wp-content/uploads/2012/08/Another-Mixture-Word-Problem.png", null, "http://0.tqn.com/d/math/1/0/O/2/algbirthprob.gif", null, "https://www.whatihavelearnedteaching.com/wp-content/uploads/2015/09/Word-Problems-Preview.006.jpg", null, "http://images.slideplayer.com/16/4885968/slides/slide_20.jpg", null, "http://www.mathplayground.com/images/SolveIt.gif", null, "https://s-media-cache-ak0.pinimg.com/736x/ea/a0/45/eaa045aaf5e6db5ed6caf3467d1acc23.jpg", null, "https://s-media-cache-ak0.pinimg.com/736x/3d/cc/ae/3dccae814a52a39fca0a1800723a48ce.jpg", null, "http://www.math-salamanders.com/image-files/3rd-grade-math-word-problems-fractions-1-captain-salamanders-journey.gif", null, "http://2.bp.blogspot.com/-m3DW4IGUvmI/UJ1i6urabfI/AAAAAAAABv8/n68sPyHO7C4/s640/Problem Solving Simple Steps.PNG", null, "http://www.beambles.com/images/products/enlarged/Basics-Builders-Mathematics-Step-By-Step-Word-Problems-Primary-3-A-Singapore-Math-International.gif", null, "http://missgibsonsfractionaction.weebly.com/uploads/1/9/4/1/19410365/4558071_orig.jpg", null, "https://s-media-cache-ak0.pinimg.com/236x/93/4e/1e/934e1eff78d48dc2e8a27c2e1d5e9a33.jpg", null, "https://i.ytimg.com/vi/zDch07vVBxI/maxresdefault.jpg", null, "http://2.bp.blogspot.com/-esXpwVT_ce0/TywRflM7RhI/AAAAAAAAAVs/XwYAfb8VJxE/s1600/Screen shot 2012-02-03 at 10.54.34 AM.png", null, "http://3.bp.blogspot.com/-twMIQMcjYew/T4teNodINyI/AAAAAAAAABc/ca1ipXxB8oM/s1600/word problems2.png", null, "http://www.math-salamanders.com/images/free-math-word-problems-ratio-problems-3ans.gif", null, "http://images.slideplayer.com/16/4885968/slides/slide_20.jpg", null, "https://s-media-cache-ak0.pinimg.com/236x/78/f0/bf/78f0bfb417ea0e88e6718b9163855263.jpg", null, "http://media.fc2.com/counter_img.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76569647,"math_prob":0.6791776,"size":1181,"snap":"2019-43-2019-47","text_gpt3_token_len":277,"char_repetition_ratio":0.20560747,"word_repetition_ratio":0.45588234,"special_character_ratio":0.20660457,"punctuation_ratio":0.025641026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9733765,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,10,null,1,null,6,null,7,null,1,null,6,null,4,null,5,null,10,null,null,null,6,null,1,null,4,null,null,null,2,null,1,null,6,null,9,null,6,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T15:16:52Z\",\"WARC-Record-ID\":\"<urn:uuid:a3c34a17-b2cf-4482-9426-ef2bf88365bd>\",\"Content-Length\":\"39322\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb0607a0-b324-4a00-a0e8-bc18839e7ba8>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd27b5ba-bd80-41df-8c6f-83a855994eeb>\",\"WARC-IP-Address\":\"104.244.99.43\",\"WARC-Target-URI\":\"http://reportz725.web.fc2.com/essay/essays-20164171165/\",\"WARC-Payload-Digest\":\"sha1:4HEWFJT745OWEHI6SLU3QO2RFSMPJIUG\",\"WARC-Block-Digest\":\"sha1:GHOFBMMEWXPABAUD4B23NUKTEAKMZCQS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987834649.58_warc_CC-MAIN-20191023150047-20191023173547-00356.warc.gz\"}"}
https://giasutamtaiduc.com/average-rate-of-change-formula.html
[ "# ✅ Average Rate of Change Formula ⭐️⭐️⭐️⭐️⭐\n\n3/5 - (2 bình chọn)\n\nMục Lục\n\n## How To Find Average Rate Of Change\n\nWhenever we wish to describe how quantities change over time is the basic idea for finding the average rate of change and is one of the cornerstone concepts in calculus.\n\nSo, what does it mean to find the average rate of change?\n\nThe average rate of change finds how fast a function is changing with respect to something else changing.\n\nIt is simply the process of calculating the rate at which the output (y-values) changes compared to its input (x-values).\n\nHow do you find the average rate of change?\n\nWe use the slope formula!\n\nTo find the average rate of change, we divide the change in y (output) by the change in x (input). And visually, all we are doing is calculating the slope of the secant line passing between two points.\n\now To Find The Slope Of A Secant Line Passing Through Two Points\n\nNow for a linear function, the average rate of change (slope) is constant, but for a non-linear function, the average rate of change is not constant (i.e., changing).\n\nLet’s practice finding the average rate of a function, f(x), over the specified interval given the table of values as seen below.\n\n### Practice Problem #1\n\nFind The Average Rate Of Change Of The Function Over The Given Interval\n\n### Practice Problem #2\n\nHow To Find Average Rate Of Change Over An Interval\n\nSee how easy it is?\n\nAll you have to do is calculate the slope to find the average rate of change!\n\n## Average Vs Instantaneous Rate Of Change\n\nBut now this leads us to a very important question.\n\nWhat is the difference is between Instantaneous Rate of Change and Average Rate of Change?\n\nWhile both are used to find the slope, the average rate of change calculates the slope of the secant line using the slope formula from algebra. The instantaneous rate of change calculates the slope of the tangent line using derivatives.\n\nUsing the graph above, we can see that the green secant line represents the average rate of change between points P and Q, and the orange tangent line designates the instantaneous rate of change at point P.\n\nSo, the other key difference is that the average rate of change finds the slope over an interval, whereas the instantaneous rate of change finds the slope at a particular point.\n\n### How To Find Instantaneous Rate Of Change\n\nAll we have to do is take the derivative of our function using our derivative rules and then plug in the given x-value into our derivative to calculate the slope at that exact point.\n\nFor example, let’s find the instantaneous rate of change for the following functions at the given point.\n\n### Tips For Word Problems\n\nBut how do we know when to find the average rate of change or the instantaneous rate of change?\n\nWe will always use the slope formula when we see the word “average” or “mean” or “slope of the secant line.”\n\nOtherwise, we will find the derivative or the instantaneous rate of change. For example, if you see any of the following statements, we will use derivatives:\n\n• Find the velocity of an object at a point.\n• Determine the instantaneous rate of change of a function.\n• Find the slope of the tangent to the graph of a function.\n• Calculate the marginal revenue for a given revenue function.\n\n### Harder Example\n\nAlright, so now it’s time to look at an example where we are asked to find both the average rate of change and the instantaneous rate of change.\n\nNotice that for part (a), we used the slope formula to find the average rate of change over the interval. In contrast, for part (b), we used the power rule to find the derivative and substituted the desired x-value into the derivative to find the instantaneous rate of change.\n\nNothing to it!\n\n## Particle Motion\n\nBut why is any of this important?\n\nHere’s why.\n\nBecause “slope” helps us to understand real-life situations like linear motion and physics.\n\nThe concept of Particle Motion, which is the expression of a function where its independent variable is time, t, enables us to make a powerful connection to the first derivative (velocity)second derivative (acceleration), and the position function (displacement).\n\nThe following notation is commonly used with particle motion.\n\n### Ex) Position – Velocity – Acceleration\n\nLet’s look at a question where we will use this notation to find either the average or instantaneous rate of change.\n\n## Find the average rate of change of a function\n\nThe price change per year is a rate of change because it describes how an output quantity changes relative to the change in the input quantity. We can see that the price of gasoline in the table above did not change by the same amount each year, so the rate of change was not constant. If we use only the beginning and ending data, we would be finding the average rate of change over the specified period of time. To find the average rate of change, we divide the change in the output value by the change in the input value.\n\nOther examples of rates of change include:\n\n• A population of rats increasing by 40 rats per week\n• A car traveling 68 miles per hour (distance traveled changes by 68 miles each hour as time passes)\n• A car driving 27 miles per gallon (distance traveled changes by 27 miles for each gallon)\n• The current through an electrical circuit increasing by 0.125 amperes for every volt of increased voltage\n• The amount of money in a college account decreasing by \\$4,000 per quarter\n\n### A GENERAL NOTE: RATE OF CHANGE\n\nA rate of change describes how an output quantity changes relative to the change in the input quantity. The units on a rate of change are “output units per input units.”\n\nThe average rate of change between two input values is the total change of the function values (output values) divided by the change in the input values.\n\n### What is average rate of change?\n\nThe average rate of change of function fff over the interval axb, is less than or equal to, x, is less than or equal to, b is given by this expression:\n\nIt is a measure of how much the function changed per unit, on average, over that interval.It is derived from the slope of the straight line connecting the interval’s endpoints on the function’s graph.\n\n### EXAMPLE 1: COMPUTING AN AVERAGE RATE OF CHANGE\n\nUsing the data in the table below, find the average rate of change of the price of gasoline between 2007 and 2009.\n\n### SOLUTION\n\nIn 2007, the price of gasoline was \\$2.84. In 2009, the cost was \\$2.41. The average rate of change is\n\n### Analysis of the Solution\n\nNote that a decrease is expressed by a negative change or “negative increase.” A rate of change is negative when the output decreases as the input increases or when the output increases as the input decreases.\n\nThe following video provides another example of how to find the average rate of change between two points from a table of values.\n\n## Finding average rate of change\n\n### Solved Examples\n\nQuestion 1: Calculate the average rate of change of a function, f(x) = 3x + 12 as x changes from 5 to 8 .\n\nSolution:\n\nGiven,\nf(x) = 3x + 12\na = 5\nb = 8\n\nf(5) = 3(5) + 12\nf(5) = 15 + 12\nf(5) = 27\n\nf(8) = 3(8) + 12\nf(8) = 24 + 12\nf(8) = 36\n\nThe average rate of change is,\n\nMath Formulas ⭐️⭐️⭐️⭐️⭐" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8662133,"math_prob":0.99017334,"size":7284,"snap":"2021-43-2021-49","text_gpt3_token_len":1726,"char_repetition_ratio":0.21950549,"word_repetition_ratio":0.13155894,"special_character_ratio":0.24615596,"punctuation_ratio":0.08787466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99976116,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T05:08:42Z\",\"WARC-Record-ID\":\"<urn:uuid:d9b16069-01c3-43a3-a541-7b42d07c278c>\",\"Content-Length\":\"72745\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3793f00b-2e65-441d-a151-ad0974a6d73f>\",\"WARC-Concurrent-To\":\"<urn:uuid:fed6c268-77c1-430d-99c3-d115383fc19d>\",\"WARC-IP-Address\":\"103.221.220.118\",\"WARC-Target-URI\":\"https://giasutamtaiduc.com/average-rate-of-change-formula.html\",\"WARC-Payload-Digest\":\"sha1:HBJP4K6PTKJLR7SZTIWTSZDNHHQNQIZB\",\"WARC-Block-Digest\":\"sha1:S75I7JYTHTKU2P6GUKOJM2O6F3EI6AGR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358953.29_warc_CC-MAIN-20211130050047-20211130080047-00544.warc.gz\"}"}
https://www.redips.net/javascript/html5-canvas-example/
[ "# HTML5 canvas example\n\nThe canvas is a new HTML5 element and it is used to draw graphs, charts, animations and other sort of graphics. Actually, canvas is a JavaScript controlled 2D drawing area. This example shows how to create simple JavaScript function and draw circles inside canvas tag. Source code can be downloaded from “Download” icon below post title.\n\n Pause:\n\nThe main component of this example is draw() JavaScript function (see the code below). ctx (2D rendering context) object is initialized in window.onload and is used to render graphic. fillStyle, beginPath, arc and fill are parts of 2D context API.\n\n```function draw() {\n// generate random color\n// see http://www.redips.net/javascript/random-color-generator/\nctx.fillStyle = rndColor();\n// start drawing\nctx.beginPath();\n// draw arc: arc(x, y, radius, startAngle, endAngle, anticlockwise)\nctx.arc(x, y, r, 0, Math.PI * 2, true);\n// fill circle\nctx.fill();\n// set X direction\nif (x + dx > width || x + dx < 0) {\ndx = -dx;\n}\n// set Y direction\nif (y + dy > height || y + dy < 0) {\ndy = -dy;\n}\nif (r + dr > radius * 2 || r + dr < radius) {\ndr = -dr;\n}\n// calculate new x, y and r values\nx += dx;\ny += dy;\nr += dr;\n// if \"run\" variable is true, then set timeout and call draw() again\nif (run) {\nsetTimeout(draw, pause);\n}\n}\n```\n\nIf pause is decreased then animation will be faster. On the other hand, if you click on start button twice then JavaScript engine will start two parallel processes to call draw() function. The result will be twice faster animation regardless to the current pause value.\n\nThis page is modification of excellent Canvas Tutorial – Bounce. Tutorial is divided in steps and guides from drawing a circle on canvas to the simple game in last step. Each step includes an editor and is possible to on-line modify JavaScript code – very impressive.\n\nAnd finally, your browser should be ready for HTML5 to see how this example works. Try with Chrome 10, FireFox 3.6, Opera 11 …\n\n### 4 thoughts on “HTML5 canvas example”\n\n1.", null, "Hi,\nthanks for this fine script. But what’s if I have a specific path the circle must follow? Thanks for your help (if possible). Hvala lega ;-)\n\n2.", null, "@Joe – Canvas is simply said a drawing board – a Coordinate Grid with (0, 0) in upper left corner. This example uses line paths from bound to bound to draw circles. Any other path is possible but you will have to calculate circle centers – for example you can calculalte circle center with Math.sin(x) or Math.cos(x) …\n\nThanks!\n\nPS Pozdrav iz Zagreba!\n\n3.", null, "Hi,\nthank you, I’ll try. And If I’ll have any of the result I need, I’ll write you ;-)\n\nPozdrav iz Vicenza (I) – Osijek (HR)\n\n4.", null, "OK, and I guess your problem (till now) is successfully solved. Cheers!\n:)" ]
[ null, "https://secure.gravatar.com/avatar/a68b2b7f4b1acc6d43cf93c2fc515314", null, "https://secure.gravatar.com/avatar/e8a409e856714a1e7eb73fa29f58947c", null, "https://secure.gravatar.com/avatar/a68b2b7f4b1acc6d43cf93c2fc515314", null, "https://secure.gravatar.com/avatar/e8a409e856714a1e7eb73fa29f58947c", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7039813,"math_prob":0.89353603,"size":2676,"snap":"2020-34-2020-40","text_gpt3_token_len":665,"char_repetition_ratio":0.09131736,"word_repetition_ratio":0.0,"special_character_ratio":0.27204782,"punctuation_ratio":0.14869888,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9689216,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T23:36:26Z\",\"WARC-Record-ID\":\"<urn:uuid:c881f125-b413-4d8b-9d6d-4844e7a1154a>\",\"Content-Length\":\"37759\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:616310cb-3350-4089-8417-90eca9d2d5ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:01ddd37f-5384-4ae2-a9f2-2c31959fb345>\",\"WARC-IP-Address\":\"88.99.92.108\",\"WARC-Target-URI\":\"https://www.redips.net/javascript/html5-canvas-example/\",\"WARC-Payload-Digest\":\"sha1:Y2ITR3FLZGBQC6RLU5DCE4FQV2GG7MQS\",\"WARC-Block-Digest\":\"sha1:JIE42JCFNJZP3OIJYGVQSFIKSY6HPO2G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402093104.90_warc_CC-MAIN-20200929221433-20200930011433-00723.warc.gz\"}"}
https://programming.vip/docs/the-fourth-day-of-c-learning.html
[ "# 1. Code area\n\nIt has two characteristics:\n\n1 is sharing, that is, we click on many of the same files, but only execute that piece of code and will not generate another piece of code.\n\n2 is readable, that is, the code is only readable, but cannot be written, otherwise the data will be modified and the game gold coin will be changed\n\n# 2. Global area\n\n## 1. Variables stored in the global area\n\n1. Global variables\n\n```#include<iostream>\nint a=10;//global variable\nint main()\n{\n\n}\n```\n\n2. Static variables\n\n```static int a=10;\n```\n\n3. String constant\n\n```\"hello world\"\n```\n\n4. Global constants\n\n```#include<iostream>\nconst int a=10;//Global constant\nint main()\n{\n\n}\n```\n\n## 2. Variables not stored in the global area\n\n1. Local constants\n\n```const int a=10;\n```\n\n2. Local variables\n\n```int a=10;\n```\n\n## 3. Summary\n\n1. Generally speaking, I learned the classification of constants, mainly string constants\n\n2.c + + is divided into global area and code area before the program runs\n\n# 3. Stack area\n\nPrecautions for stack area:\n\n1. Don't return the address of the local variable: because the local variable is stored in the stack area, and the data in the stack area is automatically released after the function is executed, we can't continue to use it. You may find that it can be used again, but that's because the compiler helps us keep it automatically once\n\n# 4. Stacking area\n\n```#include <iostream>\nusing namespace std;\nint* func()\n{\nint* a = new int(10);//New opens up a space on the heap area, which is controlled and released by the programmer area, and new returns an address\nreturn a;\n}\nint main()\n{\nint* p = func();\ncout << *p << endl;//p=10\nreturn 0;\n}\n```\n\n# 5.new operator\n\n## 1. Create a small space\n\n```#include <iostream>\nusing namespace std;\nint main()\n{\nint* p = new int(10);\ncout << *p << endl;\nreturn 0;\n}\n```\n\n## 2. Open up a one-dimensional array\n\n```#include <iostream>\nusing namespace std;\nint main()\n{\nint* array = new int;//It means a number of 10 sizes\nfor (int i = 0; i < 10; i++)\n{\narray[i] = 100 + i;\n}\nfor(int i=0;i<10;i++)\n{\ncout << array[i] << \" \";\n}\ndelete[] array;//We're going to release the array like this and add a []\nreturn 0;\n}\n```\n\n# 6. References in C + +\n\n## 1. Reference must be initialized\n\n```int &b;//This is not possible and needs to be initialized\n```\n\n## 2. Once initialized, it cannot be changed\n\n```#include <iostream>\nusing namespace std;\nint main()\n{\nint a = 10;\nint &b = a;\nint c = 20;\nb = c;//This is assignment\ncout << \"a=\" << a << endl;\ncout << \"b=\" << b << endl;\ncout << \"c=\" << c << endl;\nreturn 0;\n}\n```\n\n## 3. Function of reference\n\nWhen exchanging values, in addition to pointer exchange, you can also use referenced operations\n\n```#include <iostream>\nusing namespace std;\nvoid swap(int& a, int& b)//Do it in this way.\n{\nint temp = a;\na = b;\nb = temp;\n}\nint main()\n{\nint a = 10;\nint b = 20;\nswap(a, b);\ncout << \"a=\" << a << endl;\ncout << \"b=\" << b << endl;\nreturn 0;\n}\n```\n\nThe principle is that a and b in void swap (int & A, int & b) are equivalent to aliases. We actually exchange the original A and b\n\n## 4. Nature of reference\n\n1. A reference is actually an integer pointer, specifically a pointer constant\n\n```#include <iostream>\nint main()\n{\nint a = 10;\nint& b = a;//Actually int * const B = & A;\nb = 20;//The c + + compiler finds that B is a reference and automatically converts it to * b=20. All subsequent examples are * b=val;\nreturn 0;\n}\n```\n\n# 7. Advanced order of function\n\n## 1. We can write the equal sign directly when defining the function\n\n```#include <iostream>\nusing namespace std;\nint func(int a, int b = 20, int c = 30)//We can do this\n{\nreturn a + b + c;\n}\nint main()\n{\ncout << func(10);//ha-ha\nreturn 0;\n}\n\n```\n```#include <iostream>\nusing namespace std;\nint func(int a, int b = 20, int c = 30)\n{\nreturn a + b + c;\n}\nint main()\n{\ncout << func(10, 30, 30);\nreturn 0;\n}\n//The output is 70, not 60, indicating that we have stronger ability to pass parameters and can eat what we defined there\n```\n\n## 2. There are two points to note\n\n### 1. When the assignment is defined directly in the function, it should be written from left to right\n\n```#include <iostream>\nusing namespace std;\nint func(int a, int b = 20, int c )//That's not possible, because b has been assigned, so we have to write c as well\n{\nreturn a + b + c;\n}\nint main()\n{\ncout << func(10, 30, 30);\nreturn 0;\n}\n\n```\n\n### 2. Only one function can be declared and defined\n\n```#include <iostream>\nusing namespace std;\n\nint func(int a = 10, int b = 20, int c = 30);//We can't do this. How can we understand that the compiler will be confused. Is it written in you or in there\nint func(int a = 10, int b = 20, int c = 30)\n{\nreturn a + b + c;\n}\nint main()\n{\ncout << func();\nreturn 0;\n}\n```\n\nKeywords: C++ Back-end\n\nAdded by Jalz on Sat, 15 Jan 2022 01:45:50 +0200" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73662525,"math_prob":0.9840856,"size":4540,"snap":"2022-27-2022-33","text_gpt3_token_len":1248,"char_repetition_ratio":0.1404321,"word_repetition_ratio":0.19112629,"special_character_ratio":0.32841408,"punctuation_ratio":0.15328467,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99740267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T07:20:36Z\",\"WARC-Record-ID\":\"<urn:uuid:f754c64f-1e18-45b8-955a-10eda744c1f7>\",\"Content-Length\":\"12172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:905fec24-0998-42d4-9a43-75d300c535d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5ef2d16-2be4-4941-acd9-4767d168d6ff>\",\"WARC-IP-Address\":\"154.12.245.167\",\"WARC-Target-URI\":\"https://programming.vip/docs/the-fourth-day-of-c-learning.html\",\"WARC-Payload-Digest\":\"sha1:6DXMLSNGHTL3VEA5INZPABTEE3NSHZJI\",\"WARC-Block-Digest\":\"sha1:2STFEP767ZWLZ2BAVZG2S2TKG3IOPBOW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570913.16_warc_CC-MAIN-20220809064307-20220809094307-00294.warc.gz\"}"}
https://www.tutorialspoint.com/finding-the-third-maximum-number-within-an-array-in-javascript
[ "# Finding the third maximum number within an array in JavaScript\n\nJavascriptWeb DevelopmentFront End Technology\n\nWe are required to write a JavaScript function that takes in an array of numbers as the first and the only argument.\n\nThe task of our function is to pick and return the third maximum number from the array. And if the array does not contain any third maximum number then we should simply return the maximum number from the array.\n\nFor example −\n\nIf the input array is −\n\nconst arr = [34, 67, 31, 87, 12, 30, 22];\n\nThen the output should be −\n\nconst output = 34;\n\n## Example\n\nThe code for this will be −\n\nLive Demo\n\nconst arr = [34, 67, 31, 87, 12, 30, 22];\nconst findThirdMax = (arr = []) => {\nconst map = {};\nlet j = 0;\nfor (let i = 0, l = arr.length; i < l; i++) {\nif(!map[arr[i]]){\nmap[arr[i]] = true;\n}else{\ncontinue;\n};\narr[j++] = arr[i];\n};\narr.length = j;\nlet result = -Infinity;\nif (j < 3) {\nfor (let i = 0; i < j; ++i) {\nresult = Math.max(result, arr[i]);\n}\nreturn result;\n} else {\narr.sort(function (prev, next) {\nif (next >= prev) return -1;\nreturn 1;\n});\nreturn arr[j - 3]\n};\n};\nconsole.log(findThirdMax(arr));\n\n## Output\n\nAnd the output in the console will be −\n\n34" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59540164,"math_prob":0.99494636,"size":2351,"snap":"2022-27-2022-33","text_gpt3_token_len":607,"char_repetition_ratio":0.2151683,"word_repetition_ratio":0.10502283,"special_character_ratio":0.28923863,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99717116,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T21:13:37Z\",\"WARC-Record-ID\":\"<urn:uuid:84fbfa3c-e489-4ee7-b319-73541b48a600>\",\"Content-Length\":\"31362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c4e56a1-e21e-4a5d-9dc6-92918d26a1e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:31cdd2f3-bc5e-43bf-b8a5-606f3cc356eb>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/finding-the-third-maximum-number-within-an-array-in-javascript\",\"WARC-Payload-Digest\":\"sha1:ALYFYC4M5AWUXUTGWBAHT2D6TT5YPBU2\",\"WARC-Block-Digest\":\"sha1:AMNQXS7ZRNGYKMHQM6UH2QHH56I2CXWX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572212.96_warc_CC-MAIN-20220815205848-20220815235848-00012.warc.gz\"}"}
http://zuowen5.info/node/190
[ "## 我爱家乡的一年四季\n\n•\n•\n• ài\n• jiā\n• xiāng\n• de\n• nián\n•   我爱家乡的一年四季\n•\n•\n• zhè\n• jiāng\n• shěng\n• wēn\n• zhōu\n• shì\n•\n• ōu\n• běi\n• zhèn\n• sān\n• xiǎo\n• xué\n• ?\n•   浙江省温州市 瓯北镇第三小学五(\n•\n•\n•\n• zhèng\n• shuò\n• 6) 郑烁\n•\n•\n•\n• de\n• jiā\n• xiāng\n•\n•\n• ōu\n• běi\n• shì\n• yǒng\n• jiā\n• xiàn\n• jīng\n• jiào\n•  我的家乡——瓯北是永嘉县经济比较发\n• de\n• xiāng\n• zhèn\n• zhī\n•\n• shǔ\n• dài\n•\n• 达的一个乡镇之一,属于亚热带地区,特\n• chǎn\n• duō\n•\n• dàn\n• shì\n• zhè\n• de\n• nián\n• què\n• fēng\n• jǐng\n• huà\n• 产不多,但是这里的一年四季却风景如画\n•\n•\n• chūn\n• tiān\n• dào\n•\n• wàn\n•\n• xiǎo\n• cǎo\n• tàn\n• zhe\n• nǎo\n•  春天一到,大地万物复苏。小草探着脑\n• dài\n• xiàng\n• wài\n• zhāng\n• wàng\n•\n• xiǎo\n• shù\n• kāi\n• ?g\n•\n• piàn\n•\n• 袋向外张望,小树开花,大地一片绿色,\n• xiàng\n• tiáo\n• de\n• máo\n• tǎn\n•\n• ràng\n• rén\n• gǎn\n• dào\n• kōng\n• xīn\n• xiān\n•\n• 像一条碧绿的毛毯,让人感到空气新鲜。\n• ?g\n• ér\n• hán\n• bāo\n• fàng\n• dài\n• zhe\n• shuǐ\n• sàn\n• zhe\n• nóng\n• de\n• xiāng\n• wèi\n• 花儿含苞欲放带着露水散发着浓郁的香味\n•\n• zhèn\n• wēi\n• fēng\n• chuī\n• guò\n•\n• xiǎo\n• cǎo\n• bǎi\n• dòng\n• zhe\n• shēn\n•\n• huān\n• kuài\n• 。一阵微风吹过,小草摆动着身子,欢快\n• de\n• tiào\n•\n• niǎo\n• ér\n• tíng\n• liú\n• zài\n• shù\n• shàng\n•\n• jìn\n• qíng\n• chàng\n• zhe\n• dòng\n• 的跳舞。鸟儿停留在树上,尽情地唱着动\n• tīng\n• de\n•\n• chūn\n• rùn\n• zhe\n•\n• chūn\n• cǎo\n• gài\n• zhe\n• 听的歌。春雨滋润着大地,春草铺盖着大\n•\n• chūn\n• ?g\n• sàn\n• xiāng\n• wèi\n•\n• 地,春花散发香味。\n•\n• xià\n• tiān\n• lái\n• lín\n•\n• liǔ\n• shù\n• zhī\n• tiáo\n• xiàng\n• xiāng\n• qiàn\n• zhe\n• de\n• bǎo\n•  夏天来临,柳树枝条像镶嵌着碧绿的宝\n• shí\n•\n• yuán\n• gǔn\n• gǔn\n• de\n• guā\n• tǎng\n• zài\n• shàng\n• dòng\n• dòng\n•\n• xiàng\n• 石。圆滚滚的西瓜躺在地上一动不动,像\n• lǎn\n• yáng\n• yáng\n• de\n• pàng\n•\n• chū\n• ér\n• rǎn\n• de\n• 一个懒洋洋的胖娃娃。出淤泥而不染的荷\n• ?g\n• gāo\n• guì\n• qīng\n•\n• zài\n• chí\n• táng\n• tíng\n• tíng\n•\n• xiàng\n• wèi\n• 花高贵清雅,在池塘里亭亭玉立,像一位\n• xiān\n• bān\n• shén\n•\n• ?g\n• de\n• huǒ\n• bàn\n•\n•\n• yuán\n• yòu\n• 仙女般神奇。荷花的伙伴——荷叶圆又大\n•\n• zhōng\n• jiān\n• dài\n• yǒu\n• zhū\n•\n• gèng\n• jiā\n• gāo\n• guì\n•\n• ?g\n• 。中间带有一滴滴露珠,更加高贵。荷花\n• zài\n• de\n• chèn\n• tuō\n• xià\n• xiǎn\n• wài\n• měi\n•\n• 在荷叶的衬托下显得格外美丽。\n•\n• qiū\n• tiān\n• dào\n• le\n•\n• guǒ\n• shí\n• lèi\n• lèi\n•\n• dào\n• chù\n• shì\n• piàn\n• fēng\n• shōu\n•  秋天到了,果实累累,到处是一片丰收\n• de\n• jǐng\n• xiàng\n•\n• nóng\n• mín\n• de\n• liǎn\n• shàng\n• yáng\n• zhe\n• fēng\n• shōu\n• de\n• 的景象,农民伯伯的脸上洋溢着丰收的喜\n• yuè\n•\n• shù\n• huáng\n• le\n•\n• cóng\n• shù\n• zhī\n• shàng\n• màn\n• yōu\n• yōu\n• piāo\n• dào\n• 悦。树叶黄了,从树枝上慢悠悠地飘到地\n• shàng\n•\n• piàn\n• jīn\n• huáng\n•\n• guǒ\n• shù\n• shàng\n• de\n• guǒ\n• chéng\n• shú\n• le\n• 上。大地一片金黄。果树上的果子成熟了\n•\n• ràng\n• rén\n• chán\n• zhí\n• liú\n• kǒu\n• shuǐ\n•\n• zhēn\n• xiǎng\n• tōu\n• zhāi\n• pǐn\n• cháng\n• ,让人馋得直流口水,真想偷摘一个品尝\n• xià\n•\n• zhèn\n• qiū\n• fēng\n• chuī\n• lái\n•\n• ràng\n• rén\n• jiào\n• bié\n• liáng\n• shuǎng\n• 一下。一阵秋风吹来。让人觉得特别凉爽\n•\n• zhēn\n• shì\n• qiū\n• gāo\n• shuǎng\n• ā\n•\n• ,真是秋高气爽啊!\n•\n• qiū\n• dōng\n• lái\n•\n• guǒ\n• xìng\n• yùn\n• de\n• huà\n•\n• xià\n• xuě\n•\n•  秋去冬来,如果幸运的话,下一次雪。\n• shí\n• hòu\n• piàn\n• xuě\n• bái\n•\n• shàng\n• ān\n• jìng\n•\n• 那时候大地一片雪白,马路上安静无比,\n• zhēn\n• xiàng\n• wèi\n• ài\n• shuō\n• huà\n• de\n• lǎo\n•\n• xiǎo\n• péng\n• yǒu\n• zài\n• xuě\n• 真像一位不爱说话的老爷爷。小朋友在雪\n• duī\n• xuě\n• rén\n•\n• xuě\n• zhàng\n•\n• wán\n• gāo\n• xìng\n•\n• 地里堆雪人,打雪仗,玩得可高兴啦!\n•\n• jiā\n• xiāng\n• de\n• nián\n• jiù\n• shì\n• zhè\n• yàng\n•\n• měi\n• huà\n•  我家乡的一年四季就是这样,美丽如画\n•\n• ài\n• jiā\n• xiāng\n• de\n• nián\n•\n•\n• 。我爱家乡的一年四季。\n\n无注音版:\n\n我爱家乡的一年四季\n浙江省温州市 瓯北镇第三小学五(6) 郑烁\n\n我的家乡——瓯北是永嘉县经济比较发达的一个乡镇之一,属于亚热带地区,特产不多,但是这里的一年四季却风景如画。\n春天一到,大地万物复苏。小草探着脑袋向外张望,小树开花,大地一片绿色,像一条碧绿的毛毯,让人感到空气新鲜。花儿含苞欲放带着露水散发着浓郁的香味。一阵微风吹过,小草摆动着身子,欢快的跳舞。鸟儿停留在树上,尽情地唱着动听的歌。春雨滋润着大地,春草铺盖着大地,春花散发香味。\n夏天来临,柳树枝条像镶嵌着碧绿的宝石。圆滚滚的西瓜躺在地上一动不动,像一个懒洋洋的胖娃娃。出淤泥而不染的荷花高贵清雅,在池塘里亭亭玉立,像一位仙女般神奇。荷花的伙伴——荷叶圆又大。中间带有一滴滴露珠,更加高贵。荷花在荷叶的衬托下显得格外美丽。\n秋天到了,果实累累,到处是一片丰收的景象,农民伯伯的脸上洋溢着丰收的喜悦。树叶黄了,从树枝上慢悠悠地飘到地上。大地一片金黄。果树上的果子成熟了,让人馋得直流口水,真想偷摘一个品尝一下。一阵秋风吹来。让人觉得特别凉爽,真是秋高气爽啊!\n秋去冬来,如果幸运的话,下一次雪。那时候大地一片雪白,马路上安静无比,真像一位不爱说话的老爷爷。小朋友在雪地里堆雪人,打雪仗,玩得可高兴啦!\n我家乡的一年四季就是这样,美丽如画。我爱家乡的一年四季。\n\n### 四季(诗歌)\n\n五年级作文260字\n作者:未知\n•\n•\n• ?\n• shī\n•\n•   四季(诗歌)\n•\n•\n• hǎi\n• nán\n• shěng\n• hǎi\n• kǒu\n• shì\n•\n• lóng\n• huá\n• xiǎo\n• xué\n• sān\n• bān\n•\n• chén\n•   海南省海口市 龙华小学五三班 陈\n• kǎi\n• 阅读全文\n\n### 我爱家乡的一年四季\n\n五年级作文572字\n作者:未知\n•\n•\n• ài\n• jiā\n• xiāng\n• de\n• nián\n•   我爱家乡的一年四季\n•\n•\n• zhè\n• jiāng\n• shěng\n• wēn\n• zhōu\n• shì\n•\n• ōu\n• běi\n• zhèn\n• sān\n• xiǎo\n• xué\n• ?\n•   浙江省温州市 瓯北镇第三小学五(\n•\n•\n•\n• zhèng\n• shuò\n• 6) 郑烁\n• 阅读全文\n\n### 四季的风\n\n五年级作文641字\n作者:未知\n•\n•\n• de\n• fēng\n•   四季的风\n•\n•\n• zhè\n• jiāng\n• shěng\n• wēn\n• zhōu\n• shì\n•\n• lóng\n• gǎng\n• xiǎo\n• nián\n• liù\n• bān\n•\n•   浙江省温州市 龙港一小五年六班\n• zhōu\n• wěi\n• wěi\n• 周伟伟\n• 阅读全文\n\n### 四季的风\n\n五年级作文369字\n作者:未知\n•\n•\n• de\n• fēng\n•   四季的风\n•\n•\n• lín\n• shěng\n• lín\n• jiāng\n• shì\n•\n• ?\n• guó\n• xiǎo\n• xué\n• nián\n• bān\n•\n•   吉林省临江市 建国小学五年一班\n• xīn\n• yuán\n• 胡馨元\n• 阅读全文\n\n### 四季之歌\n\n五年级作文768字\n作者:未知\n•\n•\n• zhī\n•   四季之歌\n•\n•\n• zhè\n• jiāng\n• shěng\n• ruì\n• ān\n• shì\n•\n• shēn\n• chéng\n• shí\n• yàn\n• xiǎo\n• xué\n• nián\n• zhì\n•   浙江省瑞安市 莘塍实验小学五年制\n• ?\n•\n•\n• bān\n•\n• dài\n• jiā\n• 五(3)班 戴佳利\n• 阅读全文\n\n### 四季银杏苑\n\n五年级作文662字\n作者:刘浚辰\n• de\n• xiào\n• zǒu\n• xiǎo\n• xué\n• yǒu\n• duō\n• jǐng\n•\n• yǒu\n• xiāng\n• 我的母校走马小学有许许多景物,有香气\n• yíng\n• rào\n• de\n• guì\n• ?g\n•\n• gāo\n• sǒng\n• yún\n• de\n• shān\n•\n• qīng\n• cuì\n• 萦绕的桂花,高耸入云的古杉,青翠欲滴\n• de\n• luó\n• hàn\n• sōng\n•\n• chōng\n• mǎn\n• huān\n• xiào\n• de\n• pīng\n• pāng\n• qiú\n• tái\n•\n• hóng\n• zhuān\n• bái\n• 的罗汉松,充满欢笑的乒乓球台,红砖白\n• 阅读全文\n\n### 我家楼下的四季之美\n\n五年级作文461字\n作者:郭子欣\n• jiā\n• lóu\n• xià\n• de\n• zhī\n• měi\n• 我家楼下的四季之美\n•\n• bān\n•\n•\n• èr\n•\n• xìng\n• míng\n•\n• guō\n• xīn\n•  班级:五、二 姓名:郭子欣\n•\n•\n•\n• 阅读全文\n\n### 四季的雨\n\n五年级作文546字\n作者:曾昕\n•\n•\n•\n•\n•\n• chūn\n• tiān\n• de\n•\n• mián\n• mián\n• xiāo\n• xiāo\n•\n• de\n• chūn\n•    春天的雨,绵绵潇潇,丝丝的春雨\n• shì\n• rén\n• men\n• de\n• zuì\n• ài\n•\n• shuǐ\n• shùn\n• zhe\n• shù\n• zhī\n• jiān\n• xià\n• lái\n•\n• 是人们的最爱。雨水顺着树枝尖滴下来,\n• 阅读全文\n\n### 四季的美\n\n五年级作文541字\n作者:子轩\n• de\n• měi\n• 四季的美\n•\n•\n•\n• nián\n•\n• chūn\n• xià\n• qiū\n• dōng\n•\n• yǒu\n• de\n• měi\n•\n•  一年四季,春夏秋冬,各有各的美。\n• 阅读全文\n\n### 乡下的四季\n\n五年级作文412字\n作者:连宇季节\n•\n•\n• xiāng\n• xià\n• de\n•   乡下的四季\n•\n•\n•\n•\n• suī\n• rán\n• xiàn\n• zài\n• shēng\n• huó\n• zài\n• xuān\n• xiāo\n• de\n• chéng\n• shì\n•\n• què\n• wàng\n•   虽然我现在生活在喧嚣的城市,却忘\n• 阅读全文\n\n### 四季(诗歌)\n\n五年级作文180字\n作者:王英男\n• shī\n•\n• 四季诗歌)\n•\n• hóng\n• yàn\n• wài\n• nián\n•\n• wáng\n• yīng\n• nán\n•  鸿雁外语五年级 王英男\n•\n• chūn\n• tiān\n• dào\n•\n•\n•  春天一到\n• 阅读全文\n\n### 那一年,令我难忘\n\n五年级作文526字\n作者:陈静怡11\n•\n•\n• nián\n•\n• lìng\n• nán\n• wàng\n•\n•\n•   那一年,令我难忘\n•\n•\n• píng\n• xiǎo\n• xué\n•\n• bān\n• chén\n• jìng\n•   和平路小学 五四班陈静怡\n•\n•\n• nián\n•\n• shàng\n• nián\n•\n• jiàn\n• xiǎo\n• shì\n• zhōng\n•\n•   那一年,我上一年级,一件小事中,\n• 阅读全文\n\n### 欣赏四季花\n\n五年级作文583字\n作者:付晶峰\n•\n•\n• nián\n• rén\n• men\n• zǒng\n• huān\n• kàn\n• ?g\n•\n• chūn\n• tiān\n•   一年四季人们总喜欢去看花。春天里\n• rén\n• men\n• dōu\n• ài\n• chūn\n• yóu\n•\n• dào\n• jiāo\n• wài\n• shān\n• cūn\n• tián\n• kàn\n• kàn\n• 人们都爱去春游,到郊外山村田野去看看\n• màn\n• shān\n• biàn\n• de\n• hóng\n• juān\n•\n• wàng\n• jīn\n• huáng\n• de\n• yóu\n• cài\n• 漫山遍野的红杜鹃,一望无际金黄的油菜\n• 阅读全文\n\n### 校园的四季\n\n五年级作文453字\n作者:杜耀迪\n•\n•\n• xiào\n• yuán\n• de\n•\n•   校园的四级\n•\n•\n• men\n• yǒu\n• fēi\n• cháng\n• fēi\n• cháng\n• měi\n• de\n• xiào\n• yuán\n•\n• ér\n•   我们有一个非常非常美丽的校园,而\n• qiě\n• chōng\n• mǎn\n• le\n• shēng\n•\n•\n• 且充满了勃勃生机。\n• 阅读全文" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7381618,"math_prob":0.44000453,"size":4446,"snap":"2021-43-2021-49","text_gpt3_token_len":4392,"char_repetition_ratio":0.16749212,"word_repetition_ratio":0.17087968,"special_character_ratio":0.4008097,"punctuation_ratio":0.00896861,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98922426,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T15:42:09Z\",\"WARC-Record-ID\":\"<urn:uuid:4cd506d1-f0e6-48e4-a301-837e750e7d8a>\",\"Content-Length\":\"24856\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87ae6c6e-37a5-44de-a7fd-a8773a9fc381>\",\"WARC-Concurrent-To\":\"<urn:uuid:410060f2-ac4f-4bfc-a291-1ed378f0289d>\",\"WARC-IP-Address\":\"198.144.184.133\",\"WARC-Target-URI\":\"http://zuowen5.info/node/190\",\"WARC-Payload-Digest\":\"sha1:7ZM2UCWIICGKFNGVWMB77VZYXG6BKVKU\",\"WARC-Block-Digest\":\"sha1:FRIQ4RHSRIT5GLDQ25L5OTBF4BJOPGEA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363515.28_warc_CC-MAIN-20211208144647-20211208174647-00550.warc.gz\"}"}
https://crypto.stackexchange.com/questions/58950/rsa-cca-security-of-m-hm-as-input-to-modular-exponentiation?noredirect=1
[ "# RSA-CCA security of $m || H(m)$ as input to modular exponentiation\n\nIf we encrypt a message $m$ as $m || H(m)$ using textbook RSA encryption algorithm, $H$ is random oracle modeled. In this case the message can be recovered.\n\nIs this encryption CCA secure? If so how can we prove it or not?\n\n• It isn't even CPA secure.... – SEJPM May 4 '18 at 16:08\n\nCCA security assumes CPA security, and $m || H(m)$ is not CPA secure. This is simple to show: say that $C = E(m || H(m))$ is known, then an adversary can simply guess $m'$ and encrypt that. Then $C' = C$ will show the adversary that $m'=m$.\n\nTo show a CCA attack you can use this attack. You can of course replace the value $2$ in that question with $y$ where $y = x || H(x)$ for any $x$. In that case you end up with $z \\cdot m || H(m)$ after decryption. Calculating $m || H(m)$ will then just consist of division with $z$.\n\nTo avoid this it is of vital important that the signature is over data protected by randomized padding, and that this randomization is well distributed over the input to the RSA exponentiation. This is what OAEP does with the MGF(1) padding.\n\nAlternatively a hybrid scheme such as RSA-KEM combined with a block cipher could be used.\n\nDeterministic encryption cannot be fully CPA or CCA secure; some kind of randomness or at least uniqueness is required to be CPA / CCA secure to not leak information for repeated messages.\n\n• Also there is a general result that a public key encryption operation must be randomized in order to be CPA secure... – SEJPM May 4 '18 at 17:53\n• Anything missing from my answer, pinofer? – Maarten Bodewes May 9 '18 at 15:01" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8924954,"math_prob":0.9938121,"size":1690,"snap":"2020-34-2020-40","text_gpt3_token_len":404,"char_repetition_ratio":0.12930012,"word_repetition_ratio":0.24342105,"special_character_ratio":0.24615385,"punctuation_ratio":0.086053416,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997613,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T01:33:05Z\",\"WARC-Record-ID\":\"<urn:uuid:dc3313d3-3ea7-4c7d-923f-294f0d5b2d12>\",\"Content-Length\":\"151356\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ff87bca-4b15-4858-abb0-3e9a6491f830>\",\"WARC-Concurrent-To\":\"<urn:uuid:d219d7e7-f5b9-45d4-83e1-cbd74b036e8c>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/58950/rsa-cca-security-of-m-hm-as-input-to-modular-exponentiation?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:BQJG5HP6UT5DQEXFKEK5XMYBURWYSKEZ\",\"WARC-Block-Digest\":\"sha1:EGN5CFRTNOHIHT6TZAJVL46XAEUDUTVO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738858.45_warc_CC-MAIN-20200811235207-20200812025207-00558.warc.gz\"}"}
http://hackage.haskell.org/package/roundtrip-0.2.0.0/docs/Text-Roundtrip-Combinators.html
[ "roundtrip-0.2.0.0: Bidirectional (de-)serialization\n\nText.Roundtrip.Combinators\n\nContents\n\nSynopsis\n\n# Lexemes\n\nchar :: StringSyntax delta => Char -> delta ()Source\n\nchar' :: StringSyntax delta => Char -> delta CharSource\n\nstring :: StringSyntax delta => String -> delta ()Source\n\n`string` parses/prints a fixed text and consumes/produces a unit value.\n\ncomma :: StringSyntax delta => delta ()Source\n\ndot :: StringSyntax delta => delta ()Source\n\n# Repetition\n\nmany :: Syntax delta => delta alpha -> delta [alpha]Source\n\nmany1 :: Syntax delta => delta alpha -> delta [alpha]Source\n\nsepBy :: Syntax delta => delta alpha -> delta () -> delta [alpha]Source\n\nchainl1 :: Syntax delta => delta alpha -> delta beta -> Iso (alpha, (beta, alpha)) alpha -> delta alphaSource\n\nThe `chainl1` combinator is used to parse a left-associative chain of infix operators.\n\n# Sequencing\n\n(*>) :: Syntax delta => delta () -> delta alpha -> delta alphaSource\n\nThis variant of `<*>` ignores its left result. In contrast to its counterpart derived from the `Applicative` class, the ignored parts have type `delta ()` rather than `delta beta` because otherwise information relevant for pretty-printing would be lost.\n\n(<*) :: Syntax delta => delta alpha -> delta () -> delta alphaSource\n\nThis variant of `<*>` ignores its right result. In contrast to its counterpart derived from the `Applicative` class, the ignored parts have type `delta ()` rather than `delta beta` because otherwise information relevant for pretty-printing would be lost.\n\nbetween :: Syntax delta => delta () -> delta () -> delta alpha -> delta alphaSource\n\nThe `between` function combines `*>` and `<*` in the obvious way.\n\n# Alternation\n\n(<+>) :: Syntax delta => delta alpha -> delta beta -> delta (Either alpha beta)Source\n\noptional :: Syntax delta => delta alpha -> delta (Maybe alpha)Source\n\noptionalBool :: Syntax delta => delta () -> delta BoolSource\n\noptionalWithDefault :: (Eq alpha, Syntax delta) => alpha -> delta alpha -> delta alphaSource\n\n# Whitespace\n\nskipSpace :: StringSyntax delta => delta ()Source\n\n`skipSpace` marks a position where whitespace is allowed to occur. It accepts arbitrary space while parsing, and produces no space while printing.\n\nsepSpace :: StringSyntax delta => delta ()Source\n\n`sepSpace` marks a position where whitespace is required to occur. It requires one or more space characters while parsing, and produces a single space character while printing.\n\noptSpace :: StringSyntax delta => delta ()Source\n\n`optSpace` marks a position where whitespace is desired to occur. It accepts arbitrary space while parsing, and produces a single space character while printing.\n\n# XML\n\nxmlElem :: XmlSyntax x => Name -> x a -> x aSource\n\nxmlAttr :: XmlSyntax x => Name -> Iso Text a -> x aSource" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54653406,"math_prob":0.969683,"size":3559,"snap":"2019-43-2019-47","text_gpt3_token_len":825,"char_repetition_ratio":0.33614627,"word_repetition_ratio":0.5146104,"special_character_ratio":0.2936218,"punctuation_ratio":0.18761384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9871557,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T18:46:02Z\",\"WARC-Record-ID\":\"<urn:uuid:e004f81d-4a58-4c32-a584-f4308054bcb2>\",\"Content-Length\":\"20777\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f894cb4-ace8-48fa-ad42-a7d486bcdf78>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f45afaa-3c8d-4ca2-9fe2-40af5ef2f9bd>\",\"WARC-IP-Address\":\"151.101.248.68\",\"WARC-Target-URI\":\"http://hackage.haskell.org/package/roundtrip-0.2.0.0/docs/Text-Roundtrip-Combinators.html\",\"WARC-Payload-Digest\":\"sha1:G66STVQ7ECPXIOURKQV7XDIFU7W3QFKA\",\"WARC-Block-Digest\":\"sha1:E2DK5VZLNHYV4FWDIN3LP6QZNJDLLEBW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669225.56_warc_CC-MAIN-20191117165616-20191117193616-00312.warc.gz\"}"}
https://www.mathematik.hu-berlin.de/de/minicourse-2019
[ "Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Mathematik\n\nMini-Course „Effective André Oort“\n\nMini-Course „Effective André Oort“\n\nam 28.10., 30.10. and 31.10.2019\n\nWann: 16 - 18 Uhr\n\nWo: RUD 25, BMS Seminar Room (1.023 )\n\nAbstract: On every Shimura variety certain algebraic subvarieties are labeled \"special\"; in particular, \"special points\" are special subvarieties of dimension 0. Intuitively, if the Shimura variety is viewed as a moduli space of some objects (like Abelian varieties or so), then the special subvarities are moduli spaces of the same objects with some additional structure.\n\nThe celebrated André-Oort Conjecture asserts, roughly speaking, that the Zariski closure of a set of \"special points\" is a \"special subvariety\"; equivalently, every algebraic subvariety of a Shimura variety may have at most finitely many maximal special subvarieties. This conjecture is proved by Klingler, Ullmo and Yafaev subject to GRH, and in many special cases unconditionally.\n\nIn particular, Pila (2011) proved unconditional André-Oort for Shimura varieties of modular type, that is, products of modular curves. Here every point is (x_1,...,x_n), where each x_i is an elliptic curve with some additional structure. Special subvarieties are (roughly) defined by conditions of the type \"there is a cyclic isogeny of given degree between x_i and x_j\" or \"x_i is a given curve with Complex Multiplication\".\n\nWhile Pila's argument (based on the ground-breaking idea of a previous work by Pila and Zannier) is very clever and beautiful, it is non-effective, though the use of Siegel-Brauer lower estimate for the class number.\n\nIn the recent years, in the work of Masser, Zannier, Kühne, Binyamini and others effective proofs were given for various special cases of Pila's result. Most of them are based on the beautiful \"Tatuzawa trick\", discovered by Kühne, which allows one to achieve effectiveness, in many cases, by replacing the Siegel-Brauer by the  Siegel-Tatuzawa Theorem.\n\nThe purpose of my course is to give an introduction into this topic. I will restrict to the \"Shimura variety\" C^n (viewed as the product of n j-lines). The special subvarieties are (roughly) defined by equations of the type F_N(x_i,x_j)=0 or x_i=(singular modulus), where F_N is the modular polynomial of level N, and singular moduli are j-invariants of elliptic curves with CM. In particular, special points are those whose all coordinates are singular moduli.\n\nNo preliminary knowledge beyond university courses of Algebra, Analysis and Number Theory is required. I will even define what Complex Multiplication is. I will also try to highlight ideas as clearly as possible, in many cases sacrificing generality to lucidity. Here is an approximate plan.\n\n-1. Motivation: The Manin-Mumford conjecture\n0. Introductory material: Complex Multiplication, Class Field Theory etc.\n1. Statement of the André-Oort conjecture for C^n\n2. Non-effective proof in dimension 1 (André)\n3. Effectivisation of André's argument (Kühne et al.)\n4. One-dimensional effective/explicit results on individual curves and families of curves.\n5. Multidimensional results (if time permits)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89297205,"math_prob":0.91257274,"size":3035,"snap":"2022-05-2022-21","text_gpt3_token_len":738,"char_repetition_ratio":0.11547344,"word_repetition_ratio":0.0043668123,"special_character_ratio":0.21713345,"punctuation_ratio":0.1380531,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9591701,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T01:08:46Z\",\"WARC-Record-ID\":\"<urn:uuid:b6bd24ad-20bb-4bd0-a944-f226b012f7f7>\",\"Content-Length\":\"35928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6048c8c2-6849-4256-8567-5c905d5ee5d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:056f02f1-187e-498b-9800-2814a06fda90>\",\"WARC-IP-Address\":\"141.20.5.188\",\"WARC-Target-URI\":\"https://www.mathematik.hu-berlin.de/de/minicourse-2019\",\"WARC-Payload-Digest\":\"sha1:IBO4G3R453NSYPDG7ICMHV4DZ5O3FLG2\",\"WARC-Block-Digest\":\"sha1:RGBY3PU7A7TMAUGSVU7FEZWAD4S2TBCA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301217.83_warc_CC-MAIN-20220119003144-20220119033144-00643.warc.gz\"}"}
https://study.com/math_graduate_programs.html
[ "# Math Graduate Programs with Career Options\n\nMaster's and Ph.D. programs in mathematics allow graduates to pursue advanced positions in the field. Courses in these programs cover advanced mathematical theories and complex equations, with some courses also covering computer programming concepts.\n\n## Essential Information\n\nStudents enrolling in master's or Ph.D. programs in mathematics typically have earned bachelor's degrees in mathematics or statistics, though a bachelor's degree in a related area that includes math coursework may be acceptable as well. Some Ph.D. programs may also require students to have completed a master's degree prior to entry. A master's degree in mathematics is designed for students looking for careers in the advanced application of mathematics. A Ph.D. in mathematics is ideal for statisticians and math teachers pursuing more advanced math careers.\n\nA Ph.D. qualifies students to become mathematicians with various government and private agencies, though most mathematicians work for the federal government. With a master's degree in math, students can find jobs with engineering, manufacturing, healthcare, telecommunications, education and computer programming businesses. Program specialization options for students include education and teaching, statistics, computational math and computer science.\n\n## Master of Science in Mathematics\n\nA master's degree program in mathematics teaches advanced mathematical theories, concepts and systems. Because computer technology is often used in math, many programs offer courses in computer programming and networking administration. Many courses require students to research complex math equations and use different numerical methods to solve math problems. The completion of a thesis is also required in these programs. Courses also teach students how to apply math theories to practical, real-life problems in the workplace. Typical courses may include:\n\n• Statistical research\n• Partial differential equations\n• Applied mathematics research\n• Probability and statistics\n• Programming with numerical methods\n• Mathematical data modeling\n\n## Ph.D. in Mathematics\n\nMost math Ph.D. programs require from three to five years of study and allow students to conduct independent mathematical research studies. A mathematics Ph.D. program offers courses that are based on complex math research studies. Programs focus on a doctoral dissertation that presents students' knowledge of mathematics and specific areas of study. Most courses deal extensively with statistics and applying mathematical concepts to engineering and computer technology. While the dissertation makes up a significant portion of academic credits, other courses may include:\n\n• Vector calculus\n• Dynamic mathematical systems\n• Scientific and technical computing\n• Boundary value problems\n• Multivariate statistical analysis\n• Linear statistical models\n\n### Career Information and Employment Outlook\n\nWith a master's degree in mathematics, students can pursue careers as statisticians. There were approximately 29,870 statisticians in 2015, reported the U.S. Bureau of Labor Statistics (www.bls.gov). Some statisticians work for local, state or federal government agencies, including the Department of Commerce and the Department of Agriculture. Many other statisticians worked with private organizations in medical, engineering and computer technology research. The median annual salary for statisticians in May 2015 was \\$80,110 (www.bls.gov).\n\nWith a Ph.D. in mathematics, statisticians or math teachers can become mathematicians. Mathematicians are generally required to have a Ph.D., and a background in computer science or engineering is often preferred. There were approximately 3,170 mathematicians in May 2015 (www.bls.gov). Mathematicians earned a median annual salary of \\$111,110 in the same year. A 21% job growth was expected for these workers for the years 2014 through 2024, according to the BLS.\n\n### Continuing Education Information\n\nAlthough a master's degree is sufficient for statisticians and basic teaching positions, a Ph.D. in mathematics is required for mathematicians, postsecondary math instructors and those interested in advanced mathematical research studies. With a Ph.D. in mathematics or statistics, statisticians can also become self-employed statistical consultants.\n\nAfter completing a master's or Ph.D. program in mathematics, graduates have a wide range of options available to them. Many choose to become statisticians or conduct research for a variety of different companies or institutions, with Ph.D. graduates having the most opportunity for high-level careers, such as mathematicians.\n\n## 10 Popular Schools\n\nThe listings below may include sponsored content but are popular choices among our users.\n\n• 1\nSouthern New Hampshire University\n\nWhat is your highest level of education completed?\n\n• 2\nCapella University\n\nWhat is your highest level of education completed?\n\n• 3\nNorthcentral University\n\nWhat is your highest level of education?\n\n• 4\nStanford University\n• 5\nUniversity of Pennsylvania\n• 6\nDuke University\n• 7\nUniversity of Notre Dame\n• 8\nCornell University\n• 9\nTowson University\n• 10\nBoston University" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95137423,"math_prob":0.50886446,"size":4345,"snap":"2019-51-2020-05","text_gpt3_token_len":787,"char_repetition_ratio":0.17853029,"word_repetition_ratio":0.003322259,"special_character_ratio":0.17928654,"punctuation_ratio":0.13509749,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9587974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T14:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:3506fc47-7cca-4578-94a0-c7e206a6f7ac>\",\"Content-Length\":\"243462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d07d92b8-40c7-4856-9c2d-c6fea688b4ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa992f09-8aa9-4c3f-991c-39cd63acdfb7>\",\"WARC-IP-Address\":\"52.0.4.252\",\"WARC-Target-URI\":\"https://study.com/math_graduate_programs.html\",\"WARC-Payload-Digest\":\"sha1:HBUZU6BONZ44JI7KTRHEMFIAYEZCA4K5\",\"WARC-Block-Digest\":\"sha1:SO4WPO4Y2IN4PUK6LA3PKVXWM454RFYJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540499439.6_warc_CC-MAIN-20191207132817-20191207160817-00538.warc.gz\"}"}
https://www.physicsforums.com/threads/electric-potential-inside-a-parallel-plate-capacitor.295825/
[ "# Electric Potential Inside a Parallel-Plate Capacitor\n\nThank you for taking the time to look. I think I get the basic idea here but I'm must be missing something important. Any help is greatly appreciated.\n\n## Homework Statement\n\nTwo 1.4g beads, each charged to 5.1nC , are 2.2cm apart. A 2.8g bead charged to -1.0nC is exactly halfway between them. The beads are released from rest.\n\nWhat is the speed of the positive beads, in cm/s, when they are very far apart?\n\n## Homework Equations\n\n$$U_{i}+K_{i}=K_{f}+U_{f}$$\n$$U_{q_{1}+q_{2}}=\\frac{1}{4\\pi\\epsilon_{0}}$$\n\n## The Attempt at a Solution\n\n$$m_{1}=0.0014 kg$$\n$$m_{2}=0.0014 kg$$\n$$m_{3}=0.0028 kg$$\n$$q_{1}=5.1*10^{-9} C$$\n$$q_{2}=5.1*10^{-9} C$$\n$$q_{3}=-1*10^{-9} C$$\n$$r_{12}=0.022 m$$\n$$r_{13}=0.011 m$$\n\n$$K_{i}=0$$ because initial velocity of all three is 0\n$$U_{f}=0$$ because they end far apart --> r->$$\\infty$$\n\nSo $$U_{i}=K_{f}$$\n$$\\Longrightarrow\\frac{1}{4\\pi\\epsilon_{0}}\\{\\frac{q_{1}q_{2}}{r_{12}}+\\frac{q_{1}q_{3}}{r_{13}}\\}=\\frac{m_{1}v_{f}^{2}}{2}$$\n\n$$\\Longrightarrow\\sqrt\\frac{2}{4\\pi\\epsilon_{0}m_{1}}\\{\\frac{q_{1}q_{2}}{r_{12}}+\\frac{q_{1}q_{3}}{r_{13}}\\}}$$ (the sqrt sgn should extend to the end of the expression but I suck as LaTeX)\n=0.096m/s=9.6cm/s" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6033962,"math_prob":0.99998033,"size":2442,"snap":"2022-27-2022-33","text_gpt3_token_len":1043,"char_repetition_ratio":0.17268252,"word_repetition_ratio":0.7637795,"special_character_ratio":0.45536447,"punctuation_ratio":0.082630694,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":1.0000056,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T13:49:25Z\",\"WARC-Record-ID\":\"<urn:uuid:c99a732f-795d-4cdd-a8ad-f1a7769110c0>\",\"Content-Length\":\"62158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83e35f92-5653-4788-89d6-d00fc4f223e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:1cc956a2-f4ee-4b8e-9a28-6f30bdf0c276>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/electric-potential-inside-a-parallel-plate-capacitor.295825/\",\"WARC-Payload-Digest\":\"sha1:UDL6BHDUJJGM3JEZ3U6XG52QXDQYUR5Z\",\"WARC-Block-Digest\":\"sha1:ZAKQX7JYCDU3RZ5AGADRPTKJ5IJ5NVOP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104141372.60_warc_CC-MAIN-20220702131941-20220702161941-00231.warc.gz\"}"}
https://www.physicsforums.com/threads/commutative-monoid-and-some-properties.522807/page-2
[ "# Commutative Monoid and some properties\n\nWhat if we work in $\\mathbb{Z}^{n}_{p}$ over a the field $\\mathbb{Z}_{p}$, as I discuss in the second part of my paper? This gets a bit messy I guess though.\nAaah, that might work", null, "Aaah, that might work", null, "Then I will play around with that tonight, and type some stuff up tomorrow! I'll try to generalize everything using the map you defined, and see what I can do.\n\nthank you very much micromass!\n\nI'm just trying to find a way to work this.\n\nFix $p$ prime. Let $1<p_{1}<p_{2}<\\cdots<p_{n}<p$ be all primes between $1$ and $p$, and let $\\mathbb{Z}_{p}^{n}$ be the set of all $n$-tuples with entires in $\\mathbb{Z}_{p}$. Then our map $\\Phi$ doesn't work since we can find a $q\\in\\mathbb{Q}^{*}$ whose factorization uses a prime $p_{0}>p$.\n\nSo that's just where I'm confused.\n\nI'm just trying to get over a slump on how to turn this into a vector space.\n\nLast edited:\nHurkyl\nStaff Emeritus\nGold Member\nWhoa, could you please explain? I haven't been exposed to much of any metamathematics.\nOnce you have a data structure, you can do a lot. Just look at set theory.", null, "Similar power is opened up to you once you can view an integer as a list of integers.\n\nThere are a variety of related things, I'll go down the route of computability -- what is a computer? It is essentially just it's current state -- a list of integers (e.g. the list might contain 16 registers, and the rest be memory which can be interpreted as instructions or data) -- and a rule for advancing from one state to the next.\n\nThe hardest part of encoding this in Peano arithmetic is how to use an integer to encode a list of integers, and how to access / manipulate lists under this encoding.\n\n(Then, once you have encoded the state of a computer as a single integer, you can then talk about the execution of a program on a computer as the \"list of integers\" corresponding to the state of the computer at a given time. With this, you can now start reasoning about how computers behave, again using just Peano arithmetic under-the-hood)\n\nFormal logic is similar -- specifically look at the subfield of syntax, which talks about languages, grammar, and proofs. Here, you want to use a \"list of integers\" to store the characters in a formal expression -- i.e. to store strings.\n\nThere are other ways to manage this technical construction, but I think the isomorphism from your original post is the most straightforward. (it does require you to first prove some things about primes and divisibility, though)\n\nSo I typed it up again, generalizing $\\phi:\\mathbb{Q}^{*}\\to\\mathbb{Z}^{\\infty}$.\n\nI'm still a little confused how to turn this into a vector space. I was wondering, if we use the non-transcendental numbers as our field. But would we lose the fact of 'prime factorization', and the basis of even doing this?\n\nI can't conceptually see how to work in $\\mathbb{Z}_{p}$ for some prime $p$. I don't know how we would chose $p$. Maybe I'm just obsessing over minor points.\n\nI've also defined $\\psi:\\mathbb{Z}^{\\infty}\\to\\mathbb{P}[X]^{\\infty}$, but I just don't know how to define a binary operation $\\otimes:\\mathbb{Z}^{\\infty}\\times\\mathbb{Z}^{ \\infty}\\to\\mathbb{Z}^{\\infty}$ such that it is homomorphic by $\\psi$ to multiplication of finite polynomials.\n\nSo I typed it up again, generalizing $\\phi:\\mathbb{Q}^{*}\\to\\mathbb{Z}^{\\infty}$.\n\nI'm still a little confused how to turn this into a vector space. I was wondering, if we use the non-transcendental numbers as our field. But would we lose the fact of 'prime factorization', and the basis of even doing this?\n\nI can't conceptually see how to work in $\\mathbb{Z}_{p}$ for some prime $p$. I don't know how we would chose $p$. Maybe I'm just obsessing over minor points.\nCan't you just define it as a $\\mathbb{Q}$-vector space?\n\nAnd maybe I'm starting to do much to advanced stuff here, but perhaps you could define your entire structure as a sheaf of rings??\n\nI've also defined $\\psi:\\mathbb{Z}^{\\infty}\\to\\mathbb{P}[X]^{\\infty}$, but I just don't know how to define a binary operation $\\otimes:\\mathbb{Z}^{\\infty}\\times\\mathbb{Z}^{ \\infty}\\to\\mathbb{Z}^{\\infty}$ such that it is homomorphic by $\\psi$ to multiplication of finite polynomials.\nYou know that that is a bijection, right?? Well, then you can just define\n\n$$a\\otimes b=\\psi^{-1}(\\psi (a)\\cdot \\psi (b))$$\n\njust carry over the structure.\n\nDo take a look at \"group rings\", because it really looks a lot like what you're trying to do here!\n\nCan't you just define it as a $\\mathbb{Q}$-vector space?\nThat's what i mean, woops. But then we would be representing the non-transcendentals. Because if we choose $\\mathbb{Q}$ to be our field, then since scalar multiplication in $\\mathbb{Z}^{\\infty}$ is isomorphic to exponentiation, things like $$\\frac{1}{2}(1,0,\\dots)\\overset{\\phi^{-1}}{=} 2^{\\frac{1}{2}}=\\sqrt{2}\\notin\\mathbb{Q}^{*}$$So shouldn't we work with the set of non-zero algebraic numbers, as opposed to $\\mathbb{Q}^{*}$ (this is what I meant as opposed to non-transcendental).\n\nAnd maybe I'm starting to do much to advanced stuff here, but perhaps you could define your entire structure as a sheaf of rings??\nYeah, I haven't a clue what a sheaf of rings is >.<, i haven't even had a true introduction to group theory. I'm doing this a lot by reading as I go.\n\nYou know that that is a bijection, right?? Well, then you can just define\n\n$$a\\otimes b=\\psi^{-1}(\\psi (a)\\cdot \\psi (b))$$\n\njust carry over the structure.\nPerfect! Can't believe I didn't think of that, thank you very much!\n\nDo take a look at \"group rings\", because it really looks a lot like what you're trying to do here!\nNo I haven't yet, totally forgot you mentioned that. I'll definitely take a look now before I continue :)\n\nLast edited:\nThat's what i mean, woops. But then we would be representing the non-transcendentals. Because if we choose $\\mathbb{Q}$ to be our field, then since scalar multiplication in $\\mathbb{Z}^{\\infty}$ is isomorphic to exponentiation, things like $$\\frac{1}{2}(1,0,\\dots)\\overset{\\phi^{-1}}{=} 2^{\\frac{1}{2}}=\\sqrt{2}\\notin\\mathbb{Q}^{*}$$\nWell, defining it like that isn't exactly what I mean", null, "You know that $\\mathbb{Q}$ is isomorphic to $\\mathbb{Z}^{\\infty}$ (if we adjoin a certain element 0 to the last set). So define scalar multiplication by\n\n$$\\alpha x=\\varphi(\\varphi^{-1}(\\alpha)\\varphi^{-1}(x))$$. So just \"lift\" the scalar multiplication in $\\mathbb{Q}$ to a new one in $\\mathbb{Z}^\\infty$.\n\nYeah, I haven't a clue what a sheaf of rings is >.<, i haven't even had a true introduction to group theory. I'm doing this a lot by reading as I go.\nWell, I think that researching things because you need it in this problem is the only true way to learn something. So all power to you!!\nBut sheafs is too advanced", null, "It would work though.\n\nWell, defining it like that isn't exactly what I mean", null, "You know that $\\mathbb{Q}$ is isomorphic to $\\mathbb{Z}^{\\infty}$ (if we adjoin a certain element 0 to the last set). So define scalar multiplication by\n\n$$\\alpha x=\\varphi(\\varphi^{-1}(\\alpha)\\varphi^{-1}(x))$$. So just \"lift\" the scalar multiplication in $\\mathbb{Q}$ to a new one in $\\mathbb{Z}^\\infty$.\nI though I've already defined the scalar multiplication in $\\mathbb{Q}$ in $\\mathbb{Z}^{\\infty}$ through defining $\\phi(a)=\\phi(\\prod^{\\infty}_{k=1}p_{k}^{e_{k}})=(e_{1},e_{2},\\dots)=(e_{k})^{\\infty}_{k=1}$. And since $\\phi$ is an isomorphism (I now show this properly in my paper), multiplication of numbers in $\\mathbb{Q}^{*}$ is isomorphic to the 'addition' i defined ($\\oplus$) in $\\mathbb{Z}^{\\infty}$.\n\nSo there our vector addition is defined as multiplication in $\\mathbb{Q}^{*}$ (or algebraic numbers, to generalize).\n\nThen it makes sense that our scalar multiplication (from $\\mathbb{Q}$) would be isomorphic to exponentiation. Since it can be shown, for example, that\n\n$$a^{2} \\overset{\\phi}{=} 2 (e_{k})^{\\infty} _{k=1} = (e_{k}) ^{\\infty} _{k=1} \\oplus (e_{k}) ^{\\infty}_{k=1}.$$\n\nI guess i'm actually generalizing even more instead of $\\phi:\\mathbb{Q}^{*}\\to\\mathbb{Z}^{\\infty}$, but to $\\phi:\\mathbb{A}^{*}\\to\\mathbb{Q}^{\\infty}$, where $\\mathbb{A}^{*}$ is the set of all non-zero algebraic numbers.\n\nAm I making sense or am I just crazy?\n\nI though I've already defined the scalar multiplication in $\\mathbb{Q}$ in $\\mathbb{Z}^{\\infty}$ through defining $\\phi(a)=\\phi(\\prod^{\\infty}_{k=1}p_{k}^{e_{k}})=(e_{1},e_{2},\\dots)=(e_{k})^{\\infty}_{k=1}$. And since $\\phi$ is an isomorphism (I now show this properly in my paper), multiplication of numbers in $\\mathbb{Q}^{*}$ is isomorphic to the 'addition' i defined ($\\oplus$) in $\\mathbb{Z}^{\\infty}$.\n\nSo there our vector addition is defined as multiplication in $\\mathbb{Q}^{*}$ (or algebraic numbers, to generalize).\n\nThen it makes sense that our scalar multiplication (from $\\mathbb{Q}$) would be isomorphic to exponentiation. Since it can be shown, for example, that $$a^{2} \\: \\overset{\\phi}{=} 2 (e_{k})^{\\infty} _{k=1} = (e_{k}) ^{\\infty) _{k=1} \\oplus (e_{k}) ^{\\infty )_{k=1}.$$\n\nI guess i'm actually generalizing even more instead of $\\phi:\\mathbb{Q}^{*}\\to\\mathbb{Z}^{\\infty}$, but to $\\phi:\\mathbb{A}^{*}\\to\\mathbb{Q}^{\\infty}$, where $\\mathbb{A}^{*}$ is the set of all non-zero algebraic numbers.\n\nAm I making sense or am I just crazy?\nHmm, I'm not sure how you would generalize all of this to algebraic numbers. But it does make sense what you're saying...\n\nHmm, I'm not sure how you would generalize all of this to algebraic numbers. But it does make sense what you're saying...\nI guess I would I would have to prove a sort of analogous Fundamental Theorem of Arithmetic to the non-zero algebraics? If I can do that then I hope it all works, and I would just have to prove it to be a vector space, correct?\n\nI never expected myself to get into doing this much for such a trivial idea I began with lol! Considering how little algebraic background I have outside of linear algebra.\n\nI guess I would I would have to prove a sort of analogous Fundamental Theorem of Arithmetic to the non-zero algebraics? If I can do that then I hope it all works, and I would just have to prove it to be a vector space, correct?\nI think such a thing would be very hard, if not impossible", null, "I never expected myself to get into doing this much for such a trivial idea I began with lol! Considering how little algebraic background I have outside of linear algebra.\nHey, this is the perfect way to learn something more about algebra!! You're doing great!\n\nAs a last attempt, what if we just generalize to the set\n\n$$\\mathbb{A}^{*}=\\{a^{b}:a,b\\in\\mathbb{Q}^{*}\\}$$\n\nI would have to show this to be a field obviously.\n\nAs a last attempt, what if we just generalize to the set\n\n$$\\mathbb{A}^{*}=\\{a^{b}:a,b\\in\\mathbb{Q}^{*}\\}$$\n\nI would have to show this to be a field obviously.\nThat seems plausible. Don't know if it'll be a field, though...\n\nThat seems plausible. Don't know if it'll be a field, though...\nI agree, I don't think its closed under addition.\n\nI guess I never really tried $\\mathbb{Z}_{p}$. Or even the set $\\mathbb{Q}[\\mathbb{Z}_{p}]=\\{a^{b}|a,b\\in\\mathbb{Z}_{p}\\}$. Which would be closed under addition (I'd hav to prove it) I guess since every $a^{b}\\equiv c<p$, so we would have $a^{b}+c^{d}\\equiv e+f\\equiv g$ with $e,f,g\\leq p$, or something like that.\n\nSo working i've got some weird results, that i'll type later when you fix a $p>0$ prime, and define $\\phi_{p}:\\mathbb{Z}_{p}^{*}\\to\\mathbb{Z}_{p}^{n}$ where $1<p_{1}<p_{2}<\\cdots <p_{n}<p$ are all primes less than $p$.\n\nI can show this to be a vector space, which is good. But you get weird equivalences when you work in $\\mathbb{Z}_{p}^{n}$, that are just blowing my mind a little as I write them down on paper. Which get even weirder when you move to $\\mathbb{P}_{n} [\\mathbb{Z}_{p}$.\n\nI'll type it all up when I get home.\n\nHmm, \"weird equivalences\", I'm wondering about what you will type up", null, "Well its trivial to show that, from the same $p$ and $n$ defined before that $\\phi_{p}:\\mathbb{Z}_{p}^{*}\\to\\mathbb{Z}^{n}_{p}$ is an isomorphism. This is seen since if $a\\in\\mathbb{Z}_{p}^{*}$ then $a$ has a prime factorization such that $$a=\\prod^{n}_{k=1}p_{k}^{e_{k}}=p_{1}^{e_{1}}p_{2}^{e_{2}}\\cdots p_{n}^{e_{n}}.$$ So let $\\phi_{p}(a)=(e_{1},\\dots,e_{n})$. Its really easy to show its a vector space over field $\\mathbb{Z}_{p}$\n\nWhats interesting is when you partition the set such that for any $u,v\\in\\mathbb{Z}_{p}^{n}$, we have $$u\\simeq v \\;\\; iff \\;\\; \\phi_{p}^{-1}(u)\\equiv\\phi_{p}^{-1}(v).$$\n\nIf you use the definition of the isomorphism i've used before into the polynomials, with the same partition, you'll get weird things. For example, take p=7. Then n=3. So we have $$0\\simeq3\\simeq x+x^{2}\\simeq2+2x\\simeq\\cdots$$\n\nI've worked with quotient spaces in linear algebra, and I though how that collapsed a vector space was weird, but this weirds me out even more.\n\nIts probably not weird for someone who has taken abstract\"er\" algebra than I have, but still weird lol\n\nI'll be typing this all up sometime tonight or tomorrow when I have time in a paper and more rigourously, but i have other plans tongiht so I won't be able to type everything up i worked out today (like my proof that its a vector space).\n\nI feel even that $\\mathbb{Z}_{p}^{n}$ collapses $\\mathbb{Q}^{n}$. Since any n-tuple's entries can be seen as a quotient a/b, but then since $\\mathbb{Z}_{p}$ is a field, so this quotient exists as one of the p members of Z_p.\n\nMaybe thats wrong.\n\nI don't quite see why $\\phi_p$ is an isomorphism. That is, I see no reason why it should be surjective. Certainly something like (p-1,p-1,...,p-1) isn't reached?\n\nI don't quite see why $\\phi_p$ is an isomorphism. That is, I see no reason why it should be surjective. Certainly something like (p-1,p-1,...,p-1) isn't reached?\nSure it does, it corresponds to $p_{1}^{p-1}p_{2}^{p-1}\\cdots p_{n}^{p-1}\\equiv a$ with $0\\leq a\\leq p-1$.\n\nWait crap, that makes it non-injective. URGH.\n\nI need to sit down with someone else and work all of this out. I need to get back to university asap. BLARGH.\n\nSure it does, it corresponds to $p_{1}^{p-1}p_{2}^{p-1}\\cdots p_{n}^{p-1}\\equiv a$ with $0\\leq a\\leq p-1$.\n\nWait crap, that makes it non-injective. URGH.\n\nI need to sit down with someone else and work all of this out. I need to get back to university asap. BLARGH.\nThe problem is that you don't have a well-defined function, really.\n\n$\\mathbb{Z}_p^*$ has p-1 elements, and $\\mathbb{Z}_p^n$ has $p^n$elements. So there can never be a surjection.", null, "Do you see a way to solve this??\n\nOkay, I'm meeting with my prof tomorrow, and I want to really see if this works:\n\nDefine $\\mathbb{A}^{*} =\\{a^{b}:a,b\\in\\mathbb{Q}^{*}\\}$ and let $\\phi :\\mathbb{A}^{*}\\to\\mathbb{Q}^{\\infty}$, where $\\mathbb{Q}^{\\infty}$ is the set of all $\\infty$-tuples with a finite amount of non-zero entries. Note that for any $a^{b}\\in\\mathbb{A}^{*}$ we have a prime factorization such that $$a^{b}=\\prod_{k=1}^{\\infty}p_{k}^{be_{k}}.$$ So define $\\phi(a^{b})=(be_{1},be_{2},\\dots)=(be_{k})^{\\infty}_{k=1}$. Also define, from before, $\\oplus:\\mathbb{Q}^{\\infty}\\times\\mathbb{Q}^{\\infty}\\to \\mathbb{Q}^{\\infty}$ such that $$(e_{1},e_{2},\\dots)\\oplus (f_{1}, f_{2},\\dots)=(e_{k})^{\\infty}_{k=1}\\oplus (f_{k})^{\\infty}_{k=1}=(e_{k}+f_{k})^{\\infty}_{k=1}=(e_{1}+f_{1},e_{2}+f_{2},\\dots).$$ Further, $\\phi$ can be shown to be an isomorphism.\n\nWe can also show $\\mathbb{Q}^ {\\infty}$ to be a rational vector space. Note that the scalar multiplication of any vector in $\\mathbb{Q}^{\\infty}$ is analogous to raising the original element of $\\mathbb{A}^{*}$ by a rational exponent (the rational exponent being the scalar). That is: $$\\alpha(be_{k})^{\\infty}_{k=1}\\overset{\\phi^{-1}}{=}a^{b^{\\alpha}}=a^{b\\alpha}.$$ Also note that $\\alpha(be_{k})^{\\infty}_{k=1}=\\alpha b(e_{k})_{k=1}^{\\infty}=(\\alpha be_{k})^{\\infty}_{k=1}$.\n\nI didn't want to type everything out, but it seems to have worked.\n\nLast edited:\nSo I've met with my professor, and by the end we just were laughing at how simple and familiar it ended up working out to be.\n\nNo matter how hard I tried, I could not create a vector space, which is fine. The suddenly it came to us.\n\nSo here:\n\nLet $(\\mathbb{R}^{>0},\\cdot)$ be the abelian group of positive reals and multiplication, and let $(\\mathbb{R},+)$ be the abelian group of the reals over addition. Suppose there exists a function $\\log:(\\mathbb{R}^{>0},\\cdot)\\to(\\mathbb{R},+)$ such that $\\log(ab)=\\log(a)+\\log(b)$ and $\\log(a^{b})=b\\log(a)$. Its clear that $\\log$ was just the extension of what I was trying to do before. I guess you could say its the 'analytic jump' from what I was trying to do algebraically. This has given me some new intuition into how $\\log$ works. That its 'sort of' rooted in prime factorization, but more general than that.\n\nI thought this was really interesting. And I'm glad this is all over of me attempting to find something that wasn't there, but had some sort of 'transcendental' version that explains everything.\n\nHe gave me some homework to do, the first question being one that I could probably do my self with my current skill sets, and another I'll have to wait after I take Measure Theory in the Winter Semester.\n\nThe first is that:\n\n• If $f:(\\mathbb{R}^{>0},\\cdot)\\to(\\mathbb{R},+)$ continuously with the properties that $f(ab)=f(a)+f(b)$ and $f(a^{b})=b\\cdot f(a)$ and $f(1)=0$, then $f(x)=\\log(x)$\n\n• There exist many non-measurable $f:(\\mathbb{R}^{>0},\\cdot)\\to(\\mathbb{R},+)$ with the properties $f(ab)=f(a)+f(b)$, $f(a^{b})=b\\cdot f(a)$, and $f(1)=0$.\n\nI hope to be able to do these sometime in the future, but we'll see what happens.\n\nThank you to everyone that has helped me! Especially micromass for putting up with my lack of education in abstract algebra and metric space topology. I swear I'll get better after this year when I finally get a formal introduction to everything! haha\n\nI'm very glad you have found an answer to your (very intriguing) question. I'm also happy that you learned a bit of new mathematics this way!!\n\nI hope you enjoyed thinking about this. These are the things that math research is all about", null, "" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89925885,"math_prob":0.99612343,"size":7836,"snap":"2021-04-2021-17","text_gpt3_token_len":2497,"char_repetition_ratio":0.16853933,"word_repetition_ratio":0.37093863,"special_character_ratio":0.29313424,"punctuation_ratio":0.108947046,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999002,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T17:37:44Z\",\"WARC-Record-ID\":\"<urn:uuid:24711272-e9f9-4388-ba6c-76a61f53f934>\",\"Content-Length\":\"142007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:acc21166-ba28-4587-b088-768a8bedba18>\",\"WARC-Concurrent-To\":\"<urn:uuid:add4b3e0-f2c5-4415-b153-03858b6b5056>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/commutative-monoid-and-some-properties.522807/page-2\",\"WARC-Payload-Digest\":\"sha1:FHLONVB2CG56QKJTRQKS276MD7P62TU4\",\"WARC-Block-Digest\":\"sha1:DCQXNIIUYSLSAZDXRKBZIENKHCREUMWD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703527224.75_warc_CC-MAIN-20210121163356-20210121193356-00213.warc.gz\"}"}
https://datascience.stackexchange.com/questions/53857/why-could-an-overfitted-cnn-model-have-a-higher-validation-accuracy
[ "# Why could an overfitted CNN model have a higher validation accuracy?\n\nI am currently training a CNN model by using cifar10 images (50000 for training, another 10000 for validation).\n\nI plot training loss, validation loss and accuracy against training iteration:", null, "I am not sure when should I stop the training, should I stop it at around iteration 1250, since overfitting happens beyond that point? Or should I stop it at around iteration 5000, since I can get the maximum validation accuracy?\n\nWhy can the overfitted model have a higher validation accuracy?\n\n• How large is your test set? – Peter Jun 16 '19 at 19:10\n• @Peter 50000 for training, another 10000 for validation, this is only the training, therefore no test set was used. – Lei Xun Jun 18 '19 at 20:58\n• maybe a look at a test set, which has not been used for training or evaluation, would help to get a better idea if the large difference in scores is really sustainable. If this works, I see no reason why one should have doubt about performance. – Peter Jun 18 '19 at 21:01\n• @Peter Do you mean using a test set to see which model is actually better? Can I use the validation set as the test set? Since the model did not learn from it. – Lei Xun Jun 18 '19 at 21:07\n\nI will try to answer your question \"Why can the overfitted model have a higher validation accuracy?\" with an example, then try to answer the remaining questions (TLDR at bottom):\n\nSay we have 2 classes with a 1:1 ratio in both the training and testing sets (each 4 instances).\n\nPerhaps our training/validation curves are just like yours (I highlighted some points for reference).", null, "Going from point A to point B ($$\\overline{AB}$$), we do not observe overfitting:\n\n• validation accuracy increases\n• validation loss decreases\n\n$$\\therefore$$ We might conclude an inverse relationship between validation accuracy/loss.\n\nHowever, $$\\overline{BC}$$ shows overfitting:\n\n• validation accuracy is nearly constant\n• validation loss increases\n\nThis appears to contradict the relationship we observed with the $$\\overline{AB}$$ example!\n\nThe reason behind this phenomenon is simple: we cannot assume a linear dependence between the accuracy and loss functions.\n\nLet us define our accuracy and loss functions respectively:\n\n$$accuracy:\\quad 1-\\frac{1}{N}\\sum^N_{i=1}|[X_i]-\\hat{X}_i|$$\n\n$$loss \\space (MSE):\\quad \\frac{1}{N}\\sum^N_{i=1}(X_i-\\hat{X}_i)^2$$\n\n$$\\{X \\space | \\space 0\n\nWe can see that these are linearly independent (which you could verify by calculating their Wronskian). How might this affect the relationship between training/loss curves?\n\nLet us plug-in some numbers to see. Columns {A,B,C} represent the model output vectors for these three epochs and the corresponding ground truth is in $$\\hat{X}$$ column.\n\n$$\\begin{array}{|c|c|c|c|} \\hline A & B & C & \\hat{X}\\\\ \\hline 0.6 & 0.3 & 0.3 & 0\\\\ \\hline 0.2 & 0.2 & 0.2 & 0 \\\\ \\hline 0.4 & 0.4 & 0.1 & 1 \\\\ \\hline 0.8 & 0.8 & 0.8 & 1 \\\\ \\hline \\end{array}$$\n\n$$\\begin{array}{|c|c|c|} \\hline epoch & accuracy \\space calculation & accuracy & loss \\space calculation & loss \\\\ \\hline A & 1-\\frac{1}{4} * (1 + 0 + 1 + 0) & 0.50 & \\frac{1}{4} * (0.6^2 + 0.2^2 + 0.6^2 + 0.2^2) & 0.2000\\\\ \\hline B & 1-\\frac{1}{4} * (0 + 0 + 1 + 0) & 0.75 & \\frac{1}{4} * (0.3^2 + 0.2^2 + 0.6^2 + 0.2^2) & 0.1325 \\\\ \\hline C & 1-\\frac{1}{4} * (0 + 0 + 1 + 0) & 0.75 & \\frac{1}{4} * (0.3^2 + 0.2^2 + 0.9^2 + 0.2^2) & 0.2450 \\\\ \\hline \\end{array}$$\n\nTherefore, $$C_{loss} > A_{loss},\\quad$$even though$$\\quad C_{accuracy} > A_{accuracy}$$!\n\nOf course this is a very simple example, and your situation will have many more complexities, but establishing the linear independence of the validation accuracy and validation loss and understanding these curves is a critical aspect of determining which epoch model to select.\n\nTLDR: With this in mind, you should use the model instance at epoch B. This is the point at which overfitting starts, not at epoch 1250. (although I suggest saving many intermediate models and empirically determining their viability on another dataset).\n\nAfter epoch B, your validation accuracy is not changing while your training loss decreases. Therefore, your weights are fitting more to the training data without improving your results, ergo overfitting. At epoch 1250, your model is still improving validation accuracy quite dramatically, so stopping training then would yield a very poor model.\n\nAs we can see from your validation accuracy curve, this model will probably perform similarly to models beyond epoch B, but there are a couple reasons you might prefer model B.\n\nIf you ever wanted to resume training (perhaps you add more data to your training set but do not want to restart completely), then having weights that are not overly fitted (or even vanished!) to the initial dataset would be ideal.\n\nA very similar reason is that, even though this particular dataset shows virtually no difference between the epoch B model and models beyond epoch B, other datasets might. It would then be easier to adjust model B for these new datasets rather than an overfitted model.\n\n• Thanks for the reply. I wonder if we could use validation loss to find the over-fitting point? Previous I thought the gap between training loss and validation loss is the over-fitting point. – Lei Xun Jun 24 '19 at 17:33\n• You start overfitting when your results on the validation set either worsen or remain constant while your training loss decreases. The \"results\" on the validation set could mean loss but it could also mean accuracy. Generally loss is more granular, so it is a better indicator. With this definition, we can see that after epoch B, neither validation accuracy nor validation loss improves, so that is where the model starts overfitting. It is therefore not the gap that determines overfitting, rather the derivatives of the curves. – Benji Albert Jun 24 '19 at 18:17\n• I see now. Thanks. – Lei Xun Jun 24 '19 at 19:00" ]
[ null, "https://i.stack.imgur.com/iGt2x.png", null, "https://i.stack.imgur.com/IwJgT.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.869846,"math_prob":0.9874864,"size":3829,"snap":"2019-51-2020-05","text_gpt3_token_len":1102,"char_repetition_ratio":0.13411765,"word_repetition_ratio":0.06320908,"special_character_ratio":0.3032123,"punctuation_ratio":0.1127321,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9941116,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T13:25:51Z\",\"WARC-Record-ID\":\"<urn:uuid:88c58eea-1381-4a30-8182-26c9ee03546d>\",\"Content-Length\":\"144786\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea62acd8-f854-48c5-8211-22edc0c08dc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:13b0132b-76c2-49e5-9fa1-cc77f2a4b7c8>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://datascience.stackexchange.com/questions/53857/why-could-an-overfitted-cnn-model-have-a-higher-validation-accuracy\",\"WARC-Payload-Digest\":\"sha1:2YUYPUVFUFIVZTWO6K7QTA4XGVXJIEAE\",\"WARC-Block-Digest\":\"sha1:GCZ5EJW7LLZSFE7QTFN2ZLMDZJG3OK64\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250620381.59_warc_CC-MAIN-20200124130719-20200124155719-00384.warc.gz\"}"}
https://www.youphysics.education/relative-motion/relative-motion-problems/relative-motion-problem-7/
[ "# Relative Motion - Earth's rotation and Coriolis acceleration\n\nProblem Statement:\n\nA plane is traveling from the North Pole (we will assume that the earth is a sphere of radius RT) with a velocity v’ measured with respect to an Earth’s non-inertial frame of reference O’ (see figure). v’ is contained in the XY plane. The constant angular velocity of the earth is ω .\nCalculate the Coriolis acceleration vector of the plane for points A, B, C and D of its trajectory. Give the results as a function of the givens.", null, "Knowledge is free, but servers are not. Please consider supporting us by disabling your ad blocker on YouPhysics. Thanks!\n\nSolution:\n\nCoriolis acceleration\n\nThe Coriolis acceleration is given by:", null, "Where ω is the angular velocity of the rotating frame of reference (O’ in this problem) and v’ is the velocity of the moving particle with respect to the rotating reference frame.\n\nAs with any other cross product, the magnitude of the Coriolis acceleration is:", null, "Where θ in the angle between ω and v’.\n\nThe direction of the Coriolis acceleration is given by the right hand rule. We are going to learn how to use it with some examples.\n\nPoint A:\n\nAs you can see in the figure, when the plane is at point A θ is 900, and therefore the magnitude of the Coriolis acceleration is:", null, "We are going to use the right hand rule to determine the direction of the Coriolis acceleration.\n\nFirst we calculate the cross product:", null, "In the figure below you can see vectors ω and v’ at point A.", null, "To use the right hand rule, first we have to align the right hand with the first vector (in this case ω). Then we close the hand over the second one (v’). The thumb gives the direction of the cross product.\n\nThe cross product is not commutative; therefore you will have to respect the order of the vectors to calculate it correctly.\n\nThe cross product is always orthogonal to both vectors. In this example the cross product is perpendicular to the screen and points inwards, as the thumb indicates.\n\nLast we have to multiply by -1 (the minus sign is included in the definition of the Coriolis acceleration). This inverts the cross product direction, and therefore the Coriolis acceleration in this case will point outwards.\n\nIn the first figure you can see the unit vectors. The Coriolis acceleration at point A is parallel to k. And since we also have calculated its magnitude, the final value of the Coriolis acceleration when the plane is at point A is:", null, "Point B: The angle between ω and v’ at this point is 180-λ, as you can see in the figure below. We have translated ω to point B to make it easier to determine the angle.", null, "Therefore the magnitude of the Coriolis acceleration at point B is:", null, "We are going to use the right hand rule to determine the direction of the cross vector.", null, "The direction of the Coriolis acceleration at point B will be the same as at point A, since ω and v’ make the same plane as at point A.\n\nThe Coriolis acceleration at point B is:", null, "Knowledge is free, but servers are not. Please consider supporting us by disabling your ad blocker on YouPhysics. Thanks!\n\nPoint C: At this point the angle between ω and v’ is 180-λ, as you can see in the figure below. We have translated ω to point C to make it easier to determine the angle.", null, "Por tanto, el módulo de la aceleración de Coriolis del avión cuando se encuentra en el punto C es:", null, "We are going to use the right hand rule to determine the direction of the Coriolis acceleration. We begin with the cross product:", null, "In the figure below you can see vectors ω and v’ at point C:", null, "As in the previous cases, we are going to use the right hand rule, closing it from ω to v’. The thumb gives the direction of the cross product. In this case it is perpendicular to the screen and points outwards (parallel to k).\n\nLast we have to multiply by -1 (the minus sign is included in the definition of the Coriolis acceleration). This inverts the cross product direction, and therefore the Coriolis acceleration in this case will point inwards (-k).\n\nThe Coriolis acceleration at point C is given by:", null, "As you can see, the Coriolis acceleration has, for the same value of the latitude, opposite directions in the Northern and Southern hemispheres. This is why the trajectories deviate to the right in the first case and to the left in the second.\n\nPoint D: In this case ωv’ are parallel, and therefore the cross product is zero and so is the Coriolis acceleration.\n\nThe post Relative Motion - Earth's rotation and Coriolis acceleration appeared first on YouPhysics" ]
[ null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq1.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq2.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq3.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq5.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Sol2.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq6.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Sol1.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq4.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Sol3.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq7.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Sol4.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq4.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq5.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Sol5.png", null, "https://www.youphysics.education/wp-content/uploads/es/movimiento-relativo/problemas-de-movimiento-relativo/movimiento-relativo-problema-7/P7-Tierra2-Eq8.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8922272,"math_prob":0.9553919,"size":4169,"snap":"2020-45-2020-50","text_gpt3_token_len":956,"char_repetition_ratio":0.2007203,"word_repetition_ratio":0.2791612,"special_character_ratio":0.21372032,"punctuation_ratio":0.08659549,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950344,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,4,null,2,null,2,null,2,null,4,null,2,null,2,null,2,null,4,null,4,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T07:10:10Z\",\"WARC-Record-ID\":\"<urn:uuid:bbb377c2-94fa-4265-ae7d-f801181e6c9e>\",\"Content-Length\":\"35949\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1a47e39-64de-4fb8-9003-781c8cc6e3ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3402486-87b8-4f40-8f29-5b673f66caf8>\",\"WARC-IP-Address\":\"104.31.77.22\",\"WARC-Target-URI\":\"https://www.youphysics.education/relative-motion/relative-motion-problems/relative-motion-problem-7/\",\"WARC-Payload-Digest\":\"sha1:44YG3C3PNSHDBTE7XB2LQVE7XTGFZFAI\",\"WARC-Block-Digest\":\"sha1:UZUUCO7SZXTFKVVH2ZKBEFBXQ4GQRMDL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141186761.30_warc_CC-MAIN-20201126055652-20201126085652-00259.warc.gz\"}"}
https://slideplayer.com/slide/4919505/
[ "", null, "# FACTOR MODELS (Chapter 6)\n\n## Presentation on theme: \"FACTOR MODELS (Chapter 6)\"— Presentation transcript:\n\nFACTOR MODELS (Chapter 6)\nMarkowitz Model Employment of Factor Models Essence of the Single-Factor Model The Characteristic Line Expected Return in the Single-Factor Model Single-Factor Model’s Simplified Formula for Portfolio Variance Explained Versus Unexplained Variance Multi-Factor Models Models for Estimating Expected Return\n\nMarkowitz Model Problem: Tremendous data requirement.\nNumber of security variances needed = M. Number of covariances needed = (M2 - M)/2 Total = M + (M2 - M)/2 Example: (100 securities) 100 + (10, )/2 = 5,050 Therefore, in order for modern portfolio theory to be usable for large numbers of securities, the process had to be simplified. (Years ago, computing capabilities were minimal)\n\nEmployment of Factor Models\nTo generate the efficient set, we need estimates of expected return and the covariances between the securities in the available population. Factor models may be used in this regard. Risk Factors – (rate of inflation, growth in industrial production, and other variables that induce stock prices to go up and down.) May be used to evaluate covariances of return between securities. Expected Return Factors – (firm size, liquidity, etc.) May be used to evaluate expected returns of the securities. In the discussion that follows, we first focus on risk factor models. Then the discussion shifts to factors affecting expected security returns.\n\nEssence of the Single-Factor Model\nFluctuations in the return of a security relative to that of another (i.e., the correlation between the two) do not depend upon the individual characteristics of the two securities. Instead, relationships (covariances) between securities occur because of their individual relationships with the overall market (i.e., covariance with the market). If Stock (A) is positively correlated with the market, and if Stock (B) is positively correlated with the market, then Stocks (A) and (B) will be positively correlated with each other. Given the assumption that covariances between securities can be accounted for by the pull of a single common factor (the market), the covariance between any two stocks can be written as:\n\nThe Characteristic Line (See Chapter 3 for a Review of the Statistics)\nRelationship between the returns on an individual security and the returns on the market portfolio: Aj = intercept of the characteristic line (the expected rate of return on stock (j) should the market happen to produce a zero rate of return in any given period). j = beta of stock (j); the slope of the characteristic line. j,t = residual of stock (j) during period (t); the vertical distance from the characteristic line.\n\nGraphical Display of the Characteristic Line\nrj,t = j Aj rM,t\n\nThe Characteristic Line (Continued)\nNote: A stock’s return can be broken down into two parts: Movement along the characteristic line (changes in the stock’s returns caused by changes in the market’s returns). Deviations from the characteristic line (changes in the stock’s returns caused by events unique to the individual stock). Movement along the line: Aj + jrM,t Deviation from the line: j,t\n\nMajor Assumption of the Single-Factor Model\nThere is no relationship between the residuals of one stock and the residuals of another stock (i.e., the covariance between the residuals of every pair of stocks is zero). Stock j’s Residuals (%) Stock k’s Residuals (%)\n\nExpected Return in the Single-Factor Model\nActual Returns: Expected Residual: Given the characteristic line is truly the line of best fit, the sum of the residuals would be equal to zero Therefore, the expected value of the residual for any given period would also be equal to zero: Expected Returns: Given the characteristic line, and an expected residual of zero, the expected return of a security according to the single-factor model would be:\n\nSingle-Factor Model’s Simplified Formula for Portfolio Variance\nVariance of an Individual Security: Given: It Follows That:\n\nNote: Therefore:\n\nVariance of a Portfolio\nSame equation as the one for individual security variance: Relationship between security betas & portfolio betas Relationship between residual variances of stocks, and the residual variance of a portfolio, given the index model assumption. The residual variance of a portfolio is a weighted average of the residual variances of the stocks in the portfolio with the weights squared.\n\nExplained Vs. Unexplained Variance (Systematic Vs. Unsystematic Risk)\nTotal Risk = Systematic Risk + Unsystematic Risk Systematic: That part of total variance which is explained by the variance in the market’s returns. Unsystematic: The unexplained variance, or that part of total variance which is due to the stock’s unique characteristics.\n\nNote: [i.e., j22(rM) is equal to the coefficient of determination (the % of the variance in the security’s returns explained by the variance in the market’s returns) times the security’s total variance] Total Variance = Explained + Unexplained As the number of stocks in a portfolio increases, the residual variance becomes smaller, and the coefficient of determination becomes larger.\n\nExplained Vs. Unexplained Variance (A Graphical Display)\nResidual Variance Coefficient of Determination Number of Stocks Number of Stocks\n\nExplained Vs. Unexplained Variance (A Two Stock Portfolio Example)\nCovariance Matrix for Explained Variance Covariance Matrix for Unexplained Variance\n\nExplained Vs. Unexplained Variance (A Two Stock Portfolio Example) Continued\n\nA Note on Residual Variance\nThe Single-Factor Model assumes zero correlation between residuals: In this case, portfolio residual variance is expressed as: In reality, firms’ residuals may be correlated with each other. That is, extra-market events may impact on many firms, and: In this case, portfolio residual variance would be:\n\nMarkowitz Model Versus the Single-Factor Model (A Summary of the Data Requirements)\nNumber of security variances = m Number of covariances = (m2 - m)/2 Total = m + (m2 - m)/2 Example securities: 100 + (10, )/2 = 5,050 Single-Factor Model Number of betas = m Number of residual variances = m Plus one estimate of 2(rM) Total = 2m + 1 2(100) + 1 = 201\n\nMulti-Factor Models Recall the Single-Factor Model’s formula for portfolio variance: If there is positive covariance between the residuals of stocks, residual variance would be high and the coefficient of determination would be low. In this case, a multi-factor model may be necessary in order to reduce residual variance. A Two Factor Model Example where: rg = growth rate in industrial production rI = % change in an inflation index\n\nTwo Factor Model Example - Continued\nOnce again, it is assumed that the covariance between the residuals of the the individual stocks are equal to zero: Furthermore, the following covariances are also presumed: Portfolio Variance in a Two Factor Model:\n\nwhere: Note that if the covariances between the residuals of the individual securities are still significantly different from zero, you may need to develop a different model (perhaps a three, four, or five factor model).\n\nNote on the Assumption Cov(rg,rI ) = 0\nIf the Cov(rg,rI) is not equal to zero, the two factor model becomes a bit more complex. In general, for a two factor model, the systematic risk of a portfolio can be computed using the following covariance matrix: To simplify matters, we will assume that the factors in a multi-factor model are uncorrelated with each other. g,p I,p g,p I,p\n\nModels for Estimating Expected Return\nOne Simplistic Approach Use past returns to predict expected future returns. Perhaps useful as a starting point. Evidence indicates, however, that the future frequently differs from the past. Therefore, “subjective adjustments” to past patterns of returns are required. Systematic Risk Models One Factor Systematic Risk Model: Given a firm’s estimated characteristic line and an estimate of the future return on the market, the security’s expected return can be calculated.\n\nModels for Estimating Expected Return (Continued)\nTwo Factor Systematic Risk Model: N Factor Systematic Risk Model: Other Factors That May Be Used in Predicting Expected Return Note that the author discusses numerous factors in the text that may affect expected return. A review of the literature, however, will reveal that this subject is indeed controversial. In essence, you can spend the rest of your lives trying to determine the “best factors” to use. The following summarizes “some” of the evidence.\n\nOther Factors That May Be Used in Predicting Expected Return\nLiquidity (e.g., bid-asked spread) Negatively related to return [e.g., Low liquidity stocks (high bid-asked spreads) should provide higher returns to compensate investors for the additional risk involved.] Value Stock Versus Growth Stock P/E Ratios Low P/E stocks (value stocks) tend to outperform high P/E stocks (growth stocks). Price/(Book Value) Low Price/(Book Value) stocks (value stocks) tend to outperform high Price/(Book Value) stocks (growth stocks).\n\nOther Factors That May Be Used in Predicting Expected Return (continued)\nTechnical Analysis Analyze past patterns of market data (e.g., price changes) in order to predict future patterns of market data. “Volumes have been written on this subject. Size Effect Returns on small stocks (small market value) tend to be superior to returns on large stocks. Note: Small NYSE stocks tend to outperform small NASDAQ stocks. January Effect Abnormally high returns tend to be earned (especially on small stocks) during the month of January.\n\nOther Factors That May Be Used in Predicting Expected Return (continued)\nAnd the List Goes On If you are truly interested in factors that affect expected return, spend time in the library reading articles in Financial Analysts Journal, Journal of Portfolio Management, and numerous other academic journals. This could be an ongoing venture the rest of your life.\n\nBuilding a Multi-Factor Expected Return Model: One Possible Approach\nEstimate the historical relationship between return and “chosen” variables. Then use this relationship to predict future returns. Historical Relationship: Future Estimate:\n\nAsset Allocation Decisions\nUsing the Markowitz and Factor Models to Make Asset Allocation Decisions Asset Allocation Decisions Portfolio optimization is widely employed to allocate money between the major classes of investments: Large capitalization domestic stocks Small capitalization domestic stocks Domestic bonds International stocks International bonds Real estate\n\nUsing the Markowitz and Factor Models to Make Asset Allocation Decisions Continued: Strategic Versus Tactical Asset Allocation Strategic Asset Allocation Decisions relate to relative amounts invested in different asset classes over the long-term. Rebalancing occurs periodically to reflect changes in assumptions regarding long-term risk and return, changes in the risk tolerance of the investors, and changes in the weights of the asset classes due to past realized returns. Tactical Asset Allocation Short-term asset allocation decisions based on changes in economic and financial conditions, and assessments as to whether markets are currently underpriced or overpriced.\n\nMarkowitz Full Covariance Model\nUsing the Markowitz and Factor Models to Make Asset Allocation Decisions Continued Markowitz Full Covariance Model Use to allocate investments in the portfolio among the various classes of investments (e.g., stocks, bonds, cash). Note that the number of classes is usually rather small. Factor Models Use to determine which individual securities to include in the various asset classes. The number of securities available may be quite large. Expected return factor models could also be employed to provide inputs regarding expected return into the Markowitz model. Further Information Interested readers may refer to Chapter 7, Asset Allocation, for a more indepth discussion of this subject. In addition, the author has provided “hands on” examples of manipulating data using the PManager software in the process of making asset allocation decisions." ]
[ null, "https://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8757911,"math_prob":0.9479791,"size":12240,"snap":"2019-43-2019-47","text_gpt3_token_len":2528,"char_repetition_ratio":0.16124551,"word_repetition_ratio":0.06581741,"special_character_ratio":0.1988562,"punctuation_ratio":0.09943715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9869181,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T08:58:56Z\",\"WARC-Record-ID\":\"<urn:uuid:c1646c8a-e5da-4ec4-9ce8-de48c253d29f>\",\"Content-Length\":\"209280\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a1ec8c5-906d-4d5a-896d-f372720c26ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:a03a8c17-6fb6-423a-940d-190ae9e858cf>\",\"WARC-IP-Address\":\"144.76.153.40\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/4919505/\",\"WARC-Payload-Digest\":\"sha1:BIRNZCR2HR46JBHKEEI2MFKOSAOQJTUR\",\"WARC-Block-Digest\":\"sha1:SB45V6ZEXQG4YQKWARONTUAMNBTBJOT6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668594.81_warc_CC-MAIN-20191115065903-20191115093903-00314.warc.gz\"}"}
https://www.cut-the-knot.org/arithmetic/MinMax.shtml
[ "# Minimax Principle\n\nThe purpose of the applet below is to illustrate a mathematical fact that plays an important role in the Game Theory, Economics, and general optimization problems. I postpone the statement until later. See if, with the help of the applet, you can arrive at the right formulation yourself.\n\nGiven a rectangular array of numbers. In every column, select the maximum value and write it under the column. Similarly, in every row, select the minimum value and write it to the right of the column. In the applet, these computed values appear in blue. In this fashion, our original array of numbers is augmented with one (blue) row and one (blue) column. In the blue row select the minimum element and write it (in red) to the right of the row. Similarly, in the blue column select the maximum number and right it under that column. Compare the two (red) numbers. Is there any regularity? Most of the time the numbers will be different. Sometimes they will coincide. On such occasions, the applet will provide an additional clue. Try to grasp its significance.\n\nTo get a different configuration, press the Reset button. With larger matrix dimensions, it may become tiresome to keep pressing the Reset, just in order to get a matrix with a saddle point. However, each of the numbers in the matrix can be changed individualy. Click or drag the mouse off the vertical center line of each entry. The number will go down or up, depending on whether the cursor is to the left or right of the central line.\n\n### If you are reading this, your browser is not set to run Java applets. Try IE11 or Safari and declare the site https://www.cut-the-knot.org as trusted in the Java setup.", null, "What if applet does not run?\n\nStatement", null, "Let's first of all formalize the procedure. We start with an array Ai,k of numbers. By convention, the first index designate a row, the second a column. For example, element A3,5 is located in the row number 3 and the column # 5. Keep the first index fixed - this is to say, select a row (the row corresponding to the fixed first index.) Denote the minimum element in this row as Bi = Min(Ai,k). (The minimum is taken over all k for a fixed i.) The result generally depends on the selected row. Denote the maximum number among Bi's as MaxMin = Max(Bi). (The maximum is taken over all i's.)\n\nIn a similar manner, let Ck = Max(Ai,k), where, as before, maximum is taken over all i's (for a fixed k, i.e., for a fixed column.) Compute then MinMax = Min(Ck).\n\n### Statement\n\n1. MinMax ≥ MaxMin\n2. The equality is achieved iff there exists an element Ai0,k0 that is simultaneously maximal in its column and minimal in its row.\n\n### Proof\n\nWriting subscripts in HTML (this is the language of the Web) is somewhat awkward. Let's just agree that Max is taken over index i, whereas Min is always taken over index k.\n\nAssume MinMax = Ai0 ,k0 for some indices i0 and k0. MinMax = Min(Ck) = Ck0. In other words, MinMax is the maximum element in column k0. By definition, Ck0 = Max(Ai, k0). Every element Ai,k0 in column k0 may or may not be the smallest number in its row. But, in any event, the smallest number in a row can't exceed any element in that row:\n\nAi,k0 ≥ Min(Ai, k) = Bi\n\nCombining that with\n\nAi0, k0 = Max(Ai, k0) ≥ Ai, k0\n\n(which is just a different way of describing our selection of indices) we get\n\nAi0,k0 ≥ Min(Ai,k) = Bi\n\nThe left hand side in the latter does not depend on the index i, and the inequality holds for any value of i (any row in the table.) Therefore, maximizing the right hand side will preserve the inequality:\n\n (1) MinMax = Ai0,k0 ≥ Max(Bi) = MaxMin\n\nwhich proves the first part of the statement.\n\nTo prove the second part, first assume that MinMax = MaxMin. Introduce\n\nMinMax = Min(Ck) = Ck0 and\nMaxMin = Max(Bi) = Bi0,\n\nand consider Ai0,k0. By definition,\n\nMaxMin = Bi0 ≤ Ai0,k0 ≤ Ck0 = MinMax.\n\nBy the assumption MinMax = MaxMin,\n\nMinMax = Ai0,k0 = MaxMin.\n\nConversely, assume there exists an element Ai0,k0 such that\n\nCk0 = Max(Ai, k0) = Ai0, k0 = Min(Ai0, k) = Bi0\n\nThen, by definition,\n\nMaxMin = Max(Bi) ≥ Bi0 = Ck0 ≥ Min(Ck) = MinMax\n\nHowever, as we have shown in the first part of the proof, MinMax ≥ MaxMin always, which gives MinMax = MaxMin.", null, "The point (i0, k0) such that MinMax = Ai0,k0 = MaxMin is called a saddle point for a given table A. Saddle points may exist for functions of continuous arguments as well. Graphs of such functions in a vicinity of a saddle point in fact resemble the shape of a saddle.\n\nThe saddle points have interesting properties that you may observe with the applet and also try proving:\n\n1. If there are more than one saddle point, then all corresponding matrix entries coincide.\n2. If there are two saddle points at (i1, j1) and (i2, j2), such that i1 differs from i2 and j1 differs from j2, then (i1, j2) and (i2, j1) are also saddle points.\n\nPeter Bajorski rewrote the above replacing matrices with a more general functional setting. His writeup is available as a pdf file.", null, "### A Sample of Optimization Problems II\n\n• Mathematicians Like to Optimize\n• Building a Bridge\n• Building Bridges\n• Optimization Problem in Acute Angle\n•", null, "" ]
[ null, "https://www.cut-the-knot.org/Curriculum/Algebra/MinMax.jpg", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87297344,"math_prob":0.98658216,"size":3950,"snap":"2022-40-2023-06","text_gpt3_token_len":1018,"char_repetition_ratio":0.11353269,"word_repetition_ratio":0.020057306,"special_character_ratio":0.24936709,"punctuation_ratio":0.124523506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99281234,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T21:11:09Z\",\"WARC-Record-ID\":\"<urn:uuid:57dca148-fe64-4b65-b3fe-1d54028150a4>\",\"Content-Length\":\"20060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:784d878a-54c6-492f-9659-5837f7d8ab92>\",\"WARC-Concurrent-To\":\"<urn:uuid:afc20de0-cddd-4ae2-a8c1-7ff72aabb470>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"https://www.cut-the-knot.org/arithmetic/MinMax.shtml\",\"WARC-Payload-Digest\":\"sha1:QKW4EWKUCWONM2WM6Q3VVCHTCKVH6KBP\",\"WARC-Block-Digest\":\"sha1:KT63J5IJI4T2BE2VRMBIHM3PM6DWAF3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00372.warc.gz\"}"}
https://idris2docs.sinyax.net/data/base/docs/Data.Bifoldable.html
[ "Idris2Doc : Data.Bifoldable\n\n# Data.Bifoldable\n\nbiall : Bifoldablep => (a -> Bool) -> (b -> Bool) -> pab -> Bool\nThe disjunction of the collective results of applying a predicate to all\nelements of a structure. `biall` short-circuits from left to right.\nbiand : Bifoldablep => p Lazy Bool Lazy Bool -> Bool\nThe conjunction of all elements of a structure containing lazy boolean\nvalues. `biand` short-circuits from left to right, evaluating until either an\nelement is `False` or no elements remain.\nbiany : Bifoldablep => (a -> Bool) -> (b -> Bool) -> pab -> Bool\nThe disjunction of the collective results of applying a predicate to all\nelements of a structure. `biany` short-circuits from left to right.\nbichoice : (Bifoldablep, Alternativef) => p Lazy (fa) Lazy (fa) -> fa\nBifold using Alternative.\n\nIf you have a left-biased alternative operator `<|>`, then `choice` performs\nleft-biased choice from a list of alternatives, which means that it\nevaluates to the left-most non-`empty` alternative.\nbichoiceMap : (Bifoldablep, Alternativef) => (a -> fx) -> (b -> fx) -> pab -> fx\nA fused version of `bichoice` and `bimap`.\nbiconcat : (Bifoldablep, Monoidm) => pmm -> m\nCombines the elements of a structure using a monoid.\nbiconcatMap : (Bifoldablep, Monoidm) => (a -> m) -> (b -> m) -> pab -> m\nCombines the elements of a structure,\ngiven ways of mapping them to a common monoid.\nbifoldMap : (Bifoldablep, Monoidm) => (a -> m) -> (b -> m) -> pab -> m\nCombines the elements of a structure,\ngiven ways of mapping them to a common monoid.\nbifoldlM : (Bifoldablep, Monadm) => (a -> b -> ma) -> (a -> c -> ma) -> a -> pbc -> ma\nLeft associative monadic bifold over a structure.\nbifor_ : (Bifoldablep, Applicativef) => pab -> (a -> fx) -> (b -> fy) -> fUnit\nLike `bitraverse_` but with the arguments flipped.\nbior : Bifoldablep => p Lazy Bool Lazy Bool -> Bool\nThe disjunction of all elements of a structure containing lazy boolean\nvalues. `bior` short-circuits from left to right, evaluating either until an\nelement is `True` or no elements remain.\nbiproduct : (Bifoldablep, Numa) => paa -> a\nMultiply together all elements of a structure.\nbiproduct' : (Bifoldablep, Numa) => paa -> a\nMultiply together all elements of a structure.\nSame as `product` but tail recursive.\nbisequence_ : (Bifoldablep, Applicativef) => p (fa) (fb) -> fUnit\nEvaluate each computation in a structure and discard the results.\nbisum : (Bifoldablep, Numa) => paa -> a\nAdd together all the elements of a structure.\nbisum' : (Bifoldablep, Numa) => paa -> a\nAdd together all the elements of a structure.\nSame as `bisum` but tail recursive.\nbitraverse_ : (Bifoldablep, Applicativef) => (a -> fx) -> (b -> fy) -> pab -> fUnit\nMap each element of a structure to a computation, evaluate those" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6480774,"math_prob":0.41599476,"size":2927,"snap":"2021-43-2021-49","text_gpt3_token_len":884,"char_repetition_ratio":0.17687307,"word_repetition_ratio":0.4760274,"special_character_ratio":0.3194397,"punctuation_ratio":0.11851852,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97795457,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T00:34:58Z\",\"WARC-Record-ID\":\"<urn:uuid:3785f84f-0713-4e71-a8f6-7d2b27013f69>\",\"Content-Length\":\"15117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12946b16-8a5e-43df-a711-2fdf76305b0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f5af269e-f9ef-4acf-b5bb-8e8014d7a12f>\",\"WARC-IP-Address\":\"176.9.78.6\",\"WARC-Target-URI\":\"https://idris2docs.sinyax.net/data/base/docs/Data.Bifoldable.html\",\"WARC-Payload-Digest\":\"sha1:W4KDUY64CKZJRIQH472NT32LFIAFFKWJ\",\"WARC-Block-Digest\":\"sha1:KVQMJWL7BY6EWCKNWWT6GTENRVISNJ56\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585449.31_warc_CC-MAIN-20211021230549-20211022020549-00488.warc.gz\"}"}
https://readymadepubquiz.com/quiz-14-round-3-maths/
[ "", null, "# Quiz 14 – Round 3 – Maths\n\n1. How many minutes are there in a day?\n\n1440\n\n2. How many millilitres are equal to 3 standard bottles of wine?\n\n2250 (750 ml per bottle)\n\n3. The internal angles of a triangle always add up to how many degrees?\n\n180\n\n4. If it takes 3 women 1 hour to drink a bottle of wine, how long will it take 2 women?\n\n1 and a half hours\n\n5. What is the name for a 14-sided polygon?\n\n6. Which is the next biggest prime number after 23?\n\n29\n\n7. What is the next number in the sequence; 2, 7, 17, 37, 77, …\n\n157 (multiply each number by two and then add three to get the next in the sequence)\n\n8. Mathematically, what is an apex?\n\nThe vertex at the tip of a cone or pyramid\n\n9. What is half a sphere called?\n\nHemisphere\n\n10. If Frank got 18 out of 20 in his maths test, what percentage would he have?\n\n90%\n\n### 1 thought on “Quiz 14 – Round 3 – Maths”\n\n1.", null, "Very good questions\nuseful for my son\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20137%20138%22%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%2050%2050%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9302415,"math_prob":0.91496927,"size":864,"snap":"2020-34-2020-40","text_gpt3_token_len":253,"char_repetition_ratio":0.10232558,"word_repetition_ratio":0.0,"special_character_ratio":0.31828704,"punctuation_ratio":0.13592233,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96387005,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T12:50:57Z\",\"WARC-Record-ID\":\"<urn:uuid:2cd315b7-4555-4bec-8ad0-affd4e9613c7>\",\"Content-Length\":\"58748\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04abf6dc-9845-43de-9838-0978541df20a>\",\"WARC-Concurrent-To\":\"<urn:uuid:648c62d0-75ea-475a-87d1-a4eb8804c37b>\",\"WARC-IP-Address\":\"149.255.56.211\",\"WARC-Target-URI\":\"https://readymadepubquiz.com/quiz-14-round-3-maths/\",\"WARC-Payload-Digest\":\"sha1:63QZ5XKAZFCDXGKQYUZORAE2UD7MDAX5\",\"WARC-Block-Digest\":\"sha1:XG6IZV4ND3M46CTU67TT4TX6YYQ5NTDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401600771.78_warc_CC-MAIN-20200928104328-20200928134328-00609.warc.gz\"}"}
http://www.pearltrees.com/u/8834986-minkowski-encyclopedia
[ "", null, "# Minkowski space", null, "In theoretical physics, Minkowski space is often contrasted with Euclidean space. While a Euclidean space has only spacelike dimensions, a Minkowski space also has one timelike dimension. The isometry group of a Euclidean space is the Euclidean group and for a Minkowski space it is the Poincaré group. History In 1905 (published 1906) it was noted by Henri Poincaré that, by taking time to be the imaginary part of the fourth spacetime coordinate √−1 ct, a Lorentz transformation can be regarded as a rotation of coordinates in a four-dimensional Euclidean space with three real coordinates representing space, and one imaginary coordinate, representing time, as the fourth dimension. The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. For further historical information see references Galison (1979), Corry (1997), Walter (1999). Structure The Minkowski inner product Standard basis where Related:  MathematicsPhysicsPhysics\n\nHilbert space The state of a vibrating string can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct overtones is given by the projection of the point onto the coordinate axes in the space. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer)—and ergodic theory, which forms the mathematical underpinning of thermodynamics. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. Definition and illustration Motivating example: Euclidean space Definition\n\nLorentz group The mathematical form of Basic properties The Lorentz group is a subgroup of the Poincaré group, the group of all isometries of Minkowski spacetime. The Lorentz transformations are precisely the isometries which leave the origin fixed. Thus, the Lorentz group is an isotropy subgroup of the isometry group of Minkowski spacetime. For this reason, the Lorentz group is sometimes called the homogeneous Lorentz group while the Poincaré group is sometimes called the inhomogeneous Lorentz group. Mathematically, the Lorentz group may be described as the generalized orthogonal group O(1,3), the matrix Lie group which preserves the quadratic form on R4. The restricted Lorentz group arises in other ways in pure mathematics. Connected components Each of the four connected components can be categorized by which of these two properties its elements have: Lorentz transformations which preserve the direction of time are called orthochronous. P = diag(1, −1, −1, −1) T = diag(−1, 1, 1, 1). where\n\nMinkowski diagram Minkowski diagram with resting frame (x,t), moving frame (x′,t′), light cone, and hyperbolas marking out time and space with respect to the origin. The Minkowski diagram, also known as a spacetime diagram, was developed in 1908 by Hermann Minkowski and provides an illustration of the properties of space and time in the special theory of relativity. It allows a quantitative understanding of the corresponding phenomena like time dilation and length contraction without mathematical equations. The term Minkowski diagram is used in both a generic and particular sense. In general, a Minkowski diagram is a graphic depiction of a portion of Minkowski space, often where space has been curtailed to a single dimension. These two-dimensional diagrams portray worldlines as curves in a plane that correspond to motion along the spatial axis. Basics A photon moving right at the origin corresponds to the yellow track of events, a straight line with a slope of 45°. Different scales on the axes. History\n\nSpacetime symmetries Spacetime symmetries are features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems, spacetime symmetries finding ample application in the study of exact solutions of Einstein's field equations of general relativity. Physical motivation Physical problems are often investigated and solved by noticing features which have some form of symmetry. preserving geodesics of the spacetimepreserving the metric tensorpreserving the curvature tensor These and other symmetries will be discussed in more detail later. Mathematical definition A rigorous definition of symmetries in general relativity has been given by Hall (2004). on M. the term on the right usually being written, with an abuse of notation, as Killing symmetry A Killing vector field is one of the most important types of symmetries and is defined to be a smooth vector field that preserves the metric tensor:\n\nGravitation Gravitation, or gravity, is a natural phenomenon by which all physical bodies attract each other. It is most commonly recognized and experienced as the agent that gives weight to physical objects, and causes physical objects to fall toward the ground when dropped from a height. During the grand unification epoch, gravity separated from the electronuclear force. History of gravitational theory Scientific revolution Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. Newton's theory of gravitation In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Equivalence principle Formulations of the equivalence principle include: General relativity Specifics\n\nPascal's law Pascal's law or the principle of transmission of fluid-pressure is a principle in fluid mechanics that states that pressure exerted anywhere in a confined incompressible fluid is transmitted equally in all directions throughout the fluid such that the pressure variations (initial differences) remain the same. The law was established by French mathematician Blaise Pascal. Definition Pressure in water and air. Pascal's law applies only for fluids. Pascal's principle is defined as A change in pressure at any point in an enclosed fluid at rest is transmitted undiminished to all points in the fluid This principle is stated mathematically as: ρ is the fluid density (in kilograms per cubic meter in the SI system); g is acceleration due to gravity (normally using the sea level acceleration due to Earth's gravity in metres per second squared); Explanation Pascal's principle applies to all fluids, whether gases or liquids. Applications See also References\n\nCausal dynamical triangulation Causal dynamical triangulation (abbreviated as CDT) invented by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz, and popularized by Fotini Markopoulou and Lee Smolin, is an approach to quantum gravity that like loop quantum gravity is background independent. This means that it does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. The Loops '05 conference, hosted by many loop quantum gravity theorists, included several presentations which discussed CDT in great depth, and revealed it to be a pivotal insight for theorists. It has sparked considerable interest as it appears to have a good semi-classical description. At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time. Introduction Derivation Advantages and Disadvantages Related theories See also References\n\nMetric (mathematics) In differential geometry, the word \"metric\" may refer to a bilinear form that may be defined from the tangent vectors of a differentiable manifold onto a scalar, allowing distances along curves to be determined through integration. It is more properly termed a metric tensor. d : X × X → R (where R is the set of real numbers). For all x, y, z in X, this function is required to satisfy the following conditions: d(x, y) ≥ 0 (non-negativity, or separation axiom)d(x, y) = 0 if and only if x = y (identity of indiscernibles, or coincidence axiom)d(x, y) = d(y, x) (symmetry)d(x, z) ≤ d(x, y) + d(y, z) (subadditivity / triangle inequality). Conditions 1 and 2 together produce positive definiteness. A metric is called an ultrametric if it satisfies the following stronger version of the triangle inequality where points can never fall 'between' other points: For all x, y, z in X, d(x, z) ≤ max(d(x, y), d(y, z)) d(x, y) = d(x + a, y + a) for all x, y and a in X. If a modification of the triangle inequality\n\nRelated:" ]
[ null, "http://cdn2.pearltrees.com/v2/asset/background/default_background-lightest.jpg", null, "http://cdn.pearltrees.com/s/pic/sq/minkowski-encyclopedia-8834986", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9141477,"math_prob":0.96291494,"size":8162,"snap":"2020-34-2020-40","text_gpt3_token_len":1762,"char_repetition_ratio":0.114243686,"word_repetition_ratio":0.0064205457,"special_character_ratio":0.19308993,"punctuation_ratio":0.09289233,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.991835,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T00:49:02Z\",\"WARC-Record-ID\":\"<urn:uuid:16786145-7cb5-4f98-8a32-179f89a29b22>\",\"Content-Length\":\"100628\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6217f5f3-688d-4f96-b48a-7d4ce6a452f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2049edc-1b24-4f26-ada8-ad3bb110ed45>\",\"WARC-IP-Address\":\"93.184.35.40\",\"WARC-Target-URI\":\"http://www.pearltrees.com/u/8834986-minkowski-encyclopedia\",\"WARC-Payload-Digest\":\"sha1:SE65MNTSQ67JJZNCOWWQDQOT7US7ZZY5\",\"WARC-Block-Digest\":\"sha1:K3VWSHNTPU2KFVLG2GC353HOHRRDFIHP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400249545.55_warc_CC-MAIN-20200926231818-20200927021818-00467.warc.gz\"}"}
https://www.dotlayer.org/en/training-rnn-using-pytorch/
[ "# Training a Recurrent Neural Network (RNN) using PyTorch\n\nTrain an RNN for address parsing\n\nIn this article, we will train an RNN, or more precisely, an LSTM, to predict the sequence of tags associated with a given address, known as address parsing.\n\nAlso, the article is available in a Jupyter Notebook or in a Google Colab Jupyter notebook.\n\nBefore starting this article, we would like to disclaim that this tutorial is greatly inspired by an online tutorial David created for the Poutyne framework. Also, the content is based on a recent article we wrote about address tagging. However, there are differences between the present work and the two others, as this one is specifically designed for the less technical reader.\n\nSequential data, such as addresses, are pieces of information that are deliberately given in a specific order. In other words, they are sequences with a particular structure; and knowing this structure is crucial for predicting the missing entries of a given truncated sequence. For example, when writing an address, we know, in Canada, that after the civic number (e.g. 420), we have the street name (e.g. du Lac). Hence, if one is asked to complete an address containing only a number, he can reasonably assume that the next information that should be added to the sequence is a street name. Various modelling approaches have been proposed to make predictions over sequential data. Still, more recently, deep learning models known as Recurrent Neural Network (RNN) have been introduced for this type of data.\n\nThe main purpose of this article is to introduce the various tricks (e.g., padding and packing) that are required for training an RNN. Before we do that, let us define our “address” problem more formally and elaborate on what RNNs (and LSTMs) actually are.\n\nAddress tagging is the task of detecting and tagging the different parts of an address such as the civic number, the street name or the postal code (or zip code). The following figure shows an example of such a tagging.", null, "For our purpose, we define 8 pertinent tags that can be found in an address: [StreetNumber, StreetName, Orientation, Unit, Municipality, Province, PostalCode, GeneralDelivery].\n\nSince addresses are sequences of arbitrary length where a word’s index does not mean as much as its position relative to others, one can hardly rely on a simple fully connected neural network for address tagging. A dedicated type of neural networks was specifically designed for this kind of tasks involving sequential data: RNNs.\n\n## Recurrent Neural Network (RNN)\n\nIn brief, an RNN is a neural network in which connections between nodes form a temporal sequence. It means that this type of network allows previous outputs to be used as inputs for the next prediction. For more information regarding RNNs, have a look at Stanford’s freely available cheastsheet.\n\nFor our purpose, we do not use the vanilla RNN, but a widely-use variant of it known as long short-term memory (LSTM) network. This latter, which involves components called gates, is often preferred over its competitors due to its better stability with respect to gradient update (vanishing and exploding gradient). To learn more about LSTMs, see here for an in-depth explanation.\n\nFor now, let’s simply use a single layer unidirectional LSTM. We will, later on, explore the use of more layers and a bidirectional approach.\n\n### Word Embeddings\n\nSince our data is text, we will use a well-known text encoding technique: word embeddings. Word embeddings are vector representations of words. The main hypothesis underlying their use is that there exists a linear relation between words. For example, the linear relation between the word king and queen is gender. So logically, if we remove the vector corresponding to male to the one for king, and then add the vector for female, we should obtain the vector corresponding to queen (i.e. king - male + female = queen). That being said, this kind of representation is usually made in high dimensions such as 300, which makes it impossible for humans to reason about them. Neural networks, on the other hand, can efficiently make use of the implicit relations despite their high dimensionality.\n\nWe therefore fix our LSTM’s input and hidden state dimensions to the same sizes as the vectors of embedded words. For the present purpose, we will use the French pre-trained fastText embeddings of dimension 300.\n\n### The PyTorch Model\n\nLet us first import all the necessary packages.\n\nimport gzip\nimport os\nimport pickle\nimport re\nimport shutil\nimport warnings\nfrom io import TextIOBase\n\nimport fasttext\nimport fasttext.util\nimport requests\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.nn.functional import cross_entropy\n\nfrom poutyne import set_seeds\nfrom poutyne.framework import Experiment\n\n\nNow, let’s create a single (i.e. one layer) unidirectional LSTM with input_size and hidden_size of 300. We will explore later on the effect of stacking more layers and using a bidirectional approach.\n\nSee here why we use the batch_first argument.\n\ndimension = 300\nnum_layer = 1\nbidirectional = False\n\nlstm_network = nn.LSTM(input_size=dimension,\nhidden_size=dimension,\nnum_layers=num_layer,\nbidirectional=bidirectional,\nbatch_first=True)\n\n\n## Fully-connected Layer\n\nSince the output of the LSTM network is of dimension 300, we will use a fully-connected layer to map it into a space of equal dimension to that of the tag space (i.e. number of tags to predict), that is 8. Finally, since we want to predict the most probable tokens, we will apply the softmax function on this layer (see here if softmax does not ring a bell).\n\ninput_dim = dimension #the output of the LSTM\ntag_dimension = 8\n\nfully_connected_network = nn.Linear(input_dim, tag_dimension)\n\n\n## Training Constants\n\nNow, let’s set our training constants. We first specify a CUDA (GPU) device for training (using a CPU takes way too long, if you don’t have one, you can use the Google Colab notebook).\n\nSecond, we set the batch size (i.e. the number of elements to see before updating the model), the learning rate for the optimizer and the number of epochs.\n\ndevice = torch.device(\"cuda:0\")\n\nbatch_size = 128\nlr = 0.1\n\nepoch_number = 10\n\n\nWe also need to set Pythons’s, NumPy’s and PyTorch’s random seeds using the Poutyne function to make our training (almost) completely reproducible.\n\nSee here for an explanation on why setting seed does not guarantee complete reproducibility.\n\nset_seeds(42)\n\n\n## The Dataset\n\nThe dataset consists of 1,010,987 complete French and English Canadian addresses and their associated tags. Here’s an example address\n\n\"420 rue des Lilas Ouest, Québec, G1V 2V3\"\n\nand its corresponding tags\n\n[StreetNumber, StreetName, StreetName, StreetName, Orientation, Municipality, PostalCode, PostalCode].\n\nNow let’s download our dataset. For simplicity, a 100,000 addresses test set is kept aside, with 80% of the remaining addresses used for training and 20 % used as a validation set. Also note that the dataset was pickled for simplicity (using a Python list). Here is the code to download it.\n\ndef download_data(saving_dir, data_type):\n\"\"\"\nFunction to download the dataset using data_type to specify if we want the train, valid or test.\n\"\"\"\n\nroot_url = \"https://dot-layer.github.io/blog-external-assets/train_rnn/{}.p\"\n\nurl = root_url.format(data_type)\nr = requests.get(url)\nos.makedirs(saving_dir, exist_ok=True)\n\nopen(os.path.join(saving_dir, f\"{data_type}.p\"), 'wb').write(r.content)\n\n\n\nNow let’s load the data in memory.\n\n# load the data\ntrain_data = pickle.load(open(\"./data/train.p\", \"rb\")) # 728,789 examples\nvalid_data = pickle.load(open(\"./data/valid.p\", \"rb\")) # 182,198 examples\ntest_data = pickle.load(open(\"./data/test.p\", \"rb\")) # 100,000 examples\n\n\nAs explained before, the (train) dataset is a list of 728,789 tuples where the first element is the full address, and the second is a list of tags (the ground truth).\n\ntrain_data[:2] # The first two train items\n\n\n(the output)", null, "### Vectorize the Dataset\n\nSince we used word embeddings as the encoded representations of the words in the addresses, we need to convert the addresses into the corresponding word vectors. In order to do that, we will use a vectorizer (i.e. the process of converting words into vectors). This embedding vectorizer will extract, for each word, the embedding value based on the pre-trained French fastText model. We use French embeddings because French is the language in which most of the adresses in our dataset are written.\n\nIf you are curious about another solution, the Google Colab Jupyter notebook uses Magnitude.\n\n# We use this class so that the download templating of the fastText\n# script be not buggy as hell in notebooks.\nclass LookForProgress(TextIOBase):\ndef __init__(self, stdout):\nself.stdout = stdout\nself.regex = re.compile(r'([0-9]+(\\.[0-9]+)?%)', re.IGNORECASE)\n\ndef write(self, o):\nres = self.regex.findall(o)\nif len(res) != 0:\nprint(f\"\\r{res[-1]}\", end='', file=self.stdout)\n\nclass EmbeddingVectorizer:\ndef __init__(self):\n\"\"\"\nEmbedding vectorizer\n\"\"\"\n\n\"\"\"\n:return: The embeddings vectors\n\"\"\"\nembeddings = []\nembeddings.append(self.embedding_model[word])\nreturn embeddings\n\nembedding_vectorizer = EmbeddingVectorizer()\n\n\nWe also need to apply a similar operation to the address tags (e.g. StreetNumber, StreetName). This time, however, the vectorizer needs to convert the tags into categorical values (e.g. StreetNumber -> 0). For simplicity, we will use a DatasetBucket class that will apply the vectorizing process using both the embedding and the address vectorization process that we’ve just described during training.\n\nclass DatasetBucket:\ndef __init__(self, data, embedding_vectorizer):\nself.data = data\nself.embedding_vectorizer = embedding_vectorizer\nself.tags_set = {\n\"StreetNumber\": 0,\n\"StreetName\": 1,\n\"Unit\": 2,\n\"Municipality\": 3,\n\"Province\": 4,\n\"PostalCode\": 5,\n\"Orientation\": 6,\n\"GeneralDelivery\": 7\n}\n\ndef __len__(self):\nreturn len(self.data)\n\ndef __getitem__(self, item): # We vectorize when data is asked\ndata = self.data[item]\nreturn self._item_vectorizing(data)\n\ndef _item_vectorizing(self, item):\n\ntags = item\nidx_tags = self._convert_tags_to_idx(tags)\n\ndef _convert_tags_to_idx(self, tags):\nidx_tags = []\nfor tag in tags:\nidx_tags.append(self.tags_set[tag])\nreturn idx_tags\n\ntrain_dataset_vectorizer = DatasetBucket(train_data, embedding_vectorizer)\nvalid_dataset_vectorizer = DatasetBucket(valid_data, embedding_vectorizer)\ntest_dataset_vectorizer = DatasetBucket(test_data, embedding_vectorizer)\n\n\nHere is a example of the vectorizing process.\n\naddress, tag = train_dataset_vectorizer # Unpack the first tuple\n\nprint(f\"Tag is now a list of integers : {tag}\")\n\n\nWe use a first trick, padding.\n\nNow, because the addresses are not all of the same size, it is impossible to batch them together; recall that all tensor elements must have the same lengths. But there is a trick: padding!\n\nThe idea is simple; we add empty tokens at the end of each sequence until they reach the length of the longest one in the batch. For example, if we have three sequences of length ${1, 3, 5}$, padding will add 4 and 2 empty tokens respectively to the first two.\n\nFor the word vectors, we add vectors of 0 as padding. For the tag indices, we pad with -100’s. We do so because the cross-entropy loss and the accuracy metric both ignore targets with values of -100.\n\nTo do the padding, we use the collate_fn argument of the PyTorch DataLoader, and on running time, the process will be done by the DataLoader. One thing to keep in mind when treating padded sequences is that their original length will be required to unpad them later on in the forward pass. That way, we can pad and pack the sequence to minimize the training time (read this good explanation on why we pack sequences).\n\ndef pad_collate_fn(batch):\n\"\"\"\nThe collate_fn that can add padding to the sequences so all can have\nthe same length as the longest one.\n\nArgs:\nbatch (List[List, List]): The batch data, where the first element\nof the tuple is the word idx and the second element are the target\nlabel.\n\nReturns:\nA tuple (x, y). The element x is a tuple containing (1) a tensor of padded\nword vectors and (2) their respective original sequence lengths. The element\ny is a tensor of padded tag indices. The word vectors are padded with vectors\nof 0s and the tag indices are padded with -100s. Padding with -100 is done\nbecause of the cross-entropy loss and the accuracy metric ignores\nthe targets with values -100.\n\"\"\"\n\n# This gets us two lists of tensors and a list of integer.\n# Each tensor in the first list is a sequence of word vectors.\n# Each tensor in the second list is a sequence of tag indices.\n# The list of integer consist of the lengths of the sequences in order.\nsequences_vectors, sequences_labels, lengths = zip(*[\n(torch.FloatTensor(seq_vectors), torch.LongTensor(labels), len(seq_vectors))\nfor (seq_vectors, labels) in sorted(batch, key=lambda x: len(x), reverse=True)\n])\n\nlengths = torch.LongTensor(lengths)\n\n\ntrain_loader = DataLoader(train_dataset_vectorizer, batch_size=batch_size, shuffle=True, collate_fn=pad_collate_fn, num_workers=4)\n\n\n## Full Network\n\nWe use a second trick, packing.\n\nSince our sequences are of variable lengths and that we want to be as efficient as possible when packing them, we cannot use the PyTorch nn.Sequential class to define our model. Instead, we define the forward pass so that it uses packed sequences (again, you can read this good explanation on why we pack sequences).\n\nclass RecurrentNet(nn.Module):\ndef __init__(self, lstm_network, fully_connected_network):\nsuper().__init__()\nself.hidden_state = None\n\nself.lstm_network = lstm_network\nself.fully_connected_network = fully_connected_network\n\n\"\"\"\nDefines the computation performed at every call.\n\nShapes:\nlengths: batch_size\n\n\"\"\"\n\nlstm_out, _ = pad_packed_sequence(lstm_out, batch_first=True, total_length=total_length)\n\ntag_space = self.fully_connected_network(lstm_out) # shape: batch_size * longest_sequence_length, 8 (tag space)\nreturn tag_space.transpose(-1, 1) # we need to transpose since it's a sequence # shape: batch_size * 8, longest_sequence_length\n\nfull_network = RecurrentNet(lstm_network, fully_connected_network)\n\n\n## Summary\n\nWe have created an LSTM network (lstm_network) and a fully connected network (fully_connected_network), and we use both components in the full network. The full network makes use of padded-packed sequences, so we created the pad_collate_fn function to do the necessary work within the DataLoader. Finally, we will load the data using the vectorizer (within the DataLoader using the pad_collate function). This means that the addresses will be represented by word embeddings. Also, the address components will be converted into categorical value (from 0 to 7).\n\n## The Training\n\nNow that we have all the components for the network, let’s define our optimizer (Stochastic Gradient Descent) (SGD).\n\noptimizer = optim.SGD(full_network.parameters(), lr)\n\n\n### Poutyne Experiment\n\nDisclaimer: David is a developer on the Poutyne library, so we will present code using this framework. See the project here.\n\nLet’s create our experiment using Poutyne for automated logging in the project root directory (./). We will also set the loss function and a batch metric (accuracy) to monitor the training. The accuracy is computed on the word-tag level, meaning that every correct tag prediction is a good prediction. For example, the accuracy of the prediction StreetNumber, StreetName with the ground truth StreetNumber, StreetName is 1 and the accuracy of the prediction StreetNumber, StreetNumber with the ground truth StreetNumber, StreetName is 0.5.\n\nexp = Experiment(\"./\", full_network, device=device, optimizer=optimizer,\nloss_function=cross_entropy, batch_metrics=[\"acc\"])\n\n\nUsing our experiment, we can now launch the training as simply as\n\nexp.train(train_loader, valid_generator=valid_loader, epochs=epoch_number)\n\n\nIt will take around 40 minutes per epochs, so a couple hours for the complete training.\n\n### Results\n\nThe next figure shows the loss and the accuracy during our training (blue) and during our validation (orange) steps. After 10 epochs, we obtain a validation loss and accuracy of 0.01981 and 99.54701 respectively, satisfying values for a first model. Also, since our training accuracy and loss closely match their respective validation values, our model does not appear to be overfitted on the training set.", null, "## Bigger model\n\nIt seems that our model performed pretty well, but just for fun, let’s unleash the full potential of LSTMs using a bidirectional approach (bidirectional LSTM). What it means is that instead of simply viewing the sequence from the start to the end, we also train the model to see the sequence from the end to the start. It’s important to state that the two directions are not shared, meaning that the model sees the sequence in one direction at the time, but gathers the information from both directions into the fully connected layer. That way, our model can get insight from both directions.\n\nInstead of using only one layer, let’s use a bidirectional bi-LSTM, which means that we use two layers of hidden state for each direction.\n\nSo, let’s create the new LSTM and fully connected network.\n\ndimension = 300\nnum_layer = 2\nbidirectional = True\n\nlstm_network = nn.LSTM(input_size=dimension,\nhidden_size=dimension,\nnum_layers=num_layer,\nbidirectional=bidirectional,\nbatch_first=True)\n\ninput_dim = dimension * 2 #since bidirectional\n\nfully_connected_network = nn.Linear(input_dim, tag_dimension)\n\nfull_network_bi_lstm = RecurrentNet(lstm_network, fully_connected_network)\n\n\n### Training\n\nexp_bi_lstm = Experiment(\"./\", full_network_bi_lstm, device=device, optimizer=optimizer,\nloss_function=cross_entropy, batch_metrics=[\"acc\"])\n\n\n### Results\n\nHere are our validation results for the last epoch of the larger model. On the validation dataset, we can see that we obtain a marginal gain of around 0.3% for the accuracy over our previous simpler model. This is only a slight improvement.\n\nModel Bidirectional bi-LSTM\nLoss 0.0050\nAccuracy 99.8594\n\nBut now that we have our two trained models, let’s use the test set as a final and unique step for evaluating their performance.\n\nexp.test(test_loader)\n\n\nThe next table presents the results of the bidirectional bi-LSTM with two layers and the previous model (LSTM with one layer).\n\nModel LSTM one layer Bidirectional bi-LSTM\nLoss 0.0152 0.0050\nAccuracy 99.5758 99.8550\n\nWe see similar validation results for both models. Also, we still see a little improvement in accuracy and total loss for the larger model. Considering that we only improved by around 0.3%, one can argue that the difference is only due to training variance (mostly due to our random sampling of training batches). To test the robustness of our approach, we could train our model multiple times using different random seeds and report the mean and standard deviation of each metric over all experiments rather than the result of a single training. Let’s try something else.\n\n### Zero Shot Evaluation\n\nSince we have at our disposition addresses from other countries, let’s see if our model has really learned a typical address sequence or if it has simply memorized all the training examples.\n\nWe will test our model on three different types of dataset\n\n• first, on addresses with the exact same structure as in our training dataset: addresses from the United States of America (US) and the United Kingdom (UK)\n• secondly, on addresses with the exact same structure as those in our training dataset but written in a totally different language: addresses from Russia (RU)\n• finally, on addresses that exhibit a different structure and that are written in a different language: addresses from Mexico (MX).\n\nFor each test, we will use a dataset of 100,000 examples in total, and we will evaluate using the best epoch of our two models (i.e. last epoch for both of them). Also, we will use the same pre-processing steps as before (i.e. data vectorization, the same pad collate function), but we will only apply a test phase, meaning no training step.\n\ndownload_data('./data/', \"us\")\n\nus_data = pickle.load(open(\"./data/us.p\", \"rb\")) # 100,000 examples\ngb_data = pickle.load(open(\"./data/gb.p\", \"rb\")) # 100,000 examples\nru_data = pickle.load(open(\"./data/ru.p\", \"rb\")) # 100,000 examples\nmx_data = pickle.load(open(\"./data/mx.p\", \"rb\")) # 100,000 examples\n\ndataset_vectorizer.vectorize(us_data)\ndataset_vectorizer.vectorize(gb_data)\ndataset_vectorizer.vectorize(ru_data)\ndataset_vectorizer.vectorize(mx_data)\n\n##### First Test\n\nNow let’s test for the United States of America and United Kingdom.\n\nus_loader = DataLoader(us_data, batch_size=batch_size, collate_fn=pad_collate_fn)\n\n\n\nThe next table presents the results of both models for both countries. We obtain better results for the two countries using the bidirectional bi-LSTM (around 8% better). It’s interesting to see that, considering address structures are similar to those in the training dataset (Canada), we obtain near as good results as those observed during training. This suggests that our model seems to have learned to recognize the structure of an address. Also, despite the language being the same as in the training dataset (i.e. some English addresses in the bilingual canadian address dataset), we obtain poorer results. That situation is most likely due to the fact that the postal code formats are not the same. For the US, it is 5 digits, and for the UK it is similar to that of Canada, but it is not always a letter followed by a number and not always 6 characters. It is normal for a model to have difficulty when faced with new patterns. All in all, we can say that our model has achieved good results.\n\nModel (Country) LSTM one layer Bidirectional bi-LSTM\nLoss (US) 0.6176 0.3078\nAccuracy (US) 84.7396 91.8220\nLoss (UK) 0.4368 0.1571\nAccuracy (UK) 86.2543 95.6840\n##### The Second and Third Test\n\nNow let’s test for Russia and Mexico.\n\nBut first, let’s discuss how our French embeddings can generate word vectors for vocabulary in a different language. FastText uses subword embeddings when complete embeddings do not exist. For example, we can assume the presence of a word embedding vector for the word Roi, but we face an out-of-vocabulary (OOV) for the word H1A1 since this word is not a real word. The trick with fastText is that it creates composite embeddings using the subword with fixed window size (length of the subword) when facing OOV words. For example, a two characters window embeddings of H1A1 would be the aggregated embeddings of the subword H1, 1A and A1.\n\nru_loader = DataLoader(ru_data, batch_size=batch_size, collate_fn=pad_collate_fn)\n\n\n\nThe next table presents the results of both models for the two countries tested. We see that the first test (RU) gives poorer results than those for Mexican addresses, even if these latter are written in a different structure and language. This situation could be explained by both languages’ roots; Spanish is closer to French than Russian is. An interesting thing is that even in a difficult annotation context, both models perform relatively well. It suggests that our models have really learned the logic of an address sequence. It could also mean that, if we train our model longer, we could potentially improve our results. Other modifications that could improve our models are discussed in the next and final section.\n\nModel (Country) LSTM one layer Bidirectional bi-LSTM\nLoss (RU) 2.5181 4.6118\nAccuracy (RU) 48.9820 47.3185\nLoss (MX) 2.6786 1.7147\nAccuracy (MX) 50.2013 63.5317\n\n### Summary\n\nIn summary, we found that using a bidirectional bi-LSTM seems to perform better on addresses not seen during training, including those coming from other countries. Still, the results for addresses from other countries are not as good as those for Canadian addresses (training dataset). A solution to this problem could be to train a model using all the data from all over the world. This approach was used by Libpostal, which trained a CRF over an impressive near 100 million addresses (yes, 100 million). If you want to explore this avenue, the data they used is publicly available here.\n\nWe also explored the idea that the language disparity has a negative impact on the results, since we use monolingual word embeddings (i.e. French), which is normal considering that they were trained for a specific language.\n\nAlert of self-promotion of our work here. We’ve personally explored this avenue in an article using subword embedding for address parsing and we’ve release our trained models here.\n\nThat being said, our model still performed well on the Canadian dataset, and one can simply train simpler LSTM model using country data to obtain the best results possible with a model as simple as possible." ]
[ null, "https://www.dotlayer.org/en/training-rnn-using-pytorch/address_parsing.png", null, "https://www.dotlayer.org/en/training-rnn-using-pytorch/data_snapshot.png", null, "https://www.dotlayer.org/en/training-rnn-using-pytorch/graph/training_graph.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84367895,"math_prob":0.9145915,"size":26872,"snap":"2021-43-2021-49","text_gpt3_token_len":6097,"char_repetition_ratio":0.1340256,"word_repetition_ratio":0.01956242,"special_character_ratio":0.2295326,"punctuation_ratio":0.14406084,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9798567,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-26T22:52:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8a2b4a95-f120-4d7d-8373-16769d303fdb>\",\"Content-Length\":\"60367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ccaa6f7-4252-4cc5-a860-ee53b30a3475>\",\"WARC-Concurrent-To\":\"<urn:uuid:c47ec2bb-754a-4203-b5f7-a51a1bd9ed67>\",\"WARC-IP-Address\":\"157.245.84.7\",\"WARC-Target-URI\":\"https://www.dotlayer.org/en/training-rnn-using-pytorch/\",\"WARC-Payload-Digest\":\"sha1:Q4X3QPHNC6ACMFORG7VC4MI647BYFMHO\",\"WARC-Block-Digest\":\"sha1:S4HL6M5VRJODG7FWQY5QP4DKEEEZBC7C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358074.14_warc_CC-MAIN-20211126224056-20211127014056-00631.warc.gz\"}"}
https://stackoverflow.com/questions/5093090/whats-the-syntax-for-declaring-an-array-of-function-pointers-without-using-a-se
[ "# What's the syntax for declaring an array of function pointers without using a separate typedef?\n\nArrays of function pointers can be created like so:\n\n``````typedef void(*FunctionPointer)();\nFunctionPointer functionPointers[] = {/* Stuff here */};\n``````\n\nWhat is the syntax for creating a function pointer array without using the `typedef`?\n\n• Interesting question, but in \"real\" code you should just follow the golden rule of function pointers: use `typedef` otherwise no one will be able to understand your code. `:)` – Matteo Italia Feb 23 '11 at 15:43\n\n``````arr //arr\narr [] //is an array (so index it)\n* arr [] //of pointers (so dereference them)\n(* arr [])() //to functions taking nothing (so call them with ())\nvoid (* arr [])() //returning void\n``````\n\n``````void (* arr [])() = {};\n``````\n\nBut naturally, this is a bad practice, just use `typedefs` :)\n\nExtra: Wonder how to declare an array of 3 pointers to functions taking int and returning a pointer to an array of 4 pointers to functions taking double and returning char? (how cool is that, huh? :))\n\n``````arr //arr\narr //is an array of 3 (index it)\n* arr //pointers\n(* arr )(int) //to functions taking int (call it) and\n*(* arr )(int) //returning a pointer (dereference it)\n(*(* arr )(int)) //to an array of 4\n*(*(* arr )(int)) //pointers\n(*(*(* arr )(int)))(double) //to functions taking double and\nchar (*(*(* arr )(int)))(double) //returning char\n``````\n\n:))\n\n• Oh God my eyes. @_@ – Maxpm Feb 23 '11 at 21:07\n• Thank you Matteo; cdecl.org is an awesome site! – player_03 Nov 21 '12 at 4:56\n• Awesome explanation of the approach! :) – Narek Jul 16 '14 at 3:09\n\nRemember \"delcaration mimics use\". So to use said array you'd say\n\n`````` (*FunctionPointers)();\n``````\n\nCorrect? Therefore to declare it, you use the same:\n\n`````` void (*FunctionPointers[])() = { ... };\n``````\n\nUse this:\n\n``````void (*FunctionPointers[])() = { };\n``````\n\nWorks like everything else, you place [] after the name." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6753993,"math_prob":0.9579344,"size":1724,"snap":"2020-34-2020-40","text_gpt3_token_len":452,"char_repetition_ratio":0.18255813,"word_repetition_ratio":0.20788531,"special_character_ratio":0.32482597,"punctuation_ratio":0.11564626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96908134,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T03:48:43Z\",\"WARC-Record-ID\":\"<urn:uuid:c38697a0-10d4-44f7-bd3c-fad7fdc463b6>\",\"Content-Length\":\"167886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7abfeba1-df8a-4564-882a-d296ff8d6e8e>\",\"WARC-Concurrent-To\":\"<urn:uuid:40db541a-226c-485d-8d8e-524b6e64544e>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/5093090/whats-the-syntax-for-declaring-an-array-of-function-pointers-without-using-a-se\",\"WARC-Payload-Digest\":\"sha1:WHEU2TG6R2ZHUHG7KEMX7PT3NVIPR4HO\",\"WARC-Block-Digest\":\"sha1:TFDCM6DRHXPCSDOVPQSTCM4YDWAJH6EI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738727.76_warc_CC-MAIN-20200811025355-20200811055355-00449.warc.gz\"}"}
http://www.worldebooklibrary.org/articles/eng/Arithmetica
[ "", null, "#jsDisabledContent { display:none; } My Account | Register | Help", null, "Flag as Inappropriate", null, "This article will be permanently flagged as inappropriate and made unaccessible to everyone. Are you certain this article is inappropriate?          Excessive Violence          Sexual Content          Political / Social Email this Article Email Address:\n\n# Arithmetica\n\nArticle Id: WHEBN0001777555\nReproduction Date:\n\n Title: Arithmetica", null, "Author: World Heritage Encyclopedia Language: English Subject: Collection: Publisher: World Heritage Encyclopedia Publication Date:\n\n### Arithmetica\n\nArithmetica (Greek: Ἀριθμητικά) is an Ancient Greek text on mathematics written by the mathematician Diophantus in the 3rd century AD. It is a collection of 130 algebraic problems giving numerical solutions of determinate equations (those with a unique solution) and indeterminate equations.\n\nEquations in the book are called Diophantine equations. The method for solving these equations is known as Diophantine analysis. Most of the Arithmetica problems lead to quadratic equations. It was these equations which inspired Pierre de Fermat to propose Fermat's Last Theorem, scrawled in the margins of Fermat's copy of Arithmetica, which states that the equation x^n+y^n=z^n, where x, y, z and n are non-zero integers, has no solution with n greater than 2.\n\nIn Book 3, Diophantus solves problems of finding values which make two linear expressions simultaneously into squares or cubes. In book 4, he finds rational powers between given numbers. He also noticed that numbers of the form 4n + 3 cannot be the sum of two squares. Diophantus also appears to know that every number can be written as the sum of four squares. If he did know this result (in the sense of having proved it as opposed to merely conjectured it), his doing so would be truly remarkable: even Fermat, who stated the result, failed to provide a proof of it and it was not settled until Joseph Louis Lagrange proved it using results due to Leonhard Euler.\n\nArithmetica was originally written in thirteen books, but the Greek manuscripts that survived to the present contain no more than six books. In 1968, Fuat Sezgin found four previously unknown books of Arithmetica at the shrine of Imam Rezā in the holy Islamic city of Mashhad in northeastern Iran. The four books are thought to have been translated from Greek to Arabic by Qusta ibn Luqa (820–912). Norbert Schappacher has written:\n\n[The four missing books] resurfaced around 1971 in the Astan Quds Library in Meshed (Iran) in a copy from 1198 AD. It was not catalogued under the name of Diophantus (but under that of Qust¸a ibn Luqa) because the librarian was apparently not able to read the main line of the cover page where Diophantus’s name appears in geometric Kufi calligraphy.\n\nArithmetica became known to mathematicians in the Islamic world in the tenth century when Abu'l-Wefa translated it into Arabic." ]
[ null, "http://read.images.worldlibrary.org/App_Themes/wel-mem/images/logo.jpg", null, "http://read.images.worldlibrary.org/images/SmallBook.gif", null, "http://www.worldebooklibrary.org/images/delete.jpg", null, "http://www.worldebooklibrary.org/App_Themes/default/images/icon_new_window.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89563084,"math_prob":0.6513298,"size":4275,"snap":"2019-43-2019-47","text_gpt3_token_len":1032,"char_repetition_ratio":0.09763522,"word_repetition_ratio":0.0060790274,"special_character_ratio":0.2194152,"punctuation_ratio":0.13762626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9694051,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-16T20:54:43Z\",\"WARC-Record-ID\":\"<urn:uuid:40f85f8d-a68d-4bdf-8162-6acb74f71b01>\",\"Content-Length\":\"95197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ea24632-f4d4-48a6-a2f1-e4e64952fb1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:345ea641-198a-499f-9a44-338d7e646f9e>\",\"WARC-IP-Address\":\"66.27.42.21\",\"WARC-Target-URI\":\"http://www.worldebooklibrary.org/articles/eng/Arithmetica\",\"WARC-Payload-Digest\":\"sha1:XTVFJWLAZ2SCZAIBSI7UKSUX2FNEO4UF\",\"WARC-Block-Digest\":\"sha1:QZU25EN3G5LOAO2BNAE7PPZZWLHQS64T\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668765.35_warc_CC-MAIN-20191116204950-20191116232950-00390.warc.gz\"}"}
https://dfrieds.com/python/dictionaries
[ "Python Beginner Concepts Tutorial\n\n# Dictionaries Basics\n\n### Dictionary Definition\n\nA dictionary is a Python data structure. It contains a collection of indices called keys and a collection of values associated with each key.\n\nIn other programming languages, you may hear of a similar data structure to a Python dictionary called a hashtable. The hashtable algorithm is used in the Python source code to implement a dictionary object.\n\nThe association of a key and a value is called a key-value pair or an item.\n\nA simple dictionary example is a mapping of English words as keys to Spanish words as values.\n\nIn a dictionary, each key is unique.\n\n```english_to_spanish = {'one': 'uno'}\n```\n\nIn English, `one` is equalivalent to `uno` in Spanish.\n\nWe enclose a dictionary in curly braces.\n\n`one` is a key and `uno` is its value.\n\n#### Add a key-value pair to a dictionary\n\nThe code below adds a key of `two` with a corresponding value of `dos`.\n\n```english_to_spanish['two'] = 'dose'\n```\n```english_to_spanish\n```\n```{'one': 'uno', 'two': 'dose'}\n```\n\n#### Modify an existing value in a dictionary\n\nAbove, we spelled the Spanish equivalent of the number two incorrectly. We wrote dose when it is spelled dos.\n\nDictionaries are mutable so we can change this key's value in our structure.\n\n```english_to_spanish['two'] = 'dos'\n```\n```english_to_spanish\n```\n```{'one': 'uno', 'two': 'dos'}\n```\n\n#### Lookup a key's corresponding value\n\nDictionaries allow for easy lookup of values.\n\n```english_to_spanish['one']\n```\n```'uno'\n```\n\n#### Check if a key appears in a dictionary\n\nThe `in` operator can be used to see if a key exists in a dictionary. If it does, we return `True`, otherwise `False`.\n\n```'two' in english_to_spanish\n```\n```True\n```\n\n#### Counts for each unique item\n\nA common use case of dictionary is to count the occurences of unique elements in a list or similar data structure. The advantage of using a dictionary here is that we don't have to know ahead of time which letters appear in our list. Also, with a dictionary, we only make keys for letters that do appear.\n\nBelow is a list of grades. We want to know the count of each of the grades.\n\nSince each grade letter is unique, each can be a key in a dictionary with its value being the count of occurences.\n\n```letter_grades = [\"B+\", \"A\", \"B+\", \"A\", \"A+\", \"A-\", \"A\"]\n```\n```count_of_each_grade = {}\n\nfor letter in letter_grades:\nif letter in count_of_each_grade:\ncount_of_each_grade[letter] += 1\nelse:\ncount_of_each_grade[letter] = 1\n```\n```count_of_each_grade\n```\n```{'A': 3, 'A+': 1, 'A-': 1, 'B+': 2}\n```\n\nThe dictionary `count_of_each_grade` tells us there were 3 A grades, 1 A+ and so forth.\n\n#### Data types allowed for keys and values\n\nIn a Python dictionary, the keys must be immutable meaning they can't be changed in place. Therefore, keys can be a data type such as a:\n\n• tuple (must contain only mutable objects)\n• int\n• float\n• string\n\nA key cannot be a list since lists are mutable and can be modified in place with index assignments or using methods like `append` or `extend`.\n\nValues in a dictionary do not have to be unique. The values of a dictionary can be of any data type.\n\nHere's an example of an acceptable dictionary with various data types as keys and values.\n\n```a = {(3,2): 1, 9.5: 3.14, 3: (3.1), 'hi': 'yo'}\n```\n\n#### A more complex example\n\nA common use case of dictionaries in software applications is to store personally identifiable information of users.\n\nIn a software application, there could be two users with the same first and last name. Therefore, each user is tyipcally assigned a unique id as an integer.\n\nIf we lookup a user's id, we may find information about them as seen below.\n\n```users = {10: {'first_name': 'Dan', 'last_name': 'Friedman', 'email': '[email protected]'},\n11: {'first_name': 'May', 'last_name': 'Parker', 'email': '[email protected]'}}\n```\n\nThis example shows how a value can also be another dictionary of key-value pairs." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82210034,"math_prob":0.72831774,"size":3736,"snap":"2019-35-2019-39","text_gpt3_token_len":936,"char_repetition_ratio":0.14013934,"word_repetition_ratio":0.0031746032,"special_character_ratio":0.25535333,"punctuation_ratio":0.13623978,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9509063,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T22:19:47Z\",\"WARC-Record-ID\":\"<urn:uuid:99d5219d-d1b4-4f37-a781-4cb41405e44d>\",\"Content-Length\":\"18841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f3bbed4-96df-48b7-bafd-ba6e672c86ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff06d0df-9489-4d13-bd1b-35d4c1e37ba0>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://dfrieds.com/python/dictionaries\",\"WARC-Payload-Digest\":\"sha1:D6I2BYI5XKCIVU6VSWG4ORDAMKUAIHLR\",\"WARC-Block-Digest\":\"sha1:MLFF2APSNYYUTTEVTYM55B5FJ4N7BW6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315132.71_warc_CC-MAIN-20190819221806-20190820003806-00246.warc.gz\"}"}
https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11.html
[ "Subscriber Authentication Point\nFree Access\n Issue A&A Volume 537, January 2012 A41 10 The Sun https://doi.org/10.1051/0004-6361/201117957 04 January 2012\n\n## 1. Introduction\n\nSudden energy releases in the solar atmosphere are known to generate large scale global waves propagating over long distances (see, e.g. Moreton & Ramsey 1960; Uchida 1970; Thompson et al. 1999; Ballai et al. 2005). The energy stored in these waves can be released by traditional dissipative mechanisms, but it could also be transferred to magnetic structures which may come in contact with global waves. This scenario is true not only for coronal structures, but applies to all magnetic entities in the solar atmosphere that can serve as waveguides (see, e.g. Wills-Davey & Thompson 1999; Patsourakos & Vourlidas 2009; Liu et al. 2010, etc.). In the corona, EIT waves generated by coronal mass ejections (CMEs) and/or flares could interact with coronal loops, resulting in the generation of kink modes, i.e. oscillations which exhibit periodic movement about the loop’s symmetry axis. Global waves can also interact with prominence fibrils as observed by, e.g. Ramsey & Smith (1966), and more recently by Eto et al. (2002), Jing et al. (2003); Okamoto et al. (2004); Isobe & Tripathi (2007); Pintér et al. (2008). Oscillations of magnetic structures were (and are currently) used as a basic ingredient in one of solar physics most dynamically expanding fields, namely coronal seismology, where observations of wave characteristics (amplitude, wavelength, propagation speed, damping time/length) are corroborated with theoretical modelling (dispersion and evolutionary equations, as well as MHD models) in order to derive quantities that cannot be directly or indirectly measured (magnetic field magnitude and sub-resolution structuring, transport coefficients, heating functions, thermal state of the plasma, stratification parameters, etc.). Considerable advances have been achieved in diagnosing the state of the field and plasma (see, e.g. Roberts et al. 1984; Nakariakov et al. 1999; Nakariakov & Ofman 2001; Ofman & Thompson 2002; Ruderman & Roberts 2002; Andries et al. 2005, 2009; Ballai et al. 2005, 2011; Gruszecki et al. 2006, 2007, 2008; Banerjee et al. 2007; Ofman 2007, 2009; McLaughlin & Ofman 2008; Verth et al. 2007; Ballai 2007; Ruderman et al. 2008; Van Doorsselaere et al. 2008; Verth & Erdélyi 2008; Verth et al. 2008; Morton & Erdélyi 2009; Ruderman & Erdélyi 2009; Andries et al. 2009; Selwa & Ofman 2009; Selwa et al. 2010). It is highly likely that higher resolution observations made possible recently by space satellites such as STEREO, Hinode, SDO (and future missions) will further help understanding the complicated reality of the solar plasma environment. Indeed since their launch, data provided by these satellites are already shedding light on numerous aspects of coronal seismology, e.g. Verwichte et al. (2009) used STEREO data to determine the three-dimensional geometry of the loop, SDO/AIA data was used by Aschwanden & Schrijver (2011) to prove the coupling of the kink mode and cross-sectional oscillations that could be explained as a consequence of the loop length variation in the vertical polarization mode. Finally, based on Hinode data, Ofman & Wang (2008) provided the first evidence for transverse waves in coronal multithreaded loops with cool plasma ejected from the chromosphere flowing along the threads. On the other hand the development of even the fundamental mode is not always guaranteed, as was shown observationally by, e.g. Aschnwanden et al. (2002) and later using MHD modelling by Selwa & Ofman (2010) and Selwa et al. (2011a,b).\n\nThe dispersion relations for many simple (and some quite complicated) plasma waves under the assumptions of ideal magnetohydrodynamics (MHD) are well known; they were derived long before accurate EUV observations were available (see, e.g. Edwin & Roberts 1983; Roberts et al. 1984) using simplified models within the framework of ideal and linear MHD. Although the realistic interpretation of many observations is made difficult especially by the poor spatial resolution of present satellites not being quite sufficient, considerable amount of information about the thermodynamical and dynamical state of the plasma, and the structure and magnitude of the coronal magnetic field, can still be obtained.\n\nThe mathematical description of waves and oscillations in solar structures is, in general, given by equations whose coefficients vary in space and time. It has been recognised recently by, e.g. Andries et al. (2005) that the longitudinal stratification (i.e. along the longitudinal symmetry axis of the tube that coincides with the direction of the magnetic field) modifies the periods of oscillations of coronal loops. Accordingly, in the case of kink waves, these authors showed that the ratio P1/P2 (where P1 refers to the period of the fundamental transversal oscillation, while P2 stands for the period of the first harmonic of the same oscillation) can differ – sometimes considerably – from the value of 2, that would be recovered if the loops were homogeneous. These authors also showed that the deviation of P1/P2 from 2 is proportional to the degree of stratification (see also, e.g. McEwan et al. 2006; Van Doorsselaere et al. 2007; Ballai et al. 2011). Later, studies by, e.g. Verth et al. (2007), showed that it is not only density stratification that is able to modify the P1/P2 period ratio, but the variation of the loop’s cross section area has also an effect on the period ratio. While the density stratification tends to decrease the period ratio, a modification of the cross section (i.e. when the magnetic field is expanding as we approach the apex) tends to increase the P1/P2 value. Observationally is much easier to detect the fundamental mode of kink oscillations (having the largest amplitude and the smallest damping rate), however high resolution observations made possible the evidence of even higher harmonics (see, e.g. De Moortel & Brady 2007; Van Doorsselaere et al. 2009).\n\nIn this study we will restrict our attention only to the effect of density stratification, the effect of the magnetic field structuring is left for a later analysis. The period ratio P1/P2 is connected to the density scale-height that quantifies the variation of density along the magnetic structure. All previous studies considered that the density stratification (indirectly the scale-height) is identical inside and outside the magnetic structure. However, the scale-height is directly linked to the temperature (via the sound speed) and an equal scale-height would mean an equal temperature, clearly not applicable for, e.g. coronal loops and/or prominence fibrils.\n\nThe aim of this paper is to investigate, using a simple mathematical approach, the effect of the environment on the period ratio P1/P2 and the consequences of the inclusion of a distinct environment on estimations of the degree of density stratification. The paper is structured in the following way: in Sect. 2 we introduce the mathematical formalism and obtain analytical results for typical coronal and prominence conditions. Later, in Sect. 3 we use our findings to draw conclusions on the implications on coronal seismology and we present a method that could help diagnosing not only the degree of stratification in the loop, but also the temperature ratio of the plasma inside and outside the loop. Finally, our results are summarised in the last section.\n\n## 2. The mathematical formulation of the problem and analytical results\n\nEUV observations made by the recent high resolution space satellites (SOHO, TRACE, STEREO, Hinode, SDO) showed that, after all, coronal loops are enhancements of plasma (tracing the magnetic field in virtue of the frozen-in theorem) and the density of a typical loop can be as much as 10 times larger than the density of the environment. The heating of these coronal structures – according to the accepted theories (see, e.g. Klimchuk 2006; Erdélyi & Ballai 2007 and references therein)- occurs at the footpoints, while thermal conduction, flows, waves, instabilities and turbulences will help the heat to propagate along the full length of the loop. It is also obvious (as see in X-ray by, e.g. Hinode/XRT) that the temperature of the loop exceeds the temperature of the environment. A typical length of a coronal loop is 20–200 Mm, which means that the density inside the loop (seen in EUV) can vary by an order of magnitude, leading to the necessity of studying the effect of density stratification on the oscillations of coronal loops.\n\nAs pointed out for the first time by Dymova & Ruderman (2005, 2006), the propagation of kink waves in a straight tube in the thin tube approximation can be described by", null, "(1)where vr denotes the radial (transversal) component of the velocity vector and the quantity cK is the propagation speed of kink waves (often called the density weighted Alfvén wave) defined as (see e.g. Edwin & Roberts 1983)", null, "(2)where ρi and ρe denote the densities inside and outside the coronal loop, and vAi and vAe represent the propagation speeds of the internal and external Alfvén waves, respectively and the dynamics is treated in the cold plasma approximation. For identical magnetic field inside and outside the tube, the fact that ρi > ρe means that vAe > vAi. If we lift the thin flux tube restriction, then Eq. (1) must be complemented by terms that would describe dispersion. It is interesting to note that a similar equation was found recently by Murawski & Musielak (2010) describing Alfvén waves. As we specified, our approach is using the cold plasma approximation in which the dynamics of kink oscillations in a coronal loop is described by Eq. (1). For the sake of completeness we need to mention that the cold plasma approximations is not always true, especially in hot coronal loops observed by the SUMER instrument (e.g. Wang et al. 2003). It is very likely that the dynamics of kink oscillation will be described by a similar equations as Eq. (1) but with an extra term resulting from considering the effect of pressure perturbation. Furthermore the consideration of higher values of plasma-β will affect the values of eigenfrequencies as was found by McLaughlin & Ofman (2004), and Ofman (2010, 2011a).\n\nAssuming that all temporal changes occur with the same frequency, ω, we can consider that the temporal dependence of variables (including vr) has the form exp(iωt), which means that the PDE given by Eq. (1) transforms into", null, "(3)In reality the kink speed does not depend only on the longitudinal coordinate, z, but on all 3 coordinates. It is known that the dependence on the transversal coordinate, r, leads to the phenomena requiring short transversal length scales (resonant absorption, phase mixing, turbulence, wave leakage), used to explain the rapid damping of kink oscillations (see, e.g. Ofman & Aschwanden 2002; Ruderman & Roberts 2002; Ruderman 2008, etc). Equation (3) implies that the eigenfunctions, vr, are driven by particular forms of cK(z), through the particular profile of the quantities that make up the kink speed (density, magnetic field). Inspired from the eigenvalue problem of Rayleigh-Ritz procedure, McEwan et al. (2008) used a variational principle that allows the calculation of eigenvalues, ω, – a method that is employed by our analysis. Let us multiply the above equation by vr and integrate from the apex to the footpoint of the loop as", null, "(4)Using integration by parts in the first integral (taking into account that for the fundamental mode vr(L) = dvr(0)/dz = 0 and for the first harmonic vr(0) = vr(L) = 0), the above equation simplifies to", null, "(5)which results into the equation derived earlier by McEwan et al. (2008)", null, "(6)where", null, "In order to express the eigenvalue of such problem, we consider some trial functions for vr that satisfy the boundary conditions imposed at the footpoints and the apex of the loop. Since we are interested only in the characteristics of fundamental mode of kink oscillations and its first harmonic, we will assume that vr(z) will be proportional to cos(πz/2L) for the fundamental mode and sin(πz/L) for the first harmonic. It is obvious that these choices for eigenfunctions correspond to the homogeneous plasma, however – as we show in the Appendix – the corrections to the eigenfunction due to density stratification are rather small.\n\nThe problem of how the kink speed depends on the longitudinal coordinate, z, is a rather delicate problem and only simplified cases can be solved analytically. For simplicity, let us consider that the magnetic field inside and outside of the coronal loop are identical and homogeneous, while the density varies exponentially according to", null, "(7)where ρi(0) and ρe(0) are the densities inside and outside the loop at z = 0, i.e. at the the loop apex and Hi and He are the density scale-heights inside and outside the loop. Obviously the choice of density reflects a simplified description of the coronal loop model where plasma is isothermal and other further effects are neglected, however, this density profile allows us to obtain analytical results. A realistic description would require taking into account that the plasma is not isothermal (inside and outside the loop), the loop is curved and the density can depend on other coordinates, as well. This form of density dependence on the z coordinate was earlier used by, e.g. Verth et al. (2007); McEwan et al. (2008); Morton & Erdélyi (2009); Morton & Ruderman (2011); Morton et al. (2011). With our chosen density profiles, the kink speed given by Eq. (2) becomes", null, "(8)where B0 is the magnitude of the magnetic field, vAi(0) is the Alfvén speed at the apex of the loop, and ξ is the density ratio, i.e. ρi(0)/ρe(0). Since the density outside the coronal loop is smaller than inside, we will consider that ξ ≥ 1. The quantities Hi and He are the density scale-heights and they are proportional to the temperature of the plasma. Here we denoted χ = He/Hi. Since the temperature of the loop is higher than its environment, we will take χ ≤ 1, so that the value χ = 1 corresponds to an identical density variation with height inside and outside the loop and identical temperatures, χ → ∞ resulting in a constant density in the environment of the loop, while the limit χ → 0 represents a case when the plasma inside the loop is homogeneous.\n\nUsing the particular form of vr for the fundamental mode and its first harmonic, we obtain that in the case of the fundamental mode", null, "(9)where we introduced the dimensionless variable y = L/πHi, Iν(x) is the modified Bessel function of order ν, Lν(x) is the modified\n\nStruve function of order ν, and the index f stands for the fundamental mode. For the first harmonic we obtain that", null, "(10)where the superscript 1 in the expressions of", null, "and", null, "stands for the first harmonic. Now using Eq. (6) for both modes we obtain that", null, "(11)Inspecting the above relations we can see that the period ratio P1/P2 does not depend on Alfvén speed or loop length (they cancel out when calculating Eq. (11)). For coronal conditions we plot the period ratio given by Eq. (11) for ξ = 2 with the variable y varying between 0 and 10, although the larger values of y are rather unrealistic since y = 10 would correspond to a scale-height of 30 times shorter than the loop length (the scale-height corresponding to a typical temperature of 1 MK is 47 Mm). Another variable in our problem is the ratio of scale-heights (i.e. temperatures), so χ will be varied in the interval 0 to 1.\n\nThe dependence of the P1/P2 period ratio on χ and the ratio L/πHi for coronal conditions (ξ = 2) is shown in Fig. 1 with the case discussed earlier by, e.g. Andries et al. (2005) corresponding to the value χ = 1. In addition to the ratio L/πHi our model prescribes a possible diagnostic of the temperature difference between the loop and its environment.", null, "Fig. 1The variation of the P1/P2 period ratio with the temperature parameter, χ, and the ratio L/πHi for the case of a typical coronal loop (the density ratio, ξ, is 2).\n\nThe importance of changes when the different temperature of the environment is taken into account can be shown in a relative percentage plot shown in Fig. 2.", null, "Fig. 2The relative variation of the P1/P2 period ratio with the temperature parameter, χ, and the ratio L/H for the case of a typical coronal loop (the density ratio, ξ, is taken to be 2).\n\nThe relative change was calculated as the percentage change of the results of our investigation compared to the case when χ = 1. As we can see, the changes in the domain corresponding to values of χ close to 1 are not significant. However, as the temperature of the environment becomes lower than the temperature inside the loop, this difference shows changes of the order of 10–20% for values of χ of up to 0.5, while for the cases with χ near zero, the difference can be even 40% (for χ = 0.2 and L/πHi = 2). Since the relative change is negative, it means that for the same value of P1/P2 calculated assuming the same temperature the ratio, L/πHi is overestimated. A change of 25% in the period ratio occurring at approximative values of L/πHi = 0.8 and χ = 0.65 would mean that for environment temperature that is 35% less than the loop temperature, the scale-height is underestimated by about 25%. It is important to note that the density ratio, ξ, does have an important effect of the variation of period ratio. An increase of ξ to the value of 10 would result in relative percentage change reduction and the maximum value of the change is attaining its maximum value at 33% (for χ = 0.2 and L/πHi = 1).\n\nThe same analysis was repeated for prominence structures. These structures are known to be of chromospheric origin and show rather long stability. Prominence fibrils are surrounded by much hotter and less denser corona. For these structure we suppose that the density of the prominence two orders of magnitude times higher, i.e. we take ξ = 100. The typical temperature of prominences varies between 5 × 103 and 104 K, while the temperature of the surrounding corona can be even two orders of magnitude higher. That is why, the value of χ is chosen to change in the interval 50–150.", null, "Fig. 3The same as in Fig. 1 but we plot the variation of P1/P2 for prominences where ξ = 100 and χ varies between 50 and 150.\n\nAs we can see in Fig. 3, the changes of the period ratio P1/P2 for prominences does not show large variation with χ and an analysis of the relative change (compared to the case corresponding to χ = 50) would reveal that these changes are of the order of 0.1%.\n\nStrictly speaking the form of equilibrium densities given by Eq. (7) is obtained after imposing an equilibrium of forces along the vertical direction (considering that the loop is vertical) when the forces created by pressure gradients are balanced by gravitational forces. Moreover, the density scale-heights given before are connected to the gravitational acceleration. In an isothermal plasma the density scale-height is given as", null, "where cS is the sound speed and γ is the adiabatic index. In this case, the effect of the environment is described by a similar equation as given by Eq. (1), however, the governing equation is supplemented by an extra term that describes dispersive effects. Recently, Ballai et al. (2008) studied the nature of forced kink oscillations in a coronal loop and they obtained that the dynamics of the oscillations is given by an inhomogeneous equation of the form", null, "(12)where ℱ represents the external driver. This equation can be cast into a Klein-Gordon equation after introducing a new function, such that vr(z,t) = Q(z,t)eλ(z)z, where the value of λ(z) is chosen in such a way that all first derivatives with respect to z vanish. Introducing this ansatz into Eq. (12) we obtain", null, "", null, "", null, "(13)The condition that the coefficient of ∂Q/∂z is zero reduces to", null, "(14)With these restrictions, our governing equation transforms into", null, "(15)Here ωC is the cut-off frequency of kink oscillations and is given by", null, "(16)It is obvious that the existence of this dispersive term is due to the different densities between the loop and its environment. The dispersion will affect the value of the P1/P2 period ratio. In the case of a homogeneous loop (understood in a local sense) where the propagation of kink oscillations is described by a Klein-Gordon equation, we can easily find that the period ratio is given by", null, "(17)where the second term in the square bracket gives the deviation of P1/P2 from 2.\n\nIt is well known (see, e.g. Rae & Roberts 1982; Ballai et al. 2006) that only those waves will be able to propagate in such magnetic structure whose frequencies are larger than the cut-off frequency given by Eq. (16). This condition will impose an upper boundary on the applicability region of the variables L/πHi and χ. The solution of Eq. (15) is a transversal kink oscillation propagating with speed cK followed by a wake that is oscillating with the frequency ωC. Due to the height dependence of densities, the cut-off frequency will also depend on z. In terms of the variables used in our analysis, the cut-off frequency can be written as", null, "(18)where we used the notations", null, "In order to evaluate the effect of the cut-off on the period ratio, P1/P2, let us now return to Eq. (12) and repeat the same calculation as before. For simplicity we neglect the inhomogeneous part on the RHS of Eq. (12) and assume that perturbations oscillate with the the same real frequency, ω. Using the same variational method, for the fundamental mode we obtain", null, "(19)Here", null, "and", null, "are already defined by Eq. (9) and", null, "is simply given by", null, "(20)For the first harmonic we obtain that the frequency is given by", null, "(21)where", null, "and", null, "are specified by Eq. (10) and", null, "is defined as", null, "(22)The influence of the kink cut-off period on the period ratio P1/P2 is shown in Figs. 4 and 5 and its effect becomes obvious when Figs. 1 and 4 are compared.", null, "Fig. 4The variation of P1/P2 period ratio for coronal loops when the effect of the cut-off period is taken into account.\n\nFirst of all, we need to note that the effect of the dispersive term is more accentuated for increasing values of L/πHi. Next, imposing the condition that the periods, we are investigating, are smaller than the cut-off period means that the domain of interest is restricted, as shown in Fig. 4 (in fact, only P1 is required to be smaller than the cut-off period, as P2 is always smaller than P1). The region where the above condition is not satisfied was flagged by zero and the drop in the P1/P2 to zero represents the boundary of the region where this imposed condition is satisfied. Looking from above, the domain of applicability is shown in Fig. 5, with the domain labelled by index “I” indicating the region where the periods are smaller than the cut-off period, while the region “II” corresponds to the set of (L/πHi, χ) values for which no physical solution is found.", null, "Fig. 5The domain where the periods of fundamental mode and its first harmonic are smaller than the kink cut-off period, given by Eq. (16) for coronal conditions. The domain of permitted values is shown by label “I”, while region “II” corresponds to the unphysical results.\n\nWhen the dispersive term is not taken into account, the P1/P2 period ratio is independent on the length of the loop and the Alfvén speed (basically, the independent parameter is the density ratio, ξ). However, once the dispersive term is considered in Eq. (12), the P1/P2 ratio will depend on the length and the Alfvén speed measured at the apex of the loop. For the numerical example shown here, we have chosen a length of 150 Mm and an Alfvén speed at the apex of the loop (z = 0) of 1000 km s-1. Since the length of the loop is given, varying L/πHi would mean a change in Hi. In our numerical analysis we stopped at L/πHi = 5 that corresponds to a scale-height of 9.5 Mm. Assuming a loop in hydrostatic equilibrium, the value of H = 9.5 Mm leads to a temperature of 0.2 MK. As the value of L/πHi decreases, the scale-height increases. It is easy to verify that for a fixed value of L, the P1/P2 period ratio is proportional to vAi(0), and an increase/decrease of 200 km s-1 would result in a change of only 4% towards the large L/πHi part of our investigated domain. If we fix the value of the Alfvén speed at the loop apex, the variation of P1/P2 is inversely proportional to L, but again, for a change of 50 Mm in L drives changes of the order of 2% for large values of L/πHi.\n\nFor solar prominences we repeated the calculations but now assuming that ξ = 100, L = 1 Mm and vAi(0) = 120 km s-1 (see Fig. 6). Under these conditions it is obvious that possible solutions are found for L/πHi < 2.4. The condition that a hydrostatic equilibrium is reached inside the prominence means that the smallest scale-height we use is 1.32 × 105 m, which corresponds to a minimum temperature of 2800 K (nearly a quarter of the typical prominence temperature). As we approach smaller values for L/πHi, the temperatures increase, so that when L/πHi = 0.7, the temperature is approximately 104 K, a typical prominence temperature.", null, "Fig. 6The same as Fig. 4, but now the period ratio is plotted for prominence conditions.\n\n## 3. Implications for magneto-seismology\n\nThe immediate implication of our calculations is that the P1/P2 period ratio has no one-to-one correspondence with the internal stratification of the magnetic structure, but depends also on the temperature ratio between the interior and exterior of the magnetic structure, i.e. an observed period ratio allows the diagnostics of the temperature ratio, too. Our analysis shows that the effect of temperature difference is more pronounced for those cases where the temperature inside the waveguide is larger than outside (e.g. coronal loops) and, in general, negligible in prominence cases. For coronal loops it is also evident that noticeable effects of the temperature difference on the P1/P2 ratio are encountered for relative small values of L/πHi (say, below 5) and temperature ratio that are smaller than 70%.\n\nOur analysis also opens a new way of diagnosing the multi-temperature loops and their environment. The relations derived in the present study show that the physical parameters entering the problem are the density ratio, temperature ratio and the ratio of loop length and scale-height. Out of these quantities, the density is the parameter that can be determined (although with errors) from emission, so we will suppose that the value of ξ is known. The diagnostics of the loop in the light of the new introduced parameter becomes possible once we specify an additional relation connecting the temperature ratio and the density scale-height measured against the length of the loop. As a possibility we investigate the case when for the same loop we can determine now only the period of the fundamental mode (P1) and its first harmonic (P2) but also the period of the second harmonic, here denoted by P3. Now we can form a new ratio, P1/P3, which can be determined in a similar way as above. Since the measurement of the three periods refer to the same loop we can estimate the value of χ and L/πHi in a very easy way.", null, "Fig. 7An example on how the period ratio of the first three harmonics of a coronal loop kink oscillations can be used to diagnose the density scale-height of the loop and the temperature difference between the coronal loop and its environment. Here the P1/P2 dependence is shown by the dotted line while the solid line stands for the value of P1/P3.\n\nIn Fig. 7 we illustrate such a case. We suppose that for a loop of half-length of 150 Mm with an Alfvén speed at the apex of the loop of 1000 km s-1 we measure the period ratio of 1.72 for P1/P2 and 2.67 for P1/P3. By specifying the value of the period measurement means that in a dependence similar to the one shown in Fig. 1 we obtain an arc which in a (L/πHi) coordinate system will look like the dotted line curve in Fig. 8. In a similar way, for the value of P1/P3 = 2.67 we would obtain another curve, here shown by the solid line (here the curves are the projections of the intersection of similar surfaces as in Fig. 1 with the horizontal surface corresponding to the specified period ratio). Since the two measurements correspond to the same loop, their intersection point will give us the exact value of L/πHi and χ. For the particular example used here we obtain L/πHi = 0.57 and χ = 0.67. Our results show very little sensitivity with the density ratio, for example, if the ratio would be 10 then the intersection point would change to L/πHi = 0.54 and χ = 0.66.\n\n## 4. Conclusions\n\nIn order to carry out coronal seismology it is imperative to know the relationship between the composition of a plasma structure and the oscillations supported by the coronal loop. High resolution observations make possible an accurate diagnostic of not only the magnetic field strength, but also the thermodynamical state of the plasma.\n\nThe period ratio P1/P2 (and its deviation for the canonical value of 2) belonging to the period of transversal fundamental kink mode and its first harmonic is a perfect tool for diagnosing the longitudinal structure of magnetic structures. In our study we investigated the effect of the environment on the period ratio assuming that the density scale-heights (implicitly the temperature) inside and outside magnetic structures are different.\n\nUsing a simple variational method first applied in this context by McEwan et al. (2008), we derived for the first time an analytical expression that connects the value of the period of kink oscillations and parameters of the loop. We showed that in the case of coronal loops the effect of temperature difference between the loop interior and exterior can lead to changes of the order of 30–40% that could have significant implications on the diagnosis of longitudinal density structuring of the coronal loop. In the case of prominences, due to the very large density and temperature difference between the prominence and coronal plasma, the changes in P1/P2 due to the different temperature are very small. Once dispersive effects are taken into account (through a Klein-Gordon equation) the domain of applicability of P1/P2 seismology in the case of coronal loops becomes restricted and physically accepted solutions can not be found for any temperature ratio (here denoted by the parameter χ).\n\nSince our model introduces a new variable in the process of plasma diagnostic a new relation is needed that connects the parameters of interest. To illustrate the possibilities hidden in our analysis we have chosen the case when the same loop shows the presence of the additional second harmonic. Superimposing the dependences of the P1/P2 and P1/P3 ratios with respect to the temperature ratio factor, χ and L/πHi we could find the set of the values that satisfies a hypothetical measurement.\n\nFinally we need to emphasise that our approach supposes a certain degree of simplification, therefore our results do not provide an absolute qualitative and quantitative conclusion. First we supposed that the loop is thin, and Eq. (1) can be applied to describe the dynamics of kink oscillations in coronal loops. It is obvious that this statement is not true for very short loops (the ratio of the loop ratio and its length is not very small) in which case, the governing equation has to be supplemented by an extra term. Secondly, our isothermal supposition of the loop and its environment is also that needs refinement as observations (see, e.g. Winebarger et al. 2003; Warren et al. 2008; Berger et al. 2011; Mulu-Moore 2011) show that the loops are not always in hydrostatic equilibrium nor isothermal. Here we supposed the idealistic situation of a static background, however recent analysis by Ruderman (2011) showed that the the temporal dependence of density through flow and cooling can also influence the ratio of the two periods.\n\n## Acknowledgments\n\nI.B. acknowledges the financial support by NFS Hungary (OTKA, K83133). We are grateful to the anonymous referee for his/her suggestions that helped to improve the quality of the paper.\n\n## References\n\n1. Andries, J., Arregui, I., & Goossens, M. 2005, ApJ, 624, L57 [NASA ADS] [CrossRef] [Google Scholar]\n2. Andries, J., van Doorsselaere, T., Roberts, B., et al. 2009, Space Sci. Rev., 149, 3 [NASA ADS] [CrossRef] [Google Scholar]\n3. Aschwanden, M. J., & Schrijver, C. J. 2011, ApJ, 736, 102 [NASA ADS] [CrossRef] [Google Scholar]\n4. Aschwanden, M. J., de Pontieu, B., Schrijver, C. J., & Title, A. M. 2002, Sol. Phys., 206, 99 [NASA ADS] [CrossRef] [Google Scholar]\n5. Ballai, I. 2007, Sol. Phys., 246, 177 [NASA ADS] [CrossRef] [Google Scholar]\n6. Ballai, I., Erdélyi, R., & Pintér, B. 2005, ApJ, 633, L145 [NASA ADS] [CrossRef] [Google Scholar]\n7. Ballai, I., Erdélyi, R., & Hargreaves, J. 2006, Phys. Plasmas, 14, 042108 [NASA ADS] [CrossRef] [Google Scholar]\n8. Ballai, I., Douglas, M., & Marcu, A. 2008, A&A, 488, 1125 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n9. Ballai, I., Jess, D., & Douglas, M. 2011, A&A, 534, A13 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n10. Banerjee, D., Erdélyi, R., Oliver, R., & O’Shea, E. 2007, Sol. Phys., 246, 3 [NASA ADS] [CrossRef] [Google Scholar]\n11. Berger, T., Testa, P., Hillier, A., et al. 2011,Nature, 472, 197 [Google Scholar]\n12. De Moortel, I., & Brady, C. S. 2007, ApJ, 664, 1210 [NASA ADS] [CrossRef] [Google Scholar]\n13. Dymova, M. V., & Ruderman, M. S. 2005, Sol. Phys., 229, 79 [NASA ADS] [CrossRef] [Google Scholar]\n14. Dymova, M. V., & Ruderman, M. S. 2006, A&A, 457, 1069 [Google Scholar]\n15. Edwin, P. M., & Roberts, B. 1983, Sol. Phys., 88, 179 [NASA ADS] [CrossRef] [Google Scholar]\n16. Erdélyi, R., & Ballai, I. 2007, Astron. Nachr., 328, 726 [NASA ADS] [CrossRef] [Google Scholar]\n17. Eto, S., Isobe, H., Naukage, N., et al. 2002, PASJ, 54, 481 [NASA ADS] [CrossRef] [Google Scholar]\n18. Gruszecki, M., Murawski, K., Selwa, M., & Ofman, L. 2006, A&A, 460, 887 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n19. Gruszecki, M., Murawski, K., Solanski, S. K., & Ofman, L. 2007, A&A, 469, 1117 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n20. Gruszecki, M., Murawski, K., & Ofman, L. 2008, A&A, 488, 757 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n21. Isobe, H., & Tripathi, D. 2007, A&A, 449, L17 [Google Scholar]\n22. Jing, J., Lee, J., Spirock, T. J., et al. 2003, ApJ, 584, L103 [NASA ADS] [CrossRef] [Google Scholar]\n23. Klimchuk, 2006, Sol. Phys., 234, 41 [NASA ADS] [CrossRef] [Google Scholar]\n24. Liu, W., Nitta, N. V., Schrijver, C. J., et al. 2010, ApJ, 723, 53 [Google Scholar]\n25. McEwan, M. P., Donnelly, G. R., Díaz, A. J., & Roberts, B. 2006, A&A, 460, 893 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n26. McEwan, M. P., Díaz, A. J., & Roberts, B. 2008, A&A, 481, 819 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n27. McLaughlin, J. A., & Ofman, L. 2008, ApJ, 682, 1338 [NASA ADS] [CrossRef] [Google Scholar]\n28. Moreton, G. E., & Ramsey, H. E. 1960, Astron. Soc., 72, 357 [Google Scholar]\n29. Morton, R., & Erdélyi, R. 2009, A&A, 502, 315 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n30. Morton, R., & Ruderman, M. S. 2011, A&A, 527, A53 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n31. Morton, R., Ruderman, M. S., & Erdélyi, R. 2011, A&A, 534, A27 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n32. Mulu-Moore, F. M., Winebarger, A. R., Warren, H. P., & Aschwanden, M. 2011, ApJ, 733, 59 [NASA ADS] [CrossRef] [Google Scholar]\n33. Murawski, K., & Musielak, Z. E. 2010, A&A, 518, A37 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n34. Nakariakov, V. M., & Ofman, L. 2001, A&A, 372, 253 [Google Scholar]\n35. Nakariakov, V. M., Ofman, L., Deluca, E. E., Roberts, B., & Davila, J. M. 1999, Science, 285, 862 [NASA ADS] [CrossRef] [PubMed] [Google Scholar]\n36. Ofman, L. 2007, ApJ, 665, 1134 [NASA ADS] [CrossRef] [Google Scholar]\n37. Ofman, L. 2009, ApJ, 694, 502 [NASA ADS] [CrossRef] [Google Scholar]\n38. Ofman, L., & Aschwanden, M. J. 2002, ApJ, 576, L153 [NASA ADS] [CrossRef] [Google Scholar]\n39. Ofman, L., & Thompson, B. J. 2002, ApJ, 574, 440 [NASA ADS] [CrossRef] [Google Scholar]\n40. Ofman, L., & Wang, T. J. 2008, A&A, 482, 9 [Google Scholar]\n41. Okamoto, T. J., Nakai, H., & Keiyama, A. 2004, ApJ, 608, 1124 [NASA ADS] [CrossRef] [Google Scholar]\n42. Patsourakos, S., & Vourlidas, A. 2009, ApJ, 700, L182 [NASA ADS] [CrossRef] [Google Scholar]\n43. Pintér, B., Jain, R., Tripathi, D., & Isobe, H. 2008, ApJ, 680, 1560 [NASA ADS] [CrossRef] [Google Scholar]\n44. Rae, I. C., & Roberts, B. 1982, ApJ, 256, 761 [NASA ADS] [CrossRef] [Google Scholar]\n45. Ramsey, H. E., & Smith, S. F., 1966, AJ, 71, 197 [NASA ADS] [CrossRef] [Google Scholar]\n46. Roberts, B., Edwin, P. M., & Benz, A. O. 1984, ApJ, 279, 857 [NASA ADS] [CrossRef] [Google Scholar]\n47. Ruderman, M. S. 2011, Sol. Phys., 271, 41 [NASA ADS] [CrossRef] [Google Scholar]\n48. Ruderman, M. S., & Erdélyi, R. 2009, Space Sci. Rev., 149, 199 [NASA ADS] [CrossRef] [Google Scholar]\n49. Ruderman, M. S., & Roberts, B. 2002, ApJ, 577, 475 [NASA ADS] [CrossRef] [Google Scholar]\n50. Ruderman, M. S., Verth, G., & Erdélyi, R. 2008, ApJ, 686, 694 [NASA ADS] [CrossRef] [Google Scholar]\n51. Selwa, M., & Ofman, L. 2009, Ann. Geophys., 27, 3899 [Google Scholar]\n52. Selwa, M., & Ofman, L. 2010, ApJ, 714, 170 [NASA ADS] [CrossRef] [Google Scholar]\n53. Selwa, M., Murawski, K., Solanki, S. K., & Ofman, L. 2010, A&A, 512, A76 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n54. Selwa, M., Ofman, L., & Solanski, S. K. 2011a, ApJ, 726, 42 [NASA ADS] [CrossRef] [Google Scholar]\n55. Selwa, M., Solanski, S. K., & Ofman, L. 2011b, ApJ, 728, 87 [NASA ADS] [CrossRef] [Google Scholar]\n56. Thompson, B. J., Gurman, J. B., Neupert, W. M., et al. 1999, ApJ, 517, L151 [NASA ADS] [CrossRef] [Google Scholar]\n57. Uchida, Y. 1970, Astron. Soc. Japan, 22, 341 [Google Scholar]\n58. Van Doorsselaere, T., Nakariakov, V. M., & Verwichte, E. 2007, A&A, 473, 959 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n59. Van Doorsselaere, T., Ruderman, M. S., & Robertson, D. 2008, A&A, 485, 849 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n60. Van Doorsselaere, T., Birtlill, D. C. C., & Evans, G. R. 2009, A&A, 508, 1485 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n61. Verth, G., & Erdélyi, R. 2008, A&A, 486, 1015 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n62. Verth, G., Van Doorsselaere, T., Erdélyi, R., & Goossens, M. 2007, A&A, 475, 341 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n63. Verth, G., Erdélyi, R., & Jess, D. B. 2008, ApJ, 687, L45 [NASA ADS] [CrossRef] [Google Scholar]\n64. Verwichte, E., Aschwanden, M. J., Van Doorsselaere, T., Foullon, C., & Nakariakov, V. M. 2009, ApJ, 698, 397 [NASA ADS] [CrossRef] [Google Scholar]\n65. Warren, H. P., Ugarte-Urra, I., Doschek, G. A., et al. 2008, ApJ, 686, 131 [Google Scholar]\n66. Wang, T. J., Solanki, S. K., Curdt, W., et al. 2003, A&A, 406, 1105 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]\n67. Wills-Davey, M. J., & Thompson, B. J. 1999, Sol. Phys., 190, 467 [NASA ADS] [CrossRef] [Google Scholar]\n68. Winebarger, A. R., Warren, H. P., & Seaton, D. B. 2003, ApJ, 593, 1164 [NASA ADS] [CrossRef] [Google Scholar]\n\n## Online material\n\n### Appendix A: Corrections to the eigenfunctions due to the density stratification\n\nIn the Appendix we estimate the corrections to the chosen eigenfunctions due to the density stratification. Analytical progress can be made in the small y/χ limit. Since χ is a value smaller than one, this condition would automatically mean that we work in the small y limit, provided χ is not becoming too small. We are interested only in the characteristics of fundamental mode of kink oscillations and its first harmonic. Following Eq. (3) with the boundary conditions vr(L) = dvr(0)/dz = 0 and vr(0) = vr(L) = 0 for the fundamental mode and first harmonic, we introduce a new variable so that", null, "In the new notations the density inside the loop can be written as", null, "(A.1)where h is the loop height above the solar atmosphere (a similar equation can be written for the external density). Next, we are working in the approximation h/Hi = ϵ ≪ 1, so Eq. (3) becomes", null, "(A.2)where ξ = ρi(0)/ρe(0) > 1 is the density ratio and χ = He/Hi < 1, with He and Hi being the density scale heights inside and outside the loop. Let us we write vr and Ω as", null, "(A.3)Substituting these expansions into Eq. (A.2) and collecting terms proportional to subsequent powers of ϵ we obtain", null, "These equations must be solved separately for the fundamental mode and its first harmonic taking into account the boundary conditions.\n\n• For the fundamental mode", null, "• For the first harmonic", null, "Let us first calculate the correction to the fundamental mode. It is easy to show that the solution of Eq. (A.4) taking into account the above boundary condition becomes", null, "(A.6)and", null, "(A.7)It is important to note that the form of the above solution is exactly the same as the solution we employed for the eigenfunction, vr. In the next order of approximation we obtain Eq. (A.5) which can be written as", null, "(A.8)This boundary value problem will permit solutions only if the right-hand side satisfies the compatibility condition that can be obtained after multiplying the left-hand side by the expression of", null, "and integrating with respect to the variable ζ between 0 and 1, or", null, "After some straightforward calculus we can find that", null, "(A.9)As a result, Eq. (A.5) becomes", null, "(A.10)This differential equation will have the solution", null, "(A.11)Applying the boundary condition", null, ", we find the constant", null, "In order to find the value of C2, we use the property of orthogonality, i.e.", null, "which result in", null, "As a result, the first order correction to vr corresponding to the fundamental mode is", null, "(A.12)In Fig. A.1 we plot the correction to the eigenfunction for ϵ = 0.1, χ = 0.9, and ξ = 10. Figure (A.1) shows that we can approximate vr(z) by cos(πz/2L) since the first order correction brings changes of about 1(%), i.e. insignificant.", null, "Fig. A.1 Correction to the eigenfunction for the fundamental mode kink oscillation when ϵ = 0.1. Here L = 1.5 × 108 m represents the loop length\n\n#### A.1. Corrections to the first harmonic\n\nThe same analysis can be repeated for the first harmonic, taking into account the right boundary conditions. After a straightforward calculation it is easy to show that", null, "The correction to the eigenfunction corresponding to the first harmonic has been plotted in Fig. A.2 for the same values as before. It is obvious that the changes introduced by stratification in the value of the eigenfunction are of the order of 2(%), i.e. negligably small.", null, "Fig. A.2The same as Fig. A.1 but here we represent the correction to the eigenfunction for the first harmonic kink oscillation.\n\nThe two figures show that the effect of density stratification becomes more important for higher harmonics. Given the very large values of χ we used for prominences, the approximations used in this Appendix will always be valid. Fo the graphical represention of corrections in Figs. A.1 and A.2 we used χ = 1. If we lower this value to, e.g. 0.7 the corrections would still be small since the maximum relative change in the eigenfunction describing the fundamental mode would be 1.1%, while for the first harmonic, this would increase to 3.5%.\n\nThe robustness of our analysis was checked using a full numerical investigation for arbitrary values of χ and y. A typical dependence of the P1/P2 period ratio with respect to L/πHi for one value of χ is shown in Fig. A.3 where the solid line corresponds to the analytical and the dotted line represent the numerical results, in both cases the density is inhomogeneous with respect to the coordinate z. The loop is set into motion using a gaussian-shaped source and we use a full reflective boundary conditions at the two footpoints of the loop. After the oscillations are formed, we use the FFT procedure to obtain the values of periods. Our analysis shows that the differences between the results obtained using the variational method and a full numerical investigation are of the order of 7% but towards the large range of L/πHi. Restricting ourself to realistic values, i.e. L/πHi < 5, we see that the results obtained with the two methods coincide with great accuracy.", null, "Fig. A.3Comparison of the analytical (solid line) and numerical (dotted line) results for the P1/P2 variation with L/πHi for coronal case corresponding to χ = 0.53.\n\n## All Figures", null, "Fig. 1The variation of the P1/P2 period ratio with the temperature parameter, χ, and the ratio L/πHi for the case of a typical coronal loop (the density ratio, ξ, is 2). In the text", null, "Fig. 2The relative variation of the P1/P2 period ratio with the temperature parameter, χ, and the ratio L/H for the case of a typical coronal loop (the density ratio, ξ, is taken to be 2). In the text", null, "Fig. 3The same as in Fig. 1 but we plot the variation of P1/P2 for prominences where ξ = 100 and χ varies between 50 and 150. In the text", null, "Fig. 4The variation of P1/P2 period ratio for coronal loops when the effect of the cut-off period is taken into account. In the text", null, "Fig. 5The domain where the periods of fundamental mode and its first harmonic are smaller than the kink cut-off period, given by Eq. (16) for coronal conditions. The domain of permitted values is shown by label “I”, while region “II” corresponds to the unphysical results. In the text", null, "Fig. 6The same as Fig. 4, but now the period ratio is plotted for prominence conditions. In the text", null, "Fig. 7An example on how the period ratio of the first three harmonics of a coronal loop kink oscillations can be used to diagnose the density scale-height of the loop and the temperature difference between the coronal loop and its environment. Here the P1/P2 dependence is shown by the dotted line while the solid line stands for the value of P1/P3. In the text", null, "Fig. A.1 Correction to the eigenfunction for the fundamental mode kink oscillation when ϵ = 0.1. Here L = 1.5 × 108 m represents the loop length In the text", null, "Fig. A.2The same as Fig. A.1 but here we represent the correction to the eigenfunction for the first harmonic kink oscillation. In the text", null, "Fig. A.3Comparison of the analytical (solid line) and numerical (dotted line) results for the P1/P2 variation with L/πHi for coronal case corresponding to χ = 0.53. In the text\n\nCurrent usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform.\n\nData correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days." ]
[ null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq5.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq8.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq18.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq22.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq25.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq26.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq27.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq31.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq37.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq48.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq54.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq56.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq57.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq58.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig1_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig2_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig3_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq74.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq77.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq81.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq82.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq83.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq85.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq86.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq88.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq89.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq90.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq91.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq92.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq93.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq94.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq95.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq96.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq97.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq56.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq57.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq98.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq99.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig4_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig5_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig6_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig7_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq119.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq120.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq123.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq127.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq129.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq130.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq131.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq132.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq133.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq134.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq135.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq137.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq138.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq139.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq140.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq141.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq142.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq144.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq145.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq146.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig8_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-eq151.png", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig9_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig10_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig1_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig2_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig3_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig4_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig5_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig6_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig7_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig8_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig9_small.jpg", null, "https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11-fig10_small.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86722094,"math_prob":0.9596628,"size":49398,"snap":"2023-40-2023-50","text_gpt3_token_len":12910,"char_repetition_ratio":0.17315869,"word_repetition_ratio":0.1305291,"special_character_ratio":0.2609215,"punctuation_ratio":0.16183767,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98931324,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,6,null,3,null,6,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,6,null,3,null,3,null,6,null,6,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T17:05:45Z\",\"WARC-Record-ID\":\"<urn:uuid:ef035cce-de92-433c-96b4-7fc0f989d1b0>\",\"Content-Length\":\"225610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6290864-749f-4d19-91cd-6993a69542ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c7ceccb-6c5d-414c-abae-fec673870aaf>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://www.aanda.org/articles/aa/full_html/2012/01/aa17957-11/aa17957-11.html\",\"WARC-Payload-Digest\":\"sha1:CVJILQYSDZYZ4NDWP6GD3ISWLOKTEZAV\",\"WARC-Block-Digest\":\"sha1:T55IWALDDOVIAGABUXCJK5P57VIAWQAO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100229.44_warc_CC-MAIN-20231130161920-20231130191920-00466.warc.gz\"}"}
https://study.com/academy/exam/course/high-school-geometry-help-course.html
[ "# High School Geometry: Help and Review Final Exam\n\nFree Practice Test Instructions:\n\nChoose your answer to the question and click 'Continue' to see how you did. Then click 'Next Question' to answer the next question. When you have completed the free practice test, click 'View Results' to see your results. Good luck!\n\n#### Question 2 2. Locate point (-2, 2).", null, "#### Question 3 3. Which pair of angles are alternate interior angles?", null, "#### Question 4 4. All of the following statements about the pictured triangle must be true, EXCEPT:", null, "#### Question 5 5. If arc AC is 122 degrees and arc BC is 192 degrees, what is the measure of angle ACB?", null, "#### Question 6 6. Given the triangle PQE, calculate the measure of Angle Q.", null, "#### Question 10 10. In the picture below, if arc AB is 82 degrees, what is the measure of angle BAC?", null, "#### Question 12 12. In the pictured triangle, XV is an angle bisector. What is the length of XY?", null, "#### Question 14 14. Find the perimeter.", null, "" ]
[ null, "https://study.com/cimages/multimages/16/408598836_cartesian_q2.jpg", null, "https://study.com/cimages/multimages/16/663720054_alternate_exterior_angles_4.jpg", null, "https://study.com/cimages/multimages/16/1557983291_q3.jpg", null, "https://study.com/cimages/multimages/16/2057715481_q5.jpg", null, "https://study.com/cimages/multimages/16/angles_and_triangles_practice_-_quiz_question_4.png", null, "https://study.com/cimages/multimages/16/1206463318_q4.jpg", null, "https://study.com/cimages/multimages/16/845160848_q3.jpg", null, "https://study.com/cimages/multimages/16/50c08002-5b9a-4221-a5ba-11f4e920727f_capture11.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90300417,"math_prob":0.53129834,"size":1952,"snap":"2020-10-2020-16","text_gpt3_token_len":483,"char_repetition_ratio":0.14168377,"word_repetition_ratio":0.24719101,"special_character_ratio":0.25153688,"punctuation_ratio":0.13429257,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98726237,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,3,null,8,null,9,null,null,null,null,null,9,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T01:13:33Z\",\"WARC-Record-ID\":\"<urn:uuid:cdb25a51-8955-4daf-8189-77e0b33e2df0>\",\"Content-Length\":\"147604\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b981980-9858-43fc-bf20-ae47141124a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d1af208-00e4-4d6b-a2d8-169227175066>\",\"WARC-IP-Address\":\"52.0.4.252\",\"WARC-Target-URI\":\"https://study.com/academy/exam/course/high-school-geometry-help-course.html\",\"WARC-Payload-Digest\":\"sha1:XTDWCE7I2KIH6SLZNEJVB3Y4PJQC5H4W\",\"WARC-Block-Digest\":\"sha1:STZ3ZTMKVKVVWPPMOJTSN6PBWF7O27ZT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148163.71_warc_CC-MAIN-20200228231614-20200229021614-00357.warc.gz\"}"}
https://www.mina.moe/archives/11762
[ "# 模拟费用流\n\n• 观察 Dp 方程\n• 观察流量网络\n• Dp 方程的二维直观表示\n• 括号序列与折线法\n• 单调性和凹凸性分析\n• 匹配\n\n### 模型 3\n\n• 一只兔子 $b$如果选择洞 $a$,总权值的增加量即为 $v_a+x_b$。之后一个好洞 $c$替换这个坏洞时,总权值又会增加 $s_c+y_c-v_a-2x_b$,对于好洞 $c$来说,它找到了一只兔子,并且产生了 $s_c+y_c-v_a-2x_b$的贡献,因此我们可以新增加一只权值为 $-v_a-2x_b$的兔子。\n• 一个洞 $b$如果找到了兔子 $a$,总权值的增加量 $w_a+y_b+s_b$。之后一个兔子 $c$抢走了这个洞时,总权值又会增加 $x_c-w_a-2y_b$,对于 $c$来说,它找到了一个洞,而 $a$也不会因此无家可归,因为当我们处理 $a$的时候,我们肯定为它分配了在它前面的洞,后来 $b$找到了 $a$,现在 $b$把 $a$赶走了,$a$显然可以回去找之前分配给它的洞,赶走里面住的兔子,以此类推。这样,一切又回到了 $b$没有发现 $a$时的模样,因此,对于 $c$来说,我们可以新增一个权值为 $-w_a-2y_b$的洞。\n• 一个洞 $b$如果找到了兔子 $a$,总权值增加量 $w_a+y_b+s_b$。之后一个新的洞 $c$替代了 $b$,总权值又会增加 $y_c+s_c-y_b-s_b$,对于 $c$来说,它找到了一个权值为 $-y_b-s_b$的兔子,因此我们新增一个这样的兔子。\n\n### 8 条评论", null, "#### a1b3c7d9 · 2020年3月2日 7:55 下午", null, "#### 源曲明 · 2019年11月7日 10:44 上午", null, "#### ruogu · 2019年7月24日 11:29 上午", null, "" ]
[ null, "https://secure.gravatar.com/avatar/8e7020336ac00b00fd68502c4df08498", null, "https://secure.gravatar.com/avatar/7c0f0ef2de6f472568a7f848d3959660", null, "https://secure.gravatar.com/avatar/d2bedb72fe17cc9a07620f51ddbf767c", null, "https://secure.gravatar.com/avatar/d2bedb72fe17cc9a07620f51ddbf767c", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9942819,"math_prob":0.9999063,"size":3722,"snap":"2023-40-2023-50","text_gpt3_token_len":3719,"char_repetition_ratio":0.067240454,"word_repetition_ratio":0.0,"special_character_ratio":0.23025255,"punctuation_ratio":0.018691588,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994942,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T22:11:58Z\",\"WARC-Record-ID\":\"<urn:uuid:e74433b4-e2e5-4f84-8985-7f0ac2821405>\",\"Content-Length\":\"84405\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bfdf8d1b-0934-447f-b0ab-e21757ef7c80>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8f333c1-15c2-40c7-b2cb-cd374552902c>\",\"WARC-IP-Address\":\"154.17.1.77\",\"WARC-Target-URI\":\"https://www.mina.moe/archives/11762\",\"WARC-Payload-Digest\":\"sha1:ZKJSTRWXKLMFDWQQBN5QPXG7FIDQTJF2\",\"WARC-Block-Digest\":\"sha1:G3VWNDFJCQXG6DOYJ6CO5WNA4F5WUZNY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00509.warc.gz\"}"}
https://codegolf.stackexchange.com/questions/152857/determine-tic-tac-toe-winner-round-based?noredirect=1
[ "# Determine Tic-Tac-Toe winner (round based)\n\nLet's play some code-golf!\n\nThe challenge is to find the winner of a game of Tic-Tac-Toe.\n\nThis has been done many times by giving a board that has one clear winner but here is the twist:\n\nThe cells are numbered like this:\n\n1|2|3\n-+-+-\n4|5|6\n-+-+-\n7|8|9\n\n\nYou get an array of exactly 9 moves like that:\n\n{3, 5, 6, 7, 9, 8, 1, 2, 3}\n\n\nThis is parsed as follows:\n\n• Player 1 marks cell 3\n• Player 2 marks cell 5\n• Player 1 marks cell 6\n• Player 2 marks cell 7\n• Player 1 marks cell 9\n• Player 1 has won\n\nNote: The game does not stop after one player has won, it may happen that the losing player manages to get three in a row after the winning player, but only the first win counts.\n\nYour job is now to get 9 numbers as input and output the winning player and the round in which the win occured. If no one wins, output something constant of your choice. You can receive input and provide output through any standard mean / format.\n\nHave fun!\n\nSome more examples as requested:\n\n{2,3,4,5,6,7,1,8,9} => Player 2 wins in round 6\n{1,2,4,5,6,7,3,8,9} => Player 2 wins in round 8\n{1,2,3,5,4,7,6,8,9} => Player 2 wins in round 8\n\n• Welcome to PPCG! This is a nice first post, but usually we do not like very restrictive input / output formats. Would you consider removing the \"Player X wins in round Y\" and let us output in any reasonable format, like a list [X, Y]? In case of a tie, can we output any other consistent value instead? I recommend so, because printing those exact strings aren't really part of golfing. For future challenge ideas, I recommend using the sandbox. :-) – Mr. Xcoder Jan 9 '18 at 10:17\n• Sorry, my bad. I think it is correct now. – Grunzwanzling Jan 9 '18 at 13:16\n• Read the challenge to the end, I say that there may be a draw and that you can output something of your choice when it happens. I return {2,6} when player 2 wins in round 6 and {0,0} when no one wins. – Grunzwanzling Jan 9 '18 at 13:18\n• Can we use 0-indexed everything? (cells, players, rounds) – Arnauld Jan 9 '18 at 21:56\n• \"You get an array of exactly 9 moves like that: {3, 5, 6, 7, 9, 8, 1, 2, 3}\" - should 3 really appear twice? – Jonathan Allan Jan 9 '18 at 22:09\n\n# Retina, 114 bytes\n\n(.)(.)\n$1O$2X\n^\n123;;456;;789¶X\n{(.)(.*¶)(.)\\1\n$3$2\n}.*(.)(.)*\\1(?<-2>.)*(?(2)(?!))\\1.*¶(..)*\n$1$#3\n.*¶\nT\nTdRd\n\n\nTry it online! Based on my answer to Tic-Tac-Toe - X or O?. Outputs X<N> if the first player wins after N turns, O<N> if the second player wins, T if neither wins. Explanation:\n\n(.)(.)\n$1O$2X\n^\n123;;456;;789¶X\n\n\nCreates an internal board, and also marks each move with the player whose move it is.\n\n{(.)(.*¶)(.)\\1\n$3$2\n\n\nApplies a move.\n\n}.*(.)(.)*\\1(?<-2>.)*(?(2)(?!))\\1.*¶(..)*\n$1$#3\n\n\nSearches for a win, and if one is found, then replace the board with the winner and the number of remaining moves.\n\n.*¶\nT\n\n\nIf the moves are exhausted and nobody has won then the game is a tie.\n\nTdRd\n\n\nCalculate the number of the round from the number of remaining moves.\n\n• This is one of the more.... voluptuous answers I've seen here. – Lord Farquaad Jan 9 '18 at 21:41\n\n# MATL, 39 bytes\n\n3:g&+XIx\"IX@oXK@(XIt!yXdyPXd&hK=Aa?KX@.\n\n\nOutput is\n\n• 1 and R, in separate lines, if user 1 wins in round R;\n• 0 and R, in separate lines, if user 2 wins in round R;\n• empty if no one wins.\n\n### Explanation\n\n3: % Push [1 2 3]\ng % Convert to logical. Gives [true true true]\n&+ % Matrix of all pairs of additions. Gives a 3×3 matrix, which represents\n% the board in its initial state, namely all cells contain 2. This value\n% means \"cell not used yet\". 1 will represent \"cell marked by user 1\",\n% and 0 will represent \"cell marked by user 2\"\nXI % Copy into clipboard I\nx % Delete\n\" % Implicit input: array with moves. For each move\nI % Push current board state\nX@ % Push iteration index (starting at 1), that is, current round number\no % Modulo 2: gives 1 or 0. This represents the current user\nXK % Copy into clipboard K\n@ % Push current move ((that is, cell index)\n( % Write user identifier (1 or 0) into that cell. Cells are indexed\n% linearly in column-major order. So the board is transposed compared\n% to that in the challenge, but that is unimportant\nXI % Copy updated board into clipboard I\nt! % Duplicate and transpose\ny % Duplicate from below: push copy of board\nXd % Extract main diagonal as a 3×1 vector\ny % Duplicate from below: push copy of transposed board\nPXd % Flip vertically and extract main diagonal. This is the anti-diagonal\n% of the board\n&h % Concatenate stack horizontally. This concatenates the board (3×3),\n% transposed board (3×3), main diagonal (3×1 vector) and anti-diagonal\n% (3×1) into an 3×8 matrix\nK= % Push current user identifier. Test for equality with each entry of the\n% 3×8 matrix\nA % For each column, this gives true if all its entries are true. Note\n% that the first three columns in the 3×8 matrix are the board columns;\n% the next three are the board rows; and the last two columns are the\n% main diagonal and anti-diagonal. The result is a 1×8 vector\na % True if any entry is true, meaning the current user has won\n? % If true\nK % Push current user identifier\nX@ % Push current round number\n. % Break for loop\n% Implicit end\n% Implicit end\n% Implicit display\n\n\n# Javascript (ES6), 130 bytes\n\nm=>m.reduce((l,n,i)=>l||(b[n-1]=p=i%2+1,\"012,345,678,036,147,258,048,246\".replace(/\\d/g,m=>b[m]).match(\"\"+p+p+p)&&[p,i+1]),0,b=[])\n\n\nf=m=>m.reduce((l,n,i)=>l||(b[n-1]=p=i%2+1,\"012,345,678,036,147,258,048,246\".replace(/\\d/g,m=>b[m]).match(\"\"+p+p+p)&&[p,i+1]),0,b=[])\nconsole.log(JSON.stringify(f([3,5,6,7,9,8,1,2,3])))\nconsole.log(JSON.stringify(f([2,3,4,5,6,7,1,8,9])))\nconsole.log(JSON.stringify(f([1,2,4,5,6,7,3,8,9])))\nconsole.log(JSON.stringify(f([1,2,3,5,4,7,6,8,9])))\n\n## Explanation\n\nm=>m.reduce((l,n,i)=> // Reduce the input array with n as the current move\nl||( // If there is already a winner, return it\nb[n-1]=p=i%2+1, // Set the cell at b[n-1] to the current player p\n\"012,345,678,036,147,258,048,246\" // For every digit in the list of possible rows:\n.replace(/\\d/g,m=>b[m]) // Replace it with the player at the cell\n.match(\"\"+p+p+p) // If any of the rows is filled with p:\n&&[p,i+1] // Return [p, current move]\n),0,b=[])\n\n• Would you mind providing an explanation or an ungolfed version please? I'm interested in understanding your solution. – Jack Jan 10 '18 at 0:53\n\n# Java (OpenJDK 8), 445 bytes\n\nint[] t(int[]m){int[][]f=new int;boolean z=false;for(int i=0;i<9;i++){f[m[i]%3][m[i]/3]=z?2:1;if(f[m[i]%3]==(z?2:1)&&f[m[i]%3]==(z?2:1)&&f[m[i]%3]==(z?2:1)||f[m[i]/3]==(z?2:1)&&f[m[i]/3]==(z?2:1)&&f[m[i]/3]==(z?2:1)||m[i]%3+m[i]/3==2&&f==(z?2:1)&&f==(z?2:1)&&f==(z?2:1)||m[i]%3==m[i]/3&&f==(z?2:1)&&f==(z?2:1)&&f==(z?2:1)){return(new int[]{(z?2:1),++i});}z=!z;}return(new int[]{0,0});}\n\n\nTry it online!\n\nReturn value {1,8} means that player 1 won in round 8. Return value {0,0} means draw.\n\n• Unless you remove all the unnecessary spacing, this answer is considered invalid due to the lack of golfing effort. Moreover, it is not really recommended to answer your own challenge that fast, and you might want to add a TIO link such that we can test your code. – Mr. Xcoder Jan 9 '18 at 11:06\n• Reference links: Serious contender, compete in your own challenge – user202729 Jan 9 '18 at 11:16\n• I am sorry, I copied the wrong thing. It is in fact way shorter – Grunzwanzling Jan 9 '18 at 11:29\n• You can see the tips for golfing in Java question to remove some bytes. For example false can be replaced by 1<0, and the space after the first ] can be removed. – user202729 Jan 9 '18 at 11:50\n• 442 bytes. Also the reason why the \"Header\" and \"Footer\" section exists on TIO is that you don't need to comment //Code that was submitted and //End of code. – user202729 Jan 9 '18 at 11:52\n\n# Kotlin, 236 bytes\n\ni.foldIndexed(l()to l()){o,(a,b),p->fun f(i:(Int)->Int)=b.groupBy(i).any{(_,v)->v.size>2}\nif(f{(it-1)/3}|| f{it%3}|| listOf(l(1,5,9),l(3,5,7)).any{b.containsAll(it)}){return p%2+1 to o}\nb to a+p}.let{null}\nfun l(vararg l:Int)=l.toList()\n\n\n## Beautified\n\n i.foldIndexed(l() to l()) { o, (a, b), p ->\nfun f(i: (Int) -> Int) = b.groupBy(i).any { (_, v) -> v.size > 2 }\nif (f { (it - 1) / 3 } || f { it % 3 } || listOf(l(1, 5, 9), l(3, 5, 7)).any { b.containsAll(it) }) {\nreturn p % 2 + 1 to o\n}\nb to a + p\n}.let { null }\nfun l(vararg l:Int)= l.toList()\n\n\n## Test\n\nfun f(i: List<Int>): Pair<Int, Int>? =\ni.foldIndexed(l()to l()){o,(a,b),p->fun f(i:(Int)->Int)=b.groupBy(i).any{(_,v)->v.size>2}\nif(f{(it-1)/3}|| f{it%3}|| listOf(l(1,5,9),l(3,5,7)).any{b.containsAll(it)}){return p%2+1 to o}\nb to a+p}.let{null}\nfun l(vararg l:Int)=l.toList()\n\ndata class Test(val moves: List<Int>, val winner: Int, val move: Int)\n\nval tests = listOf(\nTest(listOf(3, 5, 6, 7, 9, 8, 1, 2, 3), 1, 5),\nTest(listOf(2, 3, 4, 5, 6, 7, 1, 8, 9), 2, 6),\nTest(listOf(1, 2, 4, 5, 6, 7, 3, 8, 9), 2, 8),\nTest(listOf(1, 2, 3, 5, 4, 7, 6, 8, 9), 2, 8)\n)\n\nfun main(args: Array<String>) {\ntests.forEach { (input, winner, move) ->\nval result = f(input)\nif (result != winner to move) {\nthrow AssertionError(\"$input${winner to move} $result\") } } } ## TIO TryItOnline # Python 2, 170 bytes q=map(input().index,range(1,10)) z=zip(*[iter(q)]*3) o='', for l in[q[2:7:2],q[::4]]+z+zip(*z): r=[n%2for n in l];y=all(r)*2+1-any(r) if y:o+=[max(l)+1,y], print min(o) #swap cell number / turn q=map(input().index,range(1,10)) #split in 3 parts (rows) z=zip(*[iter(q)]*3) #starting value for the list with the results #since string are \"greater\" than lists, this will #be the output value when there is a draw o='', #iterate over diagonals, rows and columns for l in[q[2:7:2],q[::4]]+z+zip(*z): #use %2 to separate between player 1 and 2 r=[n%2 for n in l] #store in y the value of the player if the trio is a valid win, 0 otherwise #it's a win if all moves are from the same player y=all(r)*2+1-any(r) #if y has a valid player, add the highest turn of the trio, and the player to o if y:o+=[max(l)+1,y], #output the smaller turn of the valid winning trios print min(o) # Jelly, 38 bytes ;⁵s2ZṬḤ2¦SṖs3µ,ṚJị\"$€;;ZEÐfṀḢµ$ƤµTḢ,ị¥ Try it online! Player 1 win: [round, 1] Player 2 win: [round, 2] Tie: [0, 0] ## Python 3.6+, 137 bytes n=m=c=z=0 for a in input():m+=1<<~-int(a);c+=1;z=z or f'{c&1}:{c}'*any(m&t==t for t in[7,56,448,73,146,292,273,84]);n,m=m,n print(z or-1) Output format is winner number:round or -1 for a tie. Player 2 is 0 Player 1 is 1. Input in the form of a undeliminated string of 1-indexed square numbers. # Jelly, 35 bytes 9s3,ZU$$;ŒD€Ẏf€⁸L€3e s2ZÇƤ€ZFTḢ;Ḃ A monadic link taking a list of the moves and returning a list, [move, player] where the players are identified as 1 (first to act) and 0 (second to act). Try it online! ### How? 9s3,ZU$$;ŒD$€Ẏf€⁸L€3e - Link 1: any winning play?: list of player's moves:\n9s3 - (range of) nine split into threes = [[1,2,3],[4,5,6],[7,8,9]]\n$- last two links as a monad:$ - last two links as a monad:\nZ - transpose = [[1,4,7],[2,5,8],[3,6,9]]\nU - upend = [[7,4,1],[8,5,2],[9,6,3]]\n, - pair = [[[1,2,3],[4,5,6],[7,8,9]],[[7,4,1],[8,5,2],[9,6,3]]]\n$€ - last two links as a monad for €ach: ŒD - diagonals = [[1,5,9],[2,6],,,[4,8]] or [[7,5,3],[4,2],,,[8,6]] ; - concatenate = [[1,2,3],[4,5,6],[7,8,9],[1,5,9],[2,6],,,[4,8]] or [[7,4,1],[8,5,2],[9,6,3],[7,5,3],[4,2],,,[8,6]] Ẏ - tighten = [[1,2,3],[4,5,6],[7,8,9],[1,5,9],[2,6],,,[4,8],[7,4,1],[8,5,2],[9,6,3],[7,5,3],[4,2],,,[8,6]] - i.e.: row1 row2 row3 diag\\ x x x x col1 col2 col3 diag/ x x x x - where x's are not long enough to matter for the rest... ⁸ - chain's left argument, list of player's moves f€ - filter to keep those moves for €ach of those lists to the left L€ - length of €ach result 3e - 3 exists in that? (i.e. were any length 3 when filtered down to only moves made?) s2ZÇƤ€ZFTḢ;Ḃ$ - Main link: list of the moves e.g. [2,3,4,5,6,7,1,8,9]\ns2 - split into twos [[2,3],[4,5],[6,7],[1,8],]\nZ - transpose [[2,4,6,1,9],[3,5,7,8]]\nƤ€ - for Ƥrefixes of €ach:\nZ - transpose [[0,0],[0,0],[0,1],[0,1],]\nF - flatten [0,0,0,0,0,1,0,1,0]\nT - truthy indices [ 6 8 ]\nḢ - head (if empty yields 0) 6\nḂ - modulo by 2 (evens are player 2) 0\n; - concatenate [6,0]\n\n\n# Python 2, 168 bytes\n\nimport itertools as z\nf=lambda g:next(([i%2+1,i+1]for i in range(9) if any(c for c in z.combinations([[0,6,1,8,7,5,3,2,9,4][j]for j in g[i%2:i+1:2]],3)if sum(c)==15)),0)\n\n\nOutputs (player,round) or 0 for a tie.\n\nMaps the game onto a 3-by-3 magic square and looks for sets of 3 Os or Xs that sum to 15.\n\n# Clean, 244 ... 220 bytes\n\nimport StdEnv\nf[a,b]i#k= \\l=or[and[isMember(c+n)(take i l)\\\\c<-:\"123147159357\"%(j,j+2)]\\\\j<-[0,3..9]&h<-:\"\u0003\u0001\\0\\0\",n<-[h-h,h,h+h]]\n|k a=(1,i*2-1)|i>4=(0,0)|k b=(2,i*2)=f[a,b](i+1)\n@l=f(map(map((!!)l))[[0,2..8],[1,3..7]])1\n\n\nTry it online!\n\nThe string iterated into h contains nonprintables, and is equivalent to \"\\003\\001\\000\\000\".\n\n# Python 2, 140136 134 bytes\n\nlambda a,i=0:i<9and(any(set(a[i%2:i+1:2])>=set(map(int,t))for t in'123 456 789 147 258 369 159 357'.split())and(i%2+1,i+1)or f(a,i+1))\n\n\nTry it online!\n\nEDIT: 4 bytes + 2 bytes thx to Eric the Outgolfer.\n\nOutputs a tuple (playerNumber, roundNumber) or False if there is no winner.\n\n• 134 bytes – Erik the Outgolfer Jan 11 '18 at 13:43\n• @Erik - Yah, I should have done that already; but I'm fighting the flu and my eyes were hurting :). Thx! – Chas Brown Jan 12 '18 at 2:04\n• Well, get well soon. :) – Erik the Outgolfer Jan 12 '18 at 12:38" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9044279,"math_prob":0.9912841,"size":1483,"snap":"2020-10-2020-16","text_gpt3_token_len":485,"char_repetition_ratio":0.118999325,"word_repetition_ratio":0.07017544,"special_character_ratio":0.34322318,"punctuation_ratio":0.18351063,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97991157,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T15:30:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f632653c-c883-491d-a9ff-2d3a9ad79db1>\",\"Content-Length\":\"250285\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:618b4e0e-11a1-4570-9d4a-e777190d8284>\",\"WARC-Concurrent-To\":\"<urn:uuid:d29092bf-e664-4281-9277-ce194088bb64>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/152857/determine-tic-tac-toe-winner-round-based?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:MQAH56CKX5YRBA64YSNGSFIV2EMSOOOY\",\"WARC-Block-Digest\":\"sha1:X2A5WPZKKAS76BCFDWMSA7KNXKZ5LYZW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144979.91_warc_CC-MAIN-20200220131529-20200220161529-00294.warc.gz\"}"}
http://ibmathstuff.wikidot.com/quadratics-functions
[ "### Definition of a Quadratic Function\n\nA quadratic function can be written in the form of $ax^2+bx+c$ where $a \\neq 0$. If $a = 0$ then the function would be linear and not quadratic!\n\n### Terminology\n\nRoots and Zeros are the values of the independent variable (often x) that make the entire function zero. These coincide with the x-intercepts. The only difference being that the x-intercepts are coordinate points on a graph, roots or zeros are a value of a variable.\n\nStandard Form, Vertex Form and Factored Form Quadratics are often written in different forms. Each has is own usefulness. For more on quadratic forms try this page.\n\n### Finding Zeros\n\nA huge number of problems involving quadratics involve finding the zeros. There are three main ways of finding zeros:\n\nThe quadratic formula is in your IB Formula Booklet. I'll put it here for reference too:\n\n(1)\n\\begin{align} x=\\frac{b \\pm \\sqrt{b^2-4ac}}{2a} \\end{align}\n\nThis is a pretty simply \"plug-and-chug\" equation. The letters in the formula correspond to the quadratic form $y=ax^2+bx+c$. Its key to remember that in general two solutions come from the Quadratic Formula… this is due to the $\\pm$ in the equation.\n\nThe number of zeros and the type of zeros can be determined from the discrimnate:\n\n(2)\n\\begin{align} \\Delta = b^2 - 4ac \\end{align}\n\nThis is nothing more than the part of the quadratic equation that is in the square root. If the discriminate is zero the there will be only one solution. There are still two zeros, but they are repeated. This also coincides with the vertex being on the x-axis! If the discriminate is greater than zero there will be two solutions and they will both be real. If the discriminate is negative there will be two solution but both zeros with be imaginary.\n\n Discriminate Solutions & Zeros $\\Delta = 0$ 1 solution, two repeated real zeros $\\Delta > 0$ 2 solutions, two distinct real zeros $\\Delta < 0$ 2 solutions, two distinct imaginary zeros\n\n#### GDC - Graphing Display Calculator\n\nYour calculator has a function for finding the zero of any function. This can be found under the \"calc\" option at the top right (Ti-84).\n\nLook here for screen shots of exactly how to do it. You will need to scroll down a bit to find the part on zeros.\n\n#### Factoring\n\nThis third option is a good option if you are \"good\" at factoring and can do it quickly. Otherwise the above two methods are more \"sure-fire.\" Factoring can be a waste of time as not all quadratics can be factored, at least not with real numbers…\n\nFactoring for those unfamiliar with it looks like:\n\n(3)\n\\begin{equation} x^2+bx+c=(x+d)(x+e) \\end{equation}\n\nWhere $d+e=b$ and $(d)(e)=c$.\n\nFor those who really like to factor or are faced with a question that requires them to factor see the page regarding the Factor Theorem.\n\npage revision: 31, last edited: 26 Feb 2013 11:18" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9077948,"math_prob":0.9973644,"size":2607,"snap":"2020-45-2020-50","text_gpt3_token_len":609,"char_repetition_ratio":0.129466,"word_repetition_ratio":0.004454343,"special_character_ratio":0.22477944,"punctuation_ratio":0.07509881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994572,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-02T00:11:16Z\",\"WARC-Record-ID\":\"<urn:uuid:797a24af-ec1c-4098-be05-9c25f4e28ca2>\",\"Content-Length\":\"32521\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f523fb1-292f-4d23-b505-8bfb80d74c57>\",\"WARC-Concurrent-To\":\"<urn:uuid:63268092-b7c4-4e06-9bb8-810b914de30f>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://ibmathstuff.wikidot.com/quadratics-functions\",\"WARC-Payload-Digest\":\"sha1:GPPL7LINSS4SRYZB5PVQ5HY5CE3CAGYI\",\"WARC-Block-Digest\":\"sha1:7OQFEHROBHYNTSXBEZTWKUXP4BA5FZHN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141685797.79_warc_CC-MAIN-20201201231155-20201202021155-00280.warc.gz\"}"}
https://profmattstrassler.com/articles-and-posts/the-higgs-particle/the-standard-model-higgs/seeking-and-studying-the-standard-model-higgs-particle/
[ "# Seeking and Studying the Standard Model Higgs Particle\n\nMatt Strassler 12/11/11 (Updated now with figures.  This was written while suffering from jet lag, so it might not yet be as clear as I’d like.  Let me know what you can’t follow.)\n\nIn this article, I’m going to explain how we search for the Standard Model Higgs particle (the simplest possible type of Higgs particle that might be present in nature) and, if we find a candidate for this particle, how we check whether it really is of Standard Model type, or whether it is a look-alike that is in fact more complicated.\n\nIn particular, I’m going to combine together what I have told you about the production of the Standard Model Higgs particle  at the Large Hadron Collider [LHC] with the decays of the Standard Model Higgs particle. These can be mixed and matched: any of the possible production processes that makes a Higgs can be followed by any of the possible decays of the Higgs. Unfortunately quite a few of the resulting combinations are impractical to measure at the LHC, even with the giant data sets that will be available a decade from now. But from the many that can be measured over the coming decade, a lot of information about the Higgs can be obtained, and the question of whether a candidate Higgs particle really is or is not a Standard Model Higgs particle can be answered with reasonable confidence.\n\nFirst, of course, we have to find the darn thing, if it is there. Since lightweight Standard Model Higgs particles (with mass-energy [E = m c2] of 115-141 GeV) are in the spotlight right now, let’s focus on how to find them.\n\nThe search for the lightweight Standard Model Higgs particle (including crucial subtleties!)\n\nAlthough there are five production processes for a Standard Model Higgs particle, we simply don’t have enough data yet (and won’t anytime soon) to pick out the four smaller ones. Right now (as long as the Higgs is really of Standard Model type — otherwise this might not be true) the dominant way that the Higgs is made at the LHC is via two gluons colliding to make a Higgs particle, which proceeds via an indirect effect involving top quark/antiquark “virtual particles”. Both ATLAS and CMS have made many tens of thousands of Higgs particles already. But as I described here, most of them decay in ways that cannot be detected, at least not easily! Only the decays of the Higgs to two photons, to two W particles, and to two Z particles can be reliably detected with the current amount of data, and even then only when the W’s or Z’s themselves decay in a favorable way.  Certain other decay modes are more common, but there is so much background to these signals that they cannot be observed, at least not with the current amount of data.\n\nIn a few moments, I will point you to an article where I went into much more pedagogical detail, but let me first summarize the pros and cons of each of the three main processes that are used in the search for a lightweight Standard Model Higgs particle (in the 115-141 GeV range):\n\n1. Higgs –> ZZ –> two charged lepton/anti-lepton pairs (specifically the leptons in question are electrons or muons, not taus, which are much more difficult to handle):\n\nPros:\n\n• The signal is extremely clean: because the leptons and anti-leptons can be measured precisely, the mass that a Higgs particle would have had to have, if it produced the lepton/anti-lepton pairs in its decay, can be determined precisely, to within 3 or 4 GeV.\n• Meanwhile the background is very small, still less than one event for each 2 GeV-wide Higgs mass bin as of the end of 2011.\n\nCons:\n\n• For smaller Higgs masses, the probability for this process becomes very small.  For a 125 GeV Standard Model Higgs particle, only about 1 in 12,000 Higgs particles decays this way.\n• Experimentally, the probability of detecting all four leptons and anti-leptons is not that high, so the number of Higgs particles detected this way, for each experiment, might currently (end-2011) be zero, one, or two, and probably not more.", null, "Fig. 1: Summer 2011 data (Left: CMS, Right: ATLAS) showing the number of collisions with 2 leptons (electrons or muons only) and 2 anti-leptons versus their \"invariant mass\" (the mass that a Higgs particle must have had if the leptons and anti-leptons were its decay products.) The background from other processes is the smooth curve (violet for CMS, light gray for ATLAS; the data is shown in black points with error bars. Note the background is an average (and can be fractions of an event) while the data in each bin must be an integer (1,2,3...). Note also that the bins are 10 GeV wide, and most have zero events. The effect that Higgs particles would have had on the data is shown by peaks (red to orange for CMS, colored for ATLAS), one for each of three different masses (so only ONE peak would have appeared in the data, had the Higgs been there!)\n\nStill, although not convincing, two events at the same Higgs mass (as of end-2011) would be noteworthy, because the background to this process is so small.\n\n2. Higgs –> two photons:\n\nPros:\n\n• The signal is extremely clean: because the photons can be measured precisely, the mass that a Higgs particle would have had to have, if the photons were produced in a Higgs decay, can be known precisely, to within 1 or 2 GeV.\n• The rate is not quite so small; about 1 in 500 Higgs particles decays this way.\n• The background does not need to be calculated; it can be measured.\n\nCons:\n\n• The background is much larger than for the previous case, perhaps 400-500 events (as of end-2011) for each 1 GeV-wide Higgs mass bin.\n• The larger size of the background implies that a lot of two photon events from Higgs particles must be observed before they will stand out from the crowd.", null, "Fig. 2: Summer 2011 data (Left: CMS, Right: ATLAS) showing the number of collisions with 2 photons versus their \"invariant mass\" (the mass that a Higgs particle must have had if the photons were its decay products.) The estimated background from other processes is shown in smooth red curves; the black points with error bars are the data. The effect that a 120 GeV Standard Model Higgs particle would have had on the data is shown, exaggerated by a factor of FIVE (!!!), by little peaks (blue, at bottom, for CMS; red dotted, on the curve, for ATLAS). Note the bins are 1 GeV wide for CMS and 2 GeV wide for ATLAS, and that CMS shows 1.5 times as much data as ATLAS, for historical reasons.\n\nGenerally, the lighter the Standard Model Higgs particle, the more important is the two-photon decay for finding it. But to find the Higgs particle in the two-photon measurement requires a lot of data; we have just enough now to start having a chance of seeing it. It was easier to look for the Standard Model Higgs in the somewhat heavier range, where the decay to two W’s and to two Z’s was more common, and that’s why this range was excluded first.\n\n3. Higgs –> W W –> lepton/anti-neutrino + neutrino/anti-lepton (where again these leptons include only electrons and muons, not taus):\n\nPros:\n\n• The signal is relatively large: 1 in 100 Standard Model Higgs particles with a mass of 125 GeV would decay this way.\n• It is much larger for heavier Higgs particles, growing to nearly 1 in 20 for a Higgs mass of 160 GeV, and it played the dominant role in ruling out the Standard Model Higgs from 141 to 190 GeV.\n\nCons:\n\n• The presence of the two neutrinos means that some amount of information about each collision is unmeasurable, and the mass of a Higgs that might have been responsible for what is observed in that collision cannot be uniquely determined.\n• The background is large, has various components, and cannot be easily measured.\n• The theoretical calculation of some part of the background is not simple, and it is difficult and controversial to determine just how uncertain the calculation is.\n• Another (smaller) part of the background is either difficult or impossible to calculate and must be measured.\n• Determining the missing momentum of the neutrinos requires precise measurements of everything else in the collision, not just the charged lepton and anti-lepton.  There needs to be considerable confidence, on the part of the experimenters, that they are doing this correctly.\n• The signal appears as a small excess above the background and (unlike the other two cases just described) does not have the distinctive shape of a narrow peak above a simple background.\n\nI want to emphasize the pros and cons of the Higgs –> WW decay, which played a very big role this summer. Many of you may know there were hints at the July conference in Grenoble that a Standard Model-like Higgs might be present in the 140-150 GeV range. But as I explained in an article later that week, these hints rested on uncertain ground— precisely because they relied mainly on the WW measurement. I discussed the three crucial processes (decays to photons, ZZ and WW) in a lot of detail, showing how they are measured experimentally and what the data will look like after it is collected, and I explained why I would not trust the hints from Higgs –> WW as they were back then until signs of the Higgs showed up in either or both of the other two processes.", null, "Fig. 3: Summer 2011 data (Left: CMS, Right: ATLAS) showing analyses of collisions with a charged lepton (electron or muon), a charged anti-lepton, and signs of neutrinos. The number of events versus the angle between the lepton and anti-lepton (in the plane transverse to the beam) is shown; CMS (at left) and ATLAS (at right) do the analysis slightly differently, so the plots cannot be compared. (The top plots include events with no observed jets; the bottom plots include events with one or more jets.) The estimated background from various other processes is shown; the total background is the grey in CMS and the dark blue in ATLAS (with an uncertainty indicated in light grey for ATLAS.) The data are the black points with error-bars; note data is slightly in excess of the expectation in all plots. The shape and size of the effect that a Higgs particle would have is shown in red curves (for 160 GeV in CMS, 150 GeV in ATLAS); notice the effect would be broad and indistinct, so the mass of the Higgs could not be inferred precisely from this data.\n\nAnd indeed, I was right not to trust them.   As happens much more often than not, those hints went away — though why they went away isn’t entirely clear. [Was the hint just a statistical fluctuation that CMS and especially ATLAS both happened to have? Or was there in fact an error in how various WW backgrounds were being calculated or measured, and that affected the later analyses in Mumbai and in Paris? This issue has been controversial (my Rutgers colleagues even published a paper pointing out a plausible  problem) and the details are not public; perhaps we will never know the full story.]\n\nBut in any case, Higgs –>WW was a problem this summer, and there is a possibility it is going to be a problem on December 13th for a similar reason. Fortunately, even if this is the case, these problems will eventually go away once we have more data.  If the Higgs particle really is there at 125 GeV or something like that, we will reach the point in a few months where evidence for the Higgs particle is so strong in the processes H –> two photons and/or H –> two lepton/anti-lepton pairs that we don’t need to use any information from the WW decay mode anymore.  I suspect we won’t get to that point until the middle of 2012, but perhaps the December 13th presentations will surprise me.\n\nMore technically, for those who have scientific backgrounds: the Higgs –> WW decay suffers from substantial theoretical and systematic uncertainties.  The other two processes do not; they have mainly statistical uncertainties.  Statistical analysis of the significance of a signal is straightforward only when the dominant uncertainties are statistical,  because statistical uncertainties are essentially random, while systematic and theoretical uncertainties generally are not.\n\nSo I encourage you to read that July article, in order to learn some important lessons from the summer’s events — ones that still might be relevant this coming week, and will certainly be on my mind as I try to understand the experiments’ presentations on Tuesday.\n\nHow we study the Higgs particle once we have found it\n\nOnce we’ve got a candidate for a Higgs particle, the story has just begun.  The only thing we know about the object will be its mass, and a rough measure of its production rate times its decay rate for the one or two processes that we’ve observed initially.  There’s a lot more we need to learn, and a lot more we can learn, from the LHC.\n\nHow do we even know this new particle actually is a Higgs particle of some type (not necessarily the Standard Model version of the Higgs particle)?\n\nThe thing that makes a particle a Higgs particle, by definition, is that the field in which it is a ripple participates in giving mass to the W and Z particles. (Actually the definitions are a little slippery here; what one really should say, to avoid confusion, is that the Higgs “sector” might consist of several closely related particles, all of which are typically called Higgs particles, and at least one of which must have the property I just described.) If a field gives the W and Z particles all or part of their masses, then the field’s particle will have a strong direct interaction with two W particles and with two Z particles.  And this in turn automatically leads to three large effects:\n\n• the decays H –> WW and H –> ZZ,\n• the production process quark + antiquark –> W H and quark + antiquark –> Z H, and\n• the production process quark + quark –> quark + quark + H.\n\nSo if we can observe and measure any of these processes for a new particle, that will prove that the new particle is a Higgs particle.\n\nWhy must this interaction be present? Schematically, it is because of how the Higgs mechanism works: before the Higgs field has a non-zero value, the world allows an interaction of the form H H W W and H H Z Z, where “H” here is the Higgs field when its average value is zero, and “W” and “Z” are the W and Z fields with corresponding massless W and Z particles. This interaction allows two Higgs particles to collide to make two W particles, but there is no interaction involving just one Higgs particle at a time. But once the Higgs field has a non-zero average constant value throughout all of space and time (usually called “v”, historically) then we are motivated to write H = v + H, where H represents any difference of H from its average value v. When we do this we find\n\nH H Z Z   –>   v2 Z Z + 2 v H Z Z + H H Z Z\n\nThe first term, in red, (a constant times Z Z) shifts the energy of a Z particle that is at rest — in other words, it provides what we call mass-energy! That’s the Higgs mechanism!  This is how the Higgs field gives the Z particle a mass.  [I promise to explain this point better elsewhere, but not now.]\n\nThe second term, in blue, allows a single Higgs particle to turn into two Z particles (or a Z particle and a Z virtual particle, as described here.) It also allows a virtual Z particle to turn into a Z particle and a Higgs particle (as described here) or for two virtual Z particles to turn into a Higgs particle (also described here.)  Those are the three processes mentioned in the bullet points above.\n\nThe third term, in green, is similar to what we had when v was zero, and plays almost no role in the near term at the LHC.\n\n[Actually, indirect interactions, such as the one that leads to Higgs –> two photons, can also cause Higgs –>WW.  But an indirect interaction is much weaker than the direct interaction that I’ve just described.  A non-Higgs particle would decay to W particles with about the same rate as it decays to two photons, or less, while a Higgs particle is much more likely to decay to two W particles than it is to decay to two photons.]\n\nObservations that can detect the decays H–>WW and H –>ZZ and verify they are consistent with a pretty strong Higgs-W-W and Higgs-Z-Z interaction will occur very soon after the discovery of the Higgs particle, or maybe even during the discovery, especially if the Higgs particle is heavier than about 125 GeV. The decay H –> W W can be observed right away (though there may be controversy about it, because of the difficulties mentioned above about convincing oneself that one understands all the uncertainties in this measurement).  Sometime later, Around the same time [the LHC experiments improved their methods!] H –> Z Z will be observed (and this will be clean and convincing, but may require a lot of data, especially for a lighter Higgs.) Convincing observation of either of these decays will be incontrovertible evidence that the particle that we have discovered is a Higgs particle of some type, a ripple in a Higgs field.\n\nOnce we are sure we have a Higgs particle, there is a whole set of questions to be asked about the strength of the interactions of this particle with the other known particles. Let’s call the strength of the interaction of the Higgs with W particles gW, the strength of its interaction with bottom quarks gb, etc. We want to know whether the Higgs behaves exactly like the Standard Model Higgs\n\n• as far as the W and Z particles are concerned\n• as far as top quarks are concerned\n• as far as bottom quarks are concerned\n• as far as tau leptons are concerned\n\nThese boil down to the questions of whether gW, gZ, gt, gb, gτ are the numbers predicted in the Standard Model, which in each case are the particle’s mass divided by v (up to a square root of 2.)  We’d ask more questions about even lighter particles if we could, but the answers are probably out of reach at the LHC.\n\nAlso, we will want to know whether there any signs of new particles participating in the indirect interactions between the Higgs and gluons and/or beween the Higgs and photons. In other words, are the interaction strengths gg and gγ (γ standing for photon) what the Standard Model would predict? Let’s also recall that in the Standard Model gg is related to gt, and gγ is related to gt and gW, through the indirect effects that generate gg and gγ in the first place.\n\nUnfortunately, it’s not so easy to answer these questions right away, for the following reason. What we measure is always a production rate for a particular process multiplied by the probability for a particular decay; the production rate is related to the strength of the Higgs interaction with one class of particles, while the decay rate is related to the strength of the Higgs interaction with a second (possibly the same) class of particles.\nSo for example, when we measure the rate for g g –> H –> γγ, we are measuring something proportional to (gg gγ)2. Similarly the rate for g g –> H –> WW is proportional to (gg gW)2. There’s no way to easily measure the individual interaction strengths separately.\n\nBut that said, if the rates for gg –> H –> γγ, g g –> H –> W W and gg –> H –> Z Z come out as predicted in the Standard Model (and we’ll already have a very rough idea of whether this is the case by the end of 2012, if the Higgs is found soon) then that will give us some confidence that none of the couplings gW, gZ, gγ, gg can be very different from what it is expected for the Standard Model Higgs particle — which in turn will give confidence that gt, which contributes to gγ and gg, is also what it is predicted for the Standard Model Higgs.\n\nTo convince ourselves that gb and gτ are as predicted in the Standard Model will take longer, and will require measuring things like\n\n• quark + quark  –> quark + quark + H, with H decaying to τ+τ, and\n• up quark +down antiquark –> W+ H , with W+ decaying to a charged anti-lepton and a neutrino, and the Higgs decaying to a bottom quark/anti-quark pair\n\nIf instead it turns out that some of these measurements differ from the predictions of the Standard Model, not only will this difference tell us that we’re dealing with a Higgs more complicated than the Standard Model Higgs, and that the Standard Model needs significant modification, but also the details of the differences may give us important clues as to what changes are needed.  For instance, if the rates for g g –> H –> γγ, g g –> H –> W W and g g –> H –> Z Z are all in the predicted proportion, but are uniformly smaller from what is predicted in the Standard Model, that could suggest that there are two or more Higgs particles, and the field corresponding to the particle we are measuring is giving the known particles only a part of their masses. If instead the ratio of two-photon events to WW and ZZ events differs from the expectation in the Standard Model, then that would suggest that there are unknown particles contributing to the indirect interaction between photons and the Higgs particle. If the ratio of gb and gτ is not as predicted, that could either suggest that there is one Higgs field giving mass to the quarks and a different Higgs field giving mass to the leptons, or it could suggest that there are new particles which are indirectly affecting the interaction of Higgs particles with bottom quarks and/or tau leptons.  Measuring gt using the fifth production process (gluon + gluon –> top quark + top anti-quark + Higgs) would help distinguish those two possibilities.  In short, careful measurements of the interaction strengths of the Higgs particle with other particles will provide key tests of the Standard Model Higgs hypothesis, and provide considerable guidance should any of those tests fail.\n\nIn the end we won’t know for absolutely sure what’s going on at the LHC until we make many measurements not only of Higgs particles but of many other processes, including precision studies of top quarks and searches for many new types of particles that might or might not be present in nature. Meanwhile, other non-LHC experiments may also weigh in with important discoveries or tests of the Standard Model. In the end, we’ll have to combine all the voluminous information we have in order to reach the most complete conclusions possible.  But testing whether any Higgs particle we find is of Standard Model-type will certainly be one of the most important things the LHC will be doing over the coming years.\n\n### 14 responses to “Seeking and Studying the Standard Model Higgs Particle”\n\n1.", null, "Chris Austin\n\nThanks a lot for all these explanations. Is there a simple explanation for why the branching ratio for H -> Z Z -> e+ e- e+ e-, 1 in 48,000 for a 125 GeV Higgs if I understand correctly what you wrote, is so small compared to the branching ratio for H -> W+ W- -> e+ nu_e e- nubar_e, 1 in 400 for a 125 GeV Higgs? Is it just the higher Z mass?\n\nI think you might be missing a g in the second last paragraph, in g g -> H -> gamma gamma.\n\n•", null, "Matt Strassler\n\nThanks for catching the missin g.\n\n1) The Z is heavier than the W, yes, so for a lightweight Higgs of order 120 GeV, its virtual particle is considerably harder to make than the W’s virtual particle.\n\n2) The W has a stronger coupling to the Higgs, by a factor of 2 in the decay rate.\n\n3) The W decays to an electron or muon and the corresponding anti-neutrino about 22% of the time (which is basically just 2/9), whereas the Z has much more complicated formulas for its interactions with other particles, and due to an interesting and accidental cancellation it decays to electron-positron or muon-antimuon only 6% of the time. That ratio of 3ish gets squared (since we have two W’s and two Z’s decaying) so that’s another factor of 10.\n\n4) If there are one lepton and one antilepton in your event, your chance of detecting both of them is probably about 60%. Sometimes one of them has too low energy, or heads out too close to the beampipe, or happens to land inside a jet. When you have two leptons and two antileptons, that gets squared, and actually a little worse than that. So that’s another factor of 2 or so.\n\nThat said, I’m a little worried I overcounted something by a factor of 2 somewhere. Not sure right now, and too tired.\n\n2.", null, "Robert Garisto (@RobertGaristo)\n\nSince we’ve never observed a fundamental scalar before, presumably it will be important to check that the new particle is a spin 0 boson. I assume we can just look at the angular dependence of the Higgs decay products. What’s the best process to do that for a 125 GeV Higgs and how much data is needed?\nCheers, Robert\n\n•", null, "Matt Strassler\n\nGood question. I am sure two photons is best for a direct measurement (ZZ to leptons/antileptons is too rare, WW too indistinct and too much background); how long it takes I don’t remember. (Of course you should remember also that we won’t *know* that it is a fundamental scalar! that’s an assumption that data will have to verify over time.) More convincing will be the decays to W’s and Z’s; if those are large and in the right ratio, it will be hard to argue this is not a Higgs particle of some type.\n\n•", null, "Andy\n\nActually, the H->WW->ll searches rely on the fact that the H is spin 0, to separate the signal from the SM WW background (which is a combination of spin 1/2 and 1 propagators). The spin 0 propagator causes the two leptons to be aligned with each other (in the plane transverse to the beam). So, if we see a signal in the WW->ll channel compatible with the Higgs production x decay rate to WW, this is already strong evidence that the propagator is spin 0.\n\n•", null, "Matt Strassler\n\nYes, excellent point! Thanks for the reminder… though this will require that we fully convince ourselves that we have good control of the WW measurement’s backgrounds. I guess we will have to see where the Higgs mass ends up, and once we know it, see how effectively we can work to reduce the systematic errors. I don’t know how bad it gets in the 115-130 GeV mass range." ]
[ null, "https://profmattstrassler.files.wordpress.com/2011/12/hcp_h2zz1.png", null, "https://profmattstrassler.files.wordpress.com/2011/12/hcp_h2photons.png", null, "https://profmattstrassler.files.wordpress.com/2011/12/hcp_h2ww1.png", null, "https://1.gravatar.com/avatar/48b0effd1f8f54bdeb79c178110f3037", null, "https://1.gravatar.com/avatar/adeb244d3a56730027286aae9204ecc5", null, "https://0.gravatar.com/avatar/c697586fb66b4e6c6e2a21a82f323386", null, "https://1.gravatar.com/avatar/adeb244d3a56730027286aae9204ecc5", null, "https://0.gravatar.com/avatar/0839581bc74c84fea9c2b61588365e1b", null, "https://1.gravatar.com/avatar/adeb244d3a56730027286aae9204ecc5", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9364531,"math_prob":0.9061219,"size":23508,"snap":"2019-51-2020-05","text_gpt3_token_len":5408,"char_repetition_ratio":0.16865215,"word_repetition_ratio":0.042266823,"special_character_ratio":0.22086099,"punctuation_ratio":0.084033616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95755726,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,4,null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T14:43:17Z\",\"WARC-Record-ID\":\"<urn:uuid:2fcff602-aab8-4935-b15a-47e199d558b8>\",\"Content-Length\":\"152466\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:539b24fc-e7b5-4d7c-ae0f-b4868b5a5291>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f0fc2ed-2197-47bc-9243-5117a82cdd99>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://profmattstrassler.com/articles-and-posts/the-higgs-particle/the-standard-model-higgs/seeking-and-studying-the-standard-model-higgs-particle/\",\"WARC-Payload-Digest\":\"sha1:QMPHF5IR5JTNL7FO657O6TUMXBX5LGWY\",\"WARC-Block-Digest\":\"sha1:SIVW7ZNVKRUJ3CTKQ53WT2HO23TIBUOR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594603.8_warc_CC-MAIN-20200119122744-20200119150744-00044.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/3768
[ "Pedestrian\n\nDana started at 10:00 from the point A to point B. These points are distant 12 km. Determine how fast Daniel went when the place B arrived at 11:54. Speed express in km/h.\n\nResult\n\nv =  6.316 km/h\n\nSolution:", null, "Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):", null, "Be the first to comment!", null, "Next similar math problems:\n\n1. Two cars 2", null, "Two cars started from two positions 87 km distant at the same time in opposite directions at speeds 81 km/h and 75 km/h. What was the distance between them after 2 hours 50 minutes of driving.\n2. Cars 6", null, "At 9:00 am two cars started from the same town and traveled at a rate of 35 miles per hour and the other car traveled at a rate of 40 miles per hour. After how many hours will the cars be 30 miles apart?\n3. Cyclist vs car", null, "Cyclist rode out of the city at 18 km/h. 1 hour 30 minutes behind him started car and caught up with the cyclist in 50 minutes. How fast was driving the car? Where (what kilometer) from the city car overtook a cyclist?\n4. Speed of sound", null, "The average speed of sound is 330 meters per second. Estimate how long it will hear the church bell 1 km away. Calculate the distance from what would hear sound after 10 seconds.\n5. Bus 14", null, "Boatesville is 65.35 kilometers from Stanton. A bus traveling from Stanton is 24.13 kilometers from Boatesville. How far has the bus traveled?\n6. Round-trip", null, "A woman works at a law firm in city A which is about 50 miles from city B. She must go to the law library in city B to get a document. Find how long it takes her to drive​ round-trip if she averages 40 mph.\n7. Average speed", null, "Michal went out of the house by car at speed 98 km/h. He came into the goal place for 270 minutes. Determine the distance between the two places.\n8. Inter city bus", null, "The inter city bus leaves Suva at 10.00am and reaches Nadi at 1.00pm covering a distance of 219km. How long did it take to reach the bus Nadi?\n9. Thunderstorm", null, "The sound travels 1 km in about 3 seconds. How far is the storm if there is a time interval of 8 seconds between lightning and thunder?\n10. Cyclist 9", null, "A cyclist travels at a a speed of 4.25 km per hour. At that rate, how far can he travel in 3.75 hours?\n11. Chocolate", null, "I eat 24 chocolate in 10 days. How many chocolate I eat in 15 days at the same pace?\n12. Timeage", null, "Seven times of my age is 8 less than the largest two-digit number. How old I am?\n13. Pills", null, "If it takes 20 minutes to run a batch of 100 pills how many minutes would it take to run a batch of 50 pills\n14. Tram lines", null, "Trams of five lines driven at intervals of 5,8,10,12 and 15 minutes. At 12 o'clock come out of the station at the same time. About how many hours again all meet? How many times have earch tram pass for this stop?\n15. How old", null, "The student who asked how many years he answered: \"After 10 years I will be twice as old than as I was four years ago. How old is student?\n16. Temperature increase", null, "If the temperature at 9:am is 50 degrees. What is the temperature at 5:00pm if the temperature increases 4 degrees Fahrenheit each hour?\n17. Bed time", null, "Tiffany was 5 years old; her week night bedtime grew by ¼ hour each year. If, at age 18, her curfew time is 11pm, what was her bed time when she was 5 years old?" ]
[ null, "https://www.hackmath.net/tex/b82/b8275dd6d564c.svg", null, "https://www.hackmath.net/hashover/images/first-comment.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/31/t_2631.jpg", null, "https://www.hackmath.net/thumb/4/t_5204.jpg", null, "https://www.hackmath.net/thumb/22/t_2622.jpg", null, "https://www.hackmath.net/thumb/62/t_2662.jpg", null, "https://www.hackmath.net/thumb/95/t_5495.jpg", null, "https://www.hackmath.net/thumb/64/t_7864.jpg", null, "https://www.hackmath.net/thumb/87/t_3787.jpg", null, "https://www.hackmath.net/thumb/10/t_5110.jpg", null, "https://www.hackmath.net/thumb/68/t_6168.jpg", null, "https://www.hackmath.net/thumb/3/t_6803.jpg", null, "https://www.hackmath.net/thumb/4/t_2504.jpg", null, "https://www.hackmath.net/thumb/45/t_1645.jpg", null, "https://www.hackmath.net/thumb/51/t_5051.jpg", null, "https://www.hackmath.net/thumb/81/t_2981.jpg", null, "https://www.hackmath.net/thumb/5/t_5205.jpg", null, "https://www.hackmath.net/thumb/24/t_7424.jpg", null, "https://www.hackmath.net/thumb/46/t_5546.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95406294,"math_prob":0.95384103,"size":3271,"snap":"2019-43-2019-47","text_gpt3_token_len":858,"char_repetition_ratio":0.097030915,"word_repetition_ratio":0.0062305294,"special_character_ratio":0.26689085,"punctuation_ratio":0.101928376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96180815,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T04:10:17Z\",\"WARC-Record-ID\":\"<urn:uuid:38bf22e7-b7cb-4a3a-bd06-296c0a24c848>\",\"Content-Length\":\"21136\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12f670c7-2ca9-4065-a899-49621da861ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:261cf0e5-35d1-4406-a12b-08e0242ac62e>\",\"WARC-IP-Address\":\"104.24.104.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/3768\",\"WARC-Payload-Digest\":\"sha1:YG2WLZKEREJLETTALP4VW57HCN2S4ZE5\",\"WARC-Block-Digest\":\"sha1:7GCMWBDQHJYDHNPEQDTW5PL37OWVYV5N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987798619.84_warc_CC-MAIN-20191022030805-20191022054305-00005.warc.gz\"}"}
https://www.learnsteps.com/burning-ropes-puzzle/?shared=email&msg=fail
[ "# Burning the ropes – Puzzle\n\nBurning the ropes – Puzzle\n4.3 (85.71%) 7 votes\n\n#### Puzzle:\n\nThere are two ropes both are of different length. These ropes can be burnt in equal amount of time that is 1 hour each. You cannot fold the ropes. The rate of burning of rope is constant. Now the puzzle is you have to burn both the ropes exactly in 45 mins. What will you do?", null, "Light one of the rope from both sides and the other from one side only. Now that we know rope which burns from both side will burn in half an hour. So lets wait for half an hour. Here we don’t have clock half an hour is measured only by burning of first rope.\n\nAfter exactly half an hour the first ropes will be burned totally, at the exact time the second rope will be burned half which means it can burn for another half hour. Now light the other side of the second rope. Its burning time will be half that is half of half hour in 15 min.\n\nThus total time taken to burn both the ropes is 45 Min.", null, "•", null, "" ]
[ null, "https://i1.wp.com/www.learnsteps.com/wp-content/uploads/2016/12/DSCN7230b.jpg", null, "https://secure.gravatar.com/avatar/7deeb9cef823cc23ce744ebd45d9ea46", null, "https://secure.gravatar.com/avatar/f02de20d0ed075bb85d4e8033ec4f62a", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9490656,"math_prob":0.8522174,"size":916,"snap":"2019-35-2019-39","text_gpt3_token_len":208,"char_repetition_ratio":0.1633772,"word_repetition_ratio":0.011049724,"special_character_ratio":0.22816594,"punctuation_ratio":0.08374384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98447573,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T03:44:05Z\",\"WARC-Record-ID\":\"<urn:uuid:b241f082-7426-4ab8-a69a-14bfab54e8f9>\",\"Content-Length\":\"78970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcb7f6c8-6857-47eb-b8fb-5edf5675e286>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdb6b9fa-335b-416d-b078-87efa1001685>\",\"WARC-IP-Address\":\"139.59.80.223\",\"WARC-Target-URI\":\"https://www.learnsteps.com/burning-ropes-puzzle/?shared=email&msg=fail\",\"WARC-Payload-Digest\":\"sha1:YRKBWM5TF2MLASP4MSFOFJ4OHBZDRNOG\",\"WARC-Block-Digest\":\"sha1:MYGU4RTGSGXNBQA6GXJEWPKJ3XHKETXJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574182.31_warc_CC-MAIN-20190921022342-20190921044342-00422.warc.gz\"}"}
https://www.groundai.com/project/dynamic-sparse-graph-for-efficient-deep-learning/
[ "Dynamic Sparse Graph for Efficient Deep Learning\n\n# Dynamic Sparse Graph for Efficient Deep Learning\n\nLiu Liu, Lei Deng, Xing Hu, Maohua Zhu, Yufei Ding, Yuan Xie\nUniversity of California, Santa Barbara\n{liu_liu, leideng, huxing, maohuazhu, yufeiding, yuanxie}@ucsb.edu\n&Guoqi Li\nTsinghua University\[email protected]\n###### Abstract\n\nWe propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference. The great success of DNNs motivates the pursuing of lightweight models for the deployment onto embedded devices. However, most of the previous studies optimize for inference while neglect training or even complicate it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights in inference; (ii) the dynamic activation makes previous sparse acceleration via one-off optimization on fixed weight invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small amount of neurons with high selectivity at each iteration via a dimension-reduction search (DRS) and obtains the BN compatibility via a double-mask selection (DMS). Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.\n\nDynamic Sparse Graph for Efficient Deep Learning\n\nLiu Liu, Lei Deng, Xing Hu, Maohua Zhu, Yufei Ding, Yuan Xie University of California, Santa Barbara {liu_liu, leideng, huxing, maohuazhu, yufeiding, yuanxie}@ucsb.edu Guoqi Li Tsinghua University [email protected]\n\n\\@float\n\nnoticebox[b]Preprint. Work in progress.\\end@float\n\n## 1 Introduction\n\nDeep Neural Networks (DNNs) have been achieving impressive progress in a wide spectrum of domains [2, 3, 4, 5, 6], while the models are extremely memory- and compute-intensive. The high representational and computational cost motivates many researchers to investigate approaches on improving the execution performance, including matrix or tensor decomposition [7, 8, 9, 10, 11], data quantization [12, 13, 14, 15, 16, 17, 18], and network pruning [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. However, most of the previous work aims at inference while the challenges for reducing the representational and computational cost of training are not well-studied. Although some work demonstrate acceleration in distributed training [32, 33, 34], we target at single-node optimization, and our method can also boost training in a distributed fashion.\n\nDNN training, which demands much more hardware resources in terms of both memory capacity and computation volume, is far more challenging than inference. Firstly, activation data in training will be stored for backpropagation, significantly increasing the memory consumption. Secondly, training iteratively updates model parameters using mini-batched stochastic gradient descent (SGD). We almost always expect larger mini-batches for higher throughput (Figure 1(a)), faster convergence, and better accuracy . However, memory capacity is often the limiting factor (Figure 1(b)); it may cause performance degradation or even make large models with deep structures or targeting high-resolution vision tasks hard to train [3, 36].\n\nIt is difficult to apply existing sparsity techniques towards inference phase to training phase because of the following reasons: 1) Prior arts mainly compress the pre-trained and fixed weight parameters to reduce the off-chip memory access in inference [37, 38], while instead, the dynamic neuronal activations turn out to be the crucial bottleneck in training , making the prior inference-oriented methods inefficient. Besides, during training we need to stash vast batched activation space for the backward gradient calculation. Therefore, neuron activations creates a new memory bottleneck (Figure 1(c)). In this paper, we will sparsify the neuron activations for training compression. 2) The existing inference accelerations usually add extra optimization problems onto the critical path [26, 27, 22, 25, 40, 31], i.e., ‘complicated training simplified inference’, which embarrassingly complicates the training phase. 3) Moreover, previous studies reveal that batch normalization (BN) is crucial for improving accuracy and robustness (Figure 1(d)) through activation fusion across different samples within one mini-batch for better representation [41, 42]. BN almost becomes a standard training configuration; however, inference-oriented methods seldom discuss BN and treat BN parameters as scaling and shift factors in the forward pass. We further find that BN will damage the sparsity due to the activation reorganization (Figure 1(e)). Since this work targets both training and inference, the BN compatibility problem should be addressed.", null, "Figure 1: Comprehensive motivation illustration. (a) Using larger mini-batch size helps improve throughput until it is compute-bound; (b) Limited memory capacity on a single computing node prohibits the use of large mini-batch size; (c) Neuronal activation dominates the representational cost when mini-batch size becomes large; (d) BN is indispensable for maintaining accuracy; (e) Upper and lower one are the feature maps before and after BN, respectively. However, using BN damages the sparsity through information fusion; (f) There exists such great representational redundancy that more than 80% of activations are close to zero.\n\nFrom the view of information representation, the activation of each neuron reflects its selectivity to the current stimulus sample , and this selectivity dataflow propagates layer by layer forming different representation levels. Fortunately, there is much representational redundancy, for example, lots of neuron activations for each stimulus sample are so small and can be removed (Figure 1(f)). Motivated by above comprehensive analysis regarding memory and compute, we propose to search critical neurons for constructing a sparse graph at every iteration. By activating only a small amount of neurons with a high selectivity, we can significantly save memory and simplify computation with tolerable accuracy degradation. Because the neuron response dynamically changes under different stimulus samples, the sparse graph is variable. The neuron-aware dynamic and sparse graph (DSG) is fundamentally distinct from the static one in previous work on permanent weight pruning since we never prune the graph but activate part of them each time. Therefore, we maintain the model expressive power as much as possible. A graph selection method, dimension-reduction search (DRS), is designed for both compressible activations with element-wise unstructured sparsity and accelerative vector-matrix multiplication (VMM) with vector-wise structured sparsity. Through double-mask selection (DMS) design, it is also compatible with BN. We can use the same selection pattern and extend our method to inference. In a nutshell, we propose a compressible and accelerative DSG approach supported by DRS and DMS methods. It can achieve 1.7-4.5x memory compression and 2.3-4.4x computation reduction with minimal accuracy loss. This work simultaneously pioneers the approach towards efficient online training and offline inference, which can benefit the deep learning in both the cloud and the edge.\n\n## 2 Approach\n\nOur method forms DSGs for different inputs, which are accelerative and compressive, as shown in Figure2(a). On the one hand, choosing a small number of critical neurons to participate in computation, DSG can reduce the computational cost by eliminating calculations of non-critical neurons. On the other hand, it can further reduce the representational cost via compression on sparsified activations. Different from previous methods using permanent pruning, our approach does not prune any neuron and the associated weights; instead, it activates a sparse graph according to the input sample at each iteration. Therefore, DSG does not compromise the expressive power of the model.", null, "Figure 2: (a) Illustration of dynamic and sparse graph (DSG); (b) Dimension reduction search (DRS) for construction of DSG; (c) Double mask selection (DMS) for BN compatibility.\n\nConstructing DSG needs to determine which neurons are critical. A naive approach is to select critical neurons according to the output activations. If the output neurons have a small or negative activation value, i.e., not selective to current input sample, they can be removed for saving representational cost. Because these activations will be small or absolute zero after the following ReLU non-linear function (i.e., ReLU() max(0, )), it’s reasonable to set all of them to be zero. However, this naive approach requires computations of all VMM operations within each layer before the selection of critical neurons, which is very costly.\n\n### 2.1 Dimension Reduction Search\n\nTo avoid the costly VMM operations in the mentioned naive selection, we propose an efficient method, i.e., dimension reduction search (DRS), to estimate the importance of output neurons. As shown in Figure2(b), we first reduce the dimensions of X and W, and then execute the lightweight VMM operations in the low-dimension space at minimal cost. After that, we estimate the neuron importance according to the virtual output activations. Then, a binary mask can be produced in which the zeros represent the non-critical neurons with small activations that are removable. We use a top-k search method that only keeps largest k neurons, where an inter-sample threshold sharing mechanism is leveraged to greatly reduce the search cost 111Implementation details are shown in Appendices.. Note that k is determined by the output size and a pre-configured sparsity parameter . Then we can compute the accurate activations of the critical neurons in the original high-dimension space and avoid calculating the non-critical neurons. Thus, besides the compressive sparse activations, DRS can further save a significant amount of expensive operations in high-dimensional space.", null, "Figure 3: Compressive and accelerative DSG. (a) original dense convolution; (b) converted accelerative VMM operation. (c) Zero-value compression.\n\nIn this way, a vector-wise structured sparsity can be achieved, as shown in Figure 3(b). The ones in the selection mask (marked as colored blocks) denote the critical neurons, and the non-critical ones can bypass the memory access and computation of a corresponding whole column of the weight matrix. Furthermore, the generated sparse activations can be compressed via the zero-value compression [43, 44, 45] (Figure 3(c)). Consequently, it is critical to reduce the vector dimension but keep the activations calculated in low-dimension space as accurate as possible, compared to the ones in original high-dimension space.\n\n### 2.2 Sparse Random Projection for Efficient DRS\n\nNotations: Each CONV layer has a four dimensional weight tensor (, , , ), where is the number of filters, i.e., the number of output feature maps (FMs); is the number of input FMs; (, ) represents the kernel size. Thus, the CONV layer in Figure 3(a) can be converted to many VMM operations, as shown in Figure 3(b). Each row in the matrix of input FMs is the activations from a sliding window across all input FMs (), and after the VMM operation with the weight matrix () it can generate points at the same location across all output FMs. Further considering the size of each output FM and the mini-batch size of , the whole rows of VMM operations has a computational complexity of . For the FC layer with input neurons and output neurons, this complexity is . Note that here we switch the order of BN and ReLU layer from ‘CONV/FC-BN-ReLU’ to ‘CONV/FC-ReLU-BN’, because it’s hard to determine the activation value of the non-critical neurons if the following layer is BN (this value is zero for ReLU). As shown in previous work, this reorganization could bring better accuracy .\n\nFor the sake of simplicity, we just consider the operation for each sliding window in the CONV layer or the whole FC layer under one single input sample as a basic optimization problem. The generation of each output activation requires an inner product operation, as follows:\n\n yj=φ(⟨Xi,Wj⟩) (1)\n\nwhere is the -th row in the matrix of input FMs (for the FC layer, there is only one X vector), is the -th column of the weight matrix , and is the neuronal transformation (e.g., ReLU function, here we abandon bias). Now, according to equation (1), the preservation of the activation is equivalent to preserve the inner product.\n\nWe introduce a dimension-reduction lemma, named Johnson–Lindenstrauss Lemma (JLL) , to implement the DRS with inner product preservation. This lemma states that a set of points in a high-dimensional space can be embedded into a low-dimensional space in such a way that the Euclidean distances between these points are nearly preserved. Specifically, given , a set of points in (i.e., all and ), and a number of , there exists a linear map such that\n\n (1−ϵ)∥Xi−Wj∥2≤∥f(Xi)−f(Wj)∥2≤(1+ϵ)∥Xi−Wj∥2 (2)\n\nfor any given and pair, where is a hyper-parameter to control the approximation error, i.e., larger larger error. When is sufficiently small, one corollary from JLL is the following norm preservation [48, 49]:\n\n P[ (1−ϵ)∥Z∥2≤∥f(Z)∥2≤(1+ϵ)∥Z∥2 ]≥1−O(ϵ2) (3)\n\nwhere Z could be any or , and denotes a probability. It means the vector norm can be preserved with a high probability controlled by . Given these basics, we can further get the inner product preservation:\n\n P[ |⟨f(Xi),f(Wj)⟩−⟨Xi,Wj⟩|≤ϵ ]≥1−O(ϵ2). (4)\n\nThe detailed proof can be found in the Appendices.\n\nRandom projection [48, 50, 51] is widely used to construct the linear map . Specifically, the original -dimensional vector is projected to a -dimensional () one, using a random matrix R. Then we can reduce the dimension of all and by\n\n f(Xi)=1√kRXi∈Rk,  f(Wj)=1√kRWj∈Rk. (5)\n\nThe random projection matrix R can be generated from Gaussian distribution . In this paper, we adopt a simplified version, termed as sparse random projection [51, 52, 53] with\n\n P(Rpq=√s)=12s;  P(Rpq=0)=1−1s;  P(Rpq=−√s)=12s (6)\n\nfor all elements in R. This R only has ternary values that can remove the multiplications during projection, and the remained additions are very sparse. Therefore, the projection overhead is negligible compared to other high-precision operations involving multiplication. Here we set with 67% sparsity in statistics.\n\nEquation (4) indicates the low-dimensional inner product can still approximate the original high-dimensional one in equation (1) if the reduced dimension is sufficiently high. Therefore, it is possible to calculate equation (1) in a low-dimensional space for activation estimation, and select the important neurons. As shown in Figure 3(b), each sliding window dynamically selects its own important neurons for the calculation in high-dimensional space, marked in red and blue as two examples. Figure 4 visualizes two sliding windows in a real network to help understand the dynamic DRS process. Here the neuronal activation vector ( length) is reshaped to a matrix for clarity. Now For the CONV layer, the computational complexity is only , which is less than the original high-dimensional computation with complexity because we usually have . For the FC layer, we also have .\n\n### 2.3 DMS for BN Compatibility\n\nTo deal with the important but intractable BN layer, we propose a double-mask selection (DMS) method presented in Figure 2(c). After the DRS estimation, we produce a sparsifying mask that removes the unimportant neurons. The ReLU activation function can maintain this mask by inhibiting the negative activation (actually all the activations from the CONV layer or FC layer after the DRS mask are positive under reasonably large sparsity). However, the BN layer will damage this sparsity through inter-sample activation fusion. To address this issue, we copy the same DRS mask and directly use it on the BN output. It is straightforward but reasonable because we find that although BN causes the zero activation to be non-zero (Figure 1(f)), these non-zero activations are still very small and can also be removed. This is because BN just scales and shifts the activations that won’t change the relative sort order. In this way, we can achieve fully sparse activation dataflow.\n\n## 3 Experimental Results\n\n### 3.1 Experiment Setup\n\nThe overall training algorithm is presented in the Appendices. Going through the dataflow where the red color denotes the sparse tensors, a widespread sparsity in both the forward and backward passes is demonstrated. Regarding the evaluation network models, we use LeNet and a multi-layered perceptron (MLP) on a small-scale FASHION dataset , VGG8 [12, 14]/ResNet8 (a customized ResNet-variant with 3 residual blocks and 2 FC layers)/ResNet20/WRN-8-2 on medium-scale CIFAR10 dataset , VGG8 and WRN-8-2 on another medium-scale CIFAR100 dataset , and ResNet18 /WRN-18-2 /VGG16 on the large-scale ImageNet dataset as workloads. The programming framework is PyTorch and the training platform is based on NVIDIA Titan Xp GPU. We adopt the zero-value compression method [43, 44, 45] for memory compression and MKL compute library on Intel Xeon CPU for the acceleration evaluation.\n\n### 3.2 Accuracy Analysis\n\nIn this section, we provide a comprehensive analysis regarding the influence of sparsity on accuracy and explore the robustness of MLP and CNN, the graph selection strategy, the BN compatibility, and the importance of width and depth.\n\nAccuracy using DSG. Figure 5(a) presents the accuracy curves on small and medium scale models by using DSG under different sparsity levels. Three conclusions are observed: 1) The proposed DSG affects little on the accuracy when the sparsity is 60%, and the accuracy will present an abrupt descent with sparsity larger than 80%. 2) Usually, the ResNet model family is more sensitive to the sparsity increasing since fewer parameters than the VGG family. For the VGG8 on the CIFAR10 dataset, the accuracy loss is still within 0.5% when sparsity reaches 80%. 3) Compared to MLP, CNN can tolerate more sparsity. Figure 5(b) further shows the results on large scale ImageNet models. Because training large model is time costly, we only present several experimental points. Consistently, the VGG16 shows better robustness compared to the ResNet18, and the WRN with wider channels on each layer performs much better than the other two models. We will discuss the topic of width and depth later.\n\nGraph Selection Strategy. To investigate the influence of graph selection strategy, we repeat the sparsity vs. accuracy experiments on CIFAR10 dataset under different selection methods. Two baselines are used here: the Oracle one that keeps the neurons with top-k activations after the whole VMM computation at each layer, and the random one that randomly selects neurons to keep. The results are shown in Figure 5(c), in which we can see that our DRS and the Oracle one perform much better than the random selection under high sparsity condition. Moreover, DRS achieves nearly the same accuracy with the oracle top-k selection, which indicates the proposed random projection method can find an accurate activation estimation in low-dimensional space. In detail, Figure 5(d) shows the influence of parameter that reflects the degree of dimension reduction. Lower can approach the original inner product more accurately, that brings higher accuracy but at the cost of more computation for graph selection since less dimension reduction. With , the accuracy loss is within 1% even if the sparsity reaches 80%.", null, "Figure 5: Comprehensive analysis on sparsity vs. accuracy. (a) & (b) Accuracy using DSG, and the influence of (c) the graph selection strategy, (d) the degree of dimension reduction, (e) the DMS for BN compatibility, (f) the network depth and width.\n\nBN Compatibility. Figure 5(e) focuses the BN compatibility issue. Here we use DRS for the graph sparsifying, and compare three cases: 1) removing the BN operation and using single mask; 2) keeping BN and using only single mask (the first one in Figure 2(c)); 3) keeping BN and using double masks (i.e. DMS). The one without BN is very sensitive to the graph ablation, which indicates the importance of BN for training. Comparing the two with BN, the DMS even achieves better accuracy since the regularization effect. This observation indicates the effectiveness of the proposed DMS method for simultaneously recovering the sparsity damaged by the BN layer and maintaining the accuracy.\n\nWidth or Depth. Furthermore, we investigate an interesting comparison regarding the network width and depth, as shown in Figure 5(f). On the training set, WRN with fewer but wider layers demonstrates more robustness than the deeper one with more but slimmer layers. On the validation set, the results are a little more complicated. Under small and medium sparsity, the deeper ResNet performs better (1%) than the wider one. While when the sparsity increases substantial (75%), WRN can maintain the accuracy better. This indicates that, in medium-sparse space, the deeper network has stronger representation ability because of the deep structure; however, in ultra-high-sparse space, the deeper structure is more likely to collapse since the accumulation of the pruning error layer by layer. In reality, we can determine which type of model to use according to the sparsity requirement. In Figure 5(b) on ImageNet, the reason why WRN-18-2 performs much better is that it has wider layers without reducing the depth.\n\n### 3.3 Representational Cost Reduction\n\nThis section presents the benefits from DSG on representational cost. We measure the memory consumption over five CNN benchmarks on both the training and inference phases. For data compression, we use zero-value compression algorithm [43, 44, 45]. Figure 6 shows the memory optimization results, where the model name, mini-batch size, and the sparsity are provided. In training, besides the parameters, the activations across all layers should be stashed for the backward computation. Consistent with the observation mentioned above that the neuron activation beats weight to dominate memory overhead, which is different from the previous work on inference. We can reduce the overall representational cost by average 1.7x (2.72 GB), 3.2x (4.51 GB), and 4.2x (5.04 GB) under 50%, 80% and 90% sparsity, respectively. If only considering the neuronal activation, these ratios could be higher up to 7.1x. The memory overhead for the selection masks is minimal (2%).\n\nDuring inference, only memory space to store the parameters and the activations of the layer with maximum neuron amount is required. The benefits in inference are relatively smaller than that in training since weight is the dominant memory. On the ResNet152, the extra mask overhead even offsets the compression benefit under 50% sparsity, whereas, we can still achieve up to average 7.1x memory reduction for activations and 1.7x for overall memory. Although the compression is limited for inference, it still can achieve noticeable acceleration that will be shown in the next section. Moreover, reducing costs for both training and inference is our major contribution.\n\n### 3.4 Computational Cost Reduction\n\nWe assess the results on reducing the computational cost of both training and inference. As shown in Figure 7, both the forward and backward pass consume much fewer operations, i.e., multiply-and-accumulate (MAC). On average, 1.4x (5.52 GMACs), 1.7x (9.43 GMACs), and 2.2x (10.74 GMACs) operation reduction are achieved in training under 50%, 80% and 90% sparsity, respectively. For inference with only forward pass, the results increase to 1.5x (2.26 GMACs), 2.8x (4.22 GMACs), and 3.9x (4.87 GMACs), respectively. The overhead of the DRS computation in low-dimensional space is relatively larger (6.5% in training and 19.5% in inference) compared to the mask overhead in memory cost. Note that the training demonstrates less improvement than the inference, which is because the acceleration of the backward pass is partial. The error propagation is accelerative, but the weight gradient generation is not because of the irregular sparsity that is hard to obtain practical acceleration. Although the computation of this part is also very sparse with much fewer operations222See Algorithm 1 in the Appendices, we don’t include its GMACs reduction for practical concern.\n\nFinally, we evaluate the execution time on CPU using Intel MKL kernels (). As shown in Figure 8(a), we evaluate the execution time of these layers after the DRS selection on VGG-8. Comparing to VMM baselines, our approach can achieve 2.0x, 5.0x, and 8.5x speedup under 50%, 80%, and 90% sparsity, respectively. When the baselines change to GEMM (general matrix multiplication), the speedup decreases to 0.6x, 1.6x, and 2.7x, respectively. The reason is that DSG generates dynamic vector-wise sparsity, which is not well supported by GEMM.\n\nWe further compare our approach with smaller dense models which could be another way to reduce computational cost. As shown in Figure 8(b), comparing with dense baseline, our approach can reduce training time with little accuracy loss. Even though the equivalent smaller dense models with the same effective nodes, i.e., reduced MACs, save more training time, the accuracy is much worse than our DSG approach.", null, "Figure 8: (a) Layer-wise execution time comparison; (b) Validation accuracy vs. training time of different models: large-sparse ones and smaller-dense ones with equivalent MACs.\n\n## 4 Related Work\n\nDNN Compression achieved up to 90% weight sparsity by randomly removing connections. [20, 21] reduced the weight parameters by pruning the unimportant connections. However, the compression is mainly achieved on FC layers, that makes it ineffective for CONV layer-dominant networks, e.g., ResNet. Moreover, it is difficult to obtain practical speedup due to the irregularity of the element-wise sparsity. Even if designing ASIC from scratch [37, 38], the index overhead is enormous and it only works under high sparsity. These methods usually require a pre-trained model, iterative pruning and fine-tune retrain, that targets inference optimization.\n\nDNN Acceleration Different from compression, the acceleration work consider more on the sparse pattern. In contrast to the fine-grain compression, coarse-grain sparsity was further proposed to optimize the execution speed. Channel-level sparsity was gained by removing unimportant weight filters , training penalty coefficients , or introducing group-lasso optimization [25, 24, 40]. introduced a L2-norm group-lasso optimization for both medium-grain sparsity (row/column) and coarse-grain weight sparsity (channel/filter/layer). introduced the Taylor expansion for neuron pruning. However, it just benefits the inference acceleration, and the extra solving of the optimization problem usually makes the training more complicated. demonstrated predicting important neurons then bypassed the unimportant ones via low-precision pre-computation on small networks. leveraged the randomized hashing to predict the important neurons. However, the hashing search aims at finding neurons whose weight bases are similar to the input vector, which cannot estimate the inner product accurately thus will probably cause significant accuracy loss on large models. used a straightforward top-k pruning on the back propagated errors for training acceleration. But they only simplified the backward pass and presented the results on tiny FC models. Furthermore, the BN compatibility problem that is very important for large-model training still remains untouched. pruned the gradients for accelerating distributed training, but the focus is on multi-node communication rather than the computation topic discussed in this paper.\n\n## 5 Conclusion\n\nIn this work, we propose DSG (dynamic and sparse graph) structure for efficient DNN training and inference through a DRS (dimension reduction search) sparsity forecast for compressive memory and accelerative execution and a DMS (double-mask selection) for BN compatibility without sacrificing model’s expressive power. It can be easily extended to the inference by using the same selection pattern after training. Our experiments over various benchmarks demonstrate significant memory saving (4.5x for training and 1.7x for inference) and computation reduction (2.3x for training and 4.4x for inference). Through significantly boosting both forward and backward passes in training, as well as in inference, DSG promises efficient deep learning in both the cloud and edge.\n\n## References\n\n• Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, p. 436, 2015.\n• K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.\n• K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.\n• O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Transactions on audio, speech, and language processing, vol. 22, no. 10, pp. 1533–1545, 2014.\n• J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” arXiv preprint, vol. 1612, 2016.\n• Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al., “Google’s neural machine translation system: Bridging the gap between human and machine translation,” arXiv preprint arXiv:1609.08144, 2016.\n• J. Xue, J. Li, D. Yu, M. Seltzer, and Y. Gong, “Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 6359–6363, IEEE, 2014.\n• A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, “Tensorizing neural networks,” in Advances in Neural Information Processing Systems, pp. 442–450, 2015.\n• T. Garipov, D. Podoprikhin, A. Novikov, and D. Vetrov, “Ultimate tensorization: compressing convolutional and fc layers alike,” arXiv preprint arXiv:1611.03214, 2016.\n• Y. Yang, D. Krompass, and V. Tresp, “Tensor-train recurrent neural networks for video classification,” arXiv preprint arXiv:1707.01786, 2017.\n• J. M. Alvarez and M. Salzmann, “Compression-aware training of deep networks,” in Advances in Neural Information Processing Systems, pp. 856–867, 2017.\n• M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1,” arXiv preprint arXiv:1602.02830, 2016.\n• S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, “Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients,” arXiv preprint arXiv:1606.06160, 2016.\n• L. Deng, P. Jiao, J. Pei, Z. Wu, and G. Li, “Gxnor-net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework,” Neural Networks, vol. 100, pp. 49–58, 2018.\n• C. Leng, H. Li, S. Zhu, and R. Jin, “Extremely low bit neural network: Squeeze the last bit out with admm,” arXiv preprint arXiv:1707.09870, 2017.\n• W. Wen, C. Xu, F. Yan, C. Wu, Y. Wang, Y. Chen, and H. Li, “Terngrad: Ternary gradients to reduce communication in distributed deep learning,” in Advances in Neural Information Processing Systems, pp. 1508–1518, 2017.\n• S. Wu, G. Li, F. Chen, and L. Shi, “Training and inference with integers in deep neural networks,” arXiv preprint arXiv:1802.04680, 2018.\n• J. L. McKinstry, S. K. Esser, R. Appuswamy, D. Bablani, J. V. Arthur, I. B. Yildiz, and D. S. Modha, “Discovering low-precision networks close to full-precision networks for efficient embedded inference,” arXiv preprint arXiv:1809.04191, 2018.\n• A. Ardakani, C. Condo, and W. J. Gross, “Sparsely-connected neural networks: towards efficient vlsi implementation of deep neural networks,” arXiv preprint arXiv:1611.01427, 2016.\n• S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in neural information processing systems, pp. 1135–1143, 2015.\n• S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015.\n• Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang, “Learning efficient convolutional networks through network slimming,” in 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2755–2763, IEEE, 2017.\n• H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning filters for efficient convnets,” arXiv preprint arXiv:1608.08710, 2016.\n• Y. He, X. Zhang, and J. Sun, “Channel pruning for accelerating very deep neural networks,” in International Conference on Computer Vision (ICCV), vol. 2, p. 6, 2017.\n• J.-H. Luo, J. Wu, and W. Lin, “Thinet: A filter level pruning method for deep neural network compression,” arXiv preprint arXiv:1707.06342, 2017.\n• W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li, “Learning structured sparsity in deep neural networks,” in Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.\n• P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruning convolutional neural networks for resource efficient inference,” 2016.\n• X. Sun, X. Ren, S. Ma, and H. Wang, “meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting,” arXiv preprint arXiv:1706.06197, 2017.\n• R. Spring and A. Shrivastava, “Scalable and sustainable deep learning via randomized hashing,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 445–454, ACM, 2017.\n• Y. Lin, C. Sakr, Y. Kim, and N. Shanbhag, “Predictivenet: An energy-efficient convolutional neural network via zero prediction,” in Circuits and Systems (ISCAS), 2017 IEEE International Symposium on, pp. 1–4, IEEE, 2017.\n• T. Zhang, K. Zhang, S. Ye, J. Li, J. Tang, W. Wen, X. Lin, M. Fardad, and Y. Wang, “Adam-admm: A unified, systematic framework of structured weight pruning for dnns,” arXiv preprint arXiv:1807.11091, 2018.\n• Y. Lin, S. Han, H. Mao, Y. Wang, and W. J. Dally, “Deep gradient compression: Reducing the communication bandwidth for distributed training,” arXiv preprint arXiv:1712.01887, 2017.\n• P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, “Accurate, large minibatch sgd: training imagenet in 1 hour,” arXiv preprint arXiv:1706.02677, 2017.\n• Y. You, Z. Zhang, C. Hsieh, J. Demmel, and K. Keutzer, “Imagenet training in minutes,” CoRR, abs/1709.05011, 2017.\n• S. L. Smith, P.-J. Kindermans, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv preprint arXiv:1711.00489, 2017.\n• Y. Wu and K. He, “Group normalization,” arXiv preprint arXiv:1803.08494, 2018.\n• S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, “Eie: efficient inference engine on compressed deep neural network,” in Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on, pp. 243–254, IEEE, 2016.\n• S. Han, J. Kang, H. Mao, Y. Hu, X. Li, Y. Li, D. Xie, H. Luo, S. Yao, Y. Wang, et al., “Ese: Efficient speech recognition engine with sparse lstm on fpga,” in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 75–84, ACM, 2017.\n• A. Jain, A. Phanishayee, J. Mars, L. Tang, and G. Pekhimenko, “Gist: Efficient data encoding for deep neural network training,” in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), pp. 776–789, IEEE, 2018.\n• L. Liang, L. Deng, Y. Zeng, X. Hu, Y. Ji, X. Ma, G. Li, and Y. Xie, “Crossbar-aware neural network pruning,” arXiv preprint arXiv:1807.10816, 2018.\n• A. S. Morcos, D. G. Barrett, N. C. Rabinowitz, and M. Botvinick, “On the importance of single directions for generalization,” arXiv preprint arXiv:1803.06959, 2018.\n• S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.\n• Y. Zhang, J. Yang, and R. Gupta, “Frequent value locality and value-centric data cache design,” ACM SIGPLAN Notices, vol. 35, no. 11, pp. 150–159, 2000.\n• N. Vijaykumar, G. Pekhimenko, A. Jog, A. Bhowmick, R. Ausavarungnirun, C. Das, M. Kandemir, T. C. Mowry, and O. Mutlu, “A case for core-assisted bottleneck acceleration in gpus: enabling flexible data compression with assist warps,” in ACM SIGARCH Computer Architecture News, vol. 43, pp. 41–53, ACM, 2015.\n• M. Rhu, M. O’Connor, N. Chatterjee, J. Pool, Y. Kwon, and S. W. Keckler, “Compressing dma engine: Leveraging activation sparsity for training deep neural networks,” in High Performance Computer Architecture (HPCA), 2018 IEEE International Symposium on, pp. 78–91, IEEE, 2018.\n• D. Mishkin and J. Matas, “All you need is a good init,” arXiv preprint arXiv:1511.06422, 2015.\n• W. B. Johnson and J. Lindenstrauss, “Extensions of lipschitz mappings into a hilbert space,” Contemporary mathematics, vol. 26, no. 189-206, p. 1, 1984.\n• K. K. Vu, Random projection for high-dimensional optimization. PhD thesis, Université Paris-Saclay, 2016.\n• I. S. Kakade and G. Shakhnarovich, “Cmsc 35900 (spring 2009) large scale learning lecture: 2 random projections,” 2009.\n• N. Ailon and B. Chazelle, “The fast johnson–lindenstrauss transform and approximate nearest neighbors,” SIAM Journal on computing, vol. 39, no. 1, pp. 302–322, 2009.\n• D. Achlioptas, “Database-friendly random projections,” in Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pp. 274–281, ACM, 2001.\n• E. Bingham and H. Mannila, “Random projection in dimensionality reduction: applications to image and text data,” in Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 245–250, ACM, 2001.\n• P. Li, T. J. Hastie, and K. W. Church, “Very sparse random projections,” in Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 287–296, ACM, 2006.\n• Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.\n• H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.\n• S. Zagoruyko and N. Komodakis, “Wide residual networks,” arXiv preprint arXiv:1605.07146, 2016.\n• A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.\n• J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255, IEEE, 2009.\n• E. Wang, Q. Zhang, B. Shen, G. Zhang, X. Lu, Q. Wu, and Y. Wang, “Intel math kernel library,” in High-Performance Computing on the Intel® Xeon Phi™, pp. 167–188, Springer, 2014.\n• Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang, “Soft filter pruning for accelerating deep convolutional neural networks,” arXiv preprint arXiv:1808.06866, 2018.\n\n## Appendix A DRS for Inner Product Preservation\n\nTheorem 1. Given a set of points in (i.e. all and ), and a number of , there exist a linear map and a , for we have\n\n P[ |⟨f(Xi),f(Wj)⟩−⟨Xi,Wj⟩|≤ϵ ]≥1−O(ϵ2). (7)\n\nfor all and .\n\nProof. According to the definition of inner product and vector norm, any two vectors a and b satisfy\n\n {⟨a,b⟩=(∥a∥2+∥%b∥2−∥a−b∥2)/2⟨a,b⟩=(∥a+b∥2−∥a% ∥2−∥b∥2)/2. (8)\n\nIt is easy to further get\n\n ⟨a,b⟩=(∥a+b∥2−∥a% −b∥2)/4. (9)\n\nTherefore, we can transform the target in equation (7) to\n\n | ⟨f(Xi),f(Wj)⟩−⟨Xi,Wj⟩ |=| ∥f(Xi)+f(Wj)∥2−∥f(Xi)−f(%Wj)∥2−∥Xi+Wj∥2+∥Xi−% Wj∥2 |/4≤| ∥f(Xi)+f(Wj)∥2−∥Xi+W% j∥2 |/4+| ∥f(Xi)−f(Wj)∥2−∥Xi−Wj∥2 |/4, (10)\n\nwhich is also based on the fact that . Now recall the definition of random projection in equation (5) of the main text\n\n f(Xi)=1√kRXi∈Rk,  f(Wj)=1√kRWj∈Rk. (11)\n\nSubstituting equation (11) into equation (10), we have\n\n | ⟨f(Xi),f(Wj)⟩−⟨Xi,Wj⟩ |≤| ∥1√kRXi+1√k%RWj∥2−∥Xi+Wj∥2 |/4+| ∥1√kRXi−1√kR%Wj∥2−∥Xi−Wj∥2 |/4=| ∥1√kR(Xi+Wj)∥2−∥Xi+Wj∥2 |/4+| ∥1√kR(Xi−Wj)∥2−∥Xi−Wj∥2 |/4=| ∥f(Xi+Wj)∥2−∥Xi+Wj∥2 |/4+| ∥f(Xi−Wj)∥2−∥Xi−Wj∥2 |/4. (12)\n\nFurther recalling the norm preservation in equation (3) of the main text: there exist a linear map and a , for we have\n\n P[ (1−ϵ)∥Z∥2≤∥f(Z)∥2≤(1+ϵ)∥Z∥2 ]≥1−O(ϵ2). (13)\n\nSubstituting the equation (13) into equation (12) yields\n\n P[ | ∥f(Xi+Wj)∥2−∥% Xi+Wj∥2 |/4+| ∥f(Xi−Wj)∥2−∥Xi−Wj∥2 |/4...≤ϵ4(∥Xi+Wj∥2+∥Xi−Wj∥2)=ϵ2(∥Xi∥2+∥Wj∥2) ]...≥P( | ∥f(Xi+Wj)∥2−∥Xi+Wj∥2 |/4≤ϵ4∥Xi+Wj∥2 )...×P( | ∥f(Xi−Wj)∥2−∥Xi−Wj∥2 |/4≤ϵ4∥Xi−Wj∥2 )...≥[1−O(ϵ2)]⋅[1−O(ϵ2)]=1−O(ϵ2).. (14)\n\nCombining equation (12) and (14), finally we have\n\n (15)\n\nIt can be seen that, for any given and pair, the inner product can be preserved if the is sufficiently small. Actually, previous work [51, 52, 48] discussed a lot on the random projection for various big data applications, here we re-organize these supporting materials to form a systematical proof. We hope this could help readers to follow this paper. In practical experiments, there exists a trade-off between the dimension reduction degree and the recognition accuracy. Smaller usually brings more accurate inner product estimation and better recognition accuracy while at the cost of higher computational complexity with larger , and vice versa. Because the and are not strictly bounded, the approximation may suffer from some noises. Anyway, from the abundant experiments in the main text, the effectiveness of our approach for training dynamic and sparse neural networks has been validated.\n\n## Appendix B Implementation and overhead\n\nThe training algorithm for producing DSG is presented in Algorithm 1. Furthermore, the generation procedure of the critical neuron mask based on the virtual activations estimated in low-dimensional space is presented in Figure 9, which is a typical top-k search. The k value is determined by the activation size and the desired sparsity . To reduce the search cost, we calculate the first input sample within the current mini-batch and then conduct a top-k search over the whole virtual activation matrix for obtaining the top-k threshold under this sample. The remaining samples share the top-k threshold from the first sample to avoid costly searching overhead. At last, the overall activation mask is generated by setting the mask element to one if the estimated activation is larger than the top-k threshold and setting others to zero. In this way, we greatly reduce the search cost. Note that, for the FC layer, each sample is a vector.", null, "Figure 9: DRS mask generation: using a top-k search on the first input sample X(1) within each mini-batch to obtain a top-k threshold which is shared by the following samples. Then, we apply thresholding on the whole output activation tensor to generate the importance mask for this mini-batch.\n\nFurthermore, we investigate the influence of the on the DRS computation cost for importance estimation. We take several layers from the VGG8 on CIFAR10 as a study case, as shown in Table 1. With larger, the DRS can achieve lower dimension with much fewer operations. The average dimension reduction is 3.6x (), 8.5x (), 13.3x (), and 16.5x (). The resulting operation reduction is 3.1x, 7.1x, 11.1x, and 13.9x, respectively.\n\nYou are adding the first comment!\nHow to quickly get a good reply:\n• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.\n• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.\n• Your comment should inspire ideas to flow and help the author improves the paper.\n\nThe better we are at sharing our knowledge with each other, the faster we move forward.\nThe feedback must be of minimum 40 characters and the title a minimum of 5 characters", null, "", null, "", null, "" ]
[ null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx1.png", null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx2.png", null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx3.png", null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx5.png", null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx8.png", null, "https://storage.googleapis.com/groundai-web-prod/media%2Fusers%2Fuser_85421%2Fproject_297529%2Fimages%2Fx9.png", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/loader_30.gif", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/comment_icon.svg", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/about/placeholder.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87303853,"math_prob":0.8808821,"size":41601,"snap":"2019-51-2020-05","text_gpt3_token_len":10349,"char_repetition_ratio":0.14335169,"word_repetition_ratio":0.021197008,"special_character_ratio":0.25516215,"punctuation_ratio":0.1950048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9657278,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T10:29:46Z\",\"WARC-Record-ID\":\"<urn:uuid:0c7b67f3-04f5-45d9-bd4e-0440ddd4d261>\",\"Content-Length\":\"573289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d2d7ab3f-6a7b-4537-9c3c-ff982a2324bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f9eae03-8b64-47be-a026-db0b050dac12>\",\"WARC-IP-Address\":\"35.186.203.76\",\"WARC-Target-URI\":\"https://www.groundai.com/project/dynamic-sparse-graph-for-efficient-deep-learning/\",\"WARC-Payload-Digest\":\"sha1:G6QVBTT4H2CIEZZT7YOGCT4DWMNOZS7Z\",\"WARC-Block-Digest\":\"sha1:YYMOYJQNZ6WZ6ITLUETPQSE54ZKUDN4R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541319511.97_warc_CC-MAIN-20191216093448-20191216121448-00242.warc.gz\"}"}