url
stringlengths 5
5.75k
| fetch_time
int64 1,368,854,400B
1,726,893,797B
| content_mime_type
stringclasses 3
values | warc_filename
stringlengths 108
138
| warc_record_offset
int32 873
1.79B
| warc_record_length
int32 587
790k
| text
stringlengths 30
1.05M
| token_count
int32 16
919k
| char_count
int32 30
1.05M
| metadata
stringlengths 439
444
| score
float64 2.52
5.13
| int_score
int64 3
5
| crawl
stringclasses 93
values | snapshot_type
stringclasses 2
values | language
stringclasses 1
value | language_score
float64 0.05
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
http://www.extension.iastate.edu/Pages/eccrops/cu060523.html | 1,498,442,202,000,000,000 | text/html | crawl-data/CC-MAIN-2017-26/segments/1498128320666.34/warc/CC-MAIN-20170626013946-20170626033946-00241.warc.gz | 520,559,299 | 6,352 | ## East-Central and Southeast Iowa Crop Information
Welcome!
CORN
May 23, 2006
Revised and updated May 24, 2006
CORN
Replant Decisions
For various reasons, some corn fields do not have the stand intended at planting time.
The following table can help in making replant decisions:
Influence of planting date and plant population on corn grain yields (percent)
Stand X 1,000 April 20 - May 5 May 13 - May 19 May 26 - June 1 June 10 - June 16 June 24 - June 28 28 – 32 100 99 90 68 52 24 94 93 85 64 49 20 81 80 73 55 42 16 74 73 67 50 38 12 68 67 61 46 35
Numerous gaps of up to 4-6 feet can reduce yields by an additional 5-6%.
Assuming a cost of \$10 per acre to destroy the existing stand, \$10 per acre to replant, and \$30 per acre for seed, the cost of a re-plant is about \$50 per acre. At \$2 per bushel, that is 25 bushels of corn. If the average yield is 180 bushels, then 25 bushels equals 14% of the yield. Using the above chart, a re-plant in the May 26 – June 1 range has a yield of 90% of an earlier planting, so there is 10% yield loss due to lateness of planting. Adding the loss from late planting to the loss due to the cost of replanting (10% + 14%), the total loss is 24%. Looking at the above chart, you will note that a uniform stand of 16,000 has a loss of 26%, which is essentially a break-even with performing a replant. And more recent research is suggesting the penalty for later planting is greater than the above chart suggests. It is not likely that it will pay to replant stands of 14,000 – 16,000 or more, if the remaining stand is fairly uniform. Remember that the uniformity of the stand needs to be considered in making decisions.
For more information, see Pm-1885 "Corn Planting Guide" http://www.extension.iastate.edu/Publications/PM1885.pdf and NCR 344 "Uneven Emergence in Corn" http://www.extension.iastate.edu/Publications/PM1885.pdf.
Insects
I continue to receive scattered reports of black cutworm feeding at treatable levels and also an occasional report of corn flea beetles being observed. Continue to monitor corn fields for these and other pests. Black cutworm and corn flea beetle information can be found at http://www.extension.iastate.edu/Pages/eccrops/insect.html.
Along Highway 34 (BurlingtonMount Pleasant area) stalk borers are starting to move from grassy areas to adjacent corn plants. Over the next few days, stalk borers farther north will begin their journey from grass to corn. If you observe many dead grass heads, you may want to consider a management strategy. See http://www.extension.iastate.edu/Pages/eccrops/insect.html for details.
SOYBEAN
Emergence Issues - Crusting
Crusting is causing some soybean emergence problems. To rotary hoe or not to rotary hoe – that is the question. Rotary hoeing may aid emergence, but it can also cause much damage. The best way to decide about rotary hoeing is to run the implement for a few yards through the field and then go back and see how much damage is being done to the crop that is trying to emerge. If little damage is being done to the crop, continue to rotary hoe. If damage is severe, leave the field and see if the plants can make it on their own.
Hopefully the predicted rains will alleviate the crusting problems.
Emergence Issues – Slow Emergence
Many are concerned with the slowness of soybeans to emerge after planting. The cold weather of last week is most likely the primary reason. If the seeds / seedlings look basically healthy, most likely everything will be alright, but it would be best to continue to monitor as slow emergence gives more time for insects and diseases to attack.
Replant Decisions
The following table can help in making replant decisions:
Plants/A Yield (bu) 150,000 45.1 125,000 44.8 100,000 45.1 75,000 44.2 50,000 41.6 75,000 with 1 foot gaps 43.6 75,000 with 2 foot gaps 41.5
Note that a uniform stand of 50,000 and a stand of 75,000 with 2 foot gaps produced only 3.5 bushels less than the top yield.
The above chart and additional information on soybean replanting decisions can be found in PM – 1851”Soybean Replant Decisions” http://www.extension.iastate.edu/Publications/PM1851.pdf.
Bean Leaf Beetles
Bean Leaf Beetles can easily be found in most soybean fields. While I have yet to be in a field that needed to be sprayed, fields should continue to be monitored. Threshold information can be found at on pages 120 – 122 of the May 15, 2006 ICM Newsletter
If you have any questions, please feel free to contact the Iowa State University Extension Office.
East Central and Southeast Iowa Crops Home Page ISU Extension and Outreach ISU ISU Extension Agronomy ISU Agronomy
Calendar Search Jobs Feedback Internet Resources State Extension Sites in Other States Local Extension Offices in Other States ISU Extension and Outreach Policies
Last Update: May 23, 2006
To subscribe or unsubscribe to newsletters or for questions or comments, contact: Virgil Schmitt [email protected]
Nondiscrimination Statement and Information Disclosures | 1,287 | 5,042 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2017-26 | latest | en | 0.915417 |
https://springerplus.springeropen.com/articles/10.1186/s40064-015-1487-4 | 1,675,925,097,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00234.warc.gz | 541,490,142 | 71,328 | # Mathematical modeling of a multi-product EMQ model with an enhanced end items issuing policy and failures in rework
## Abstract
This study uses mathematical modeling to examine a multi-product economic manufacturing quantity (EMQ) model with an enhanced end items issuing policy and rework failures. We assume that a multi-product EMQ model randomly generates nonconforming items. All of the defective are reworked, but a certain portion fails and becomes scraps. When rework process ends and the entire lot of each product is quality assured, a cost reduction n + 1 end items issuing policy is used to transport finished items of each product. As a result, a closed-form optimal production cycle time is obtained. A numerical example demonstrates the practical usage of our result and confirms a significant savings in stock holding and overall production costs as compared to that of a prior work (Chiu et al. in J Sci Ind Res India, 72:435–440 2013) in the literature.
## Background
Mathematical modeling is used in this study to examine a multi-product EMQ model with rework failures and an enhanced cost reduction end items issuing policy. The EMQ model made use of a mathematical technique to balance the setup and holding costs incurred in a production cycle, and derive most economic manufacturing quantity that minimizes the long run average system costs per unit time (Taft 1918). The assumption of traditional EMQ model includes a perfect manufacturing process for a single product, and a continuous finished product distribution policy. Although the assumptions of the EMQ model are simple and somehow unrealistic, its concept along with solution procedure has since been extensively applied to the fields of inventory control and production management (Hadley and Whitin 1963; Silver et al. 1998; Nahmias 2009; Battini et al. 2010a; Andriolo et al. 2014; Azzi et al. 2014; Glock et al. 2014). In order to increase machine utilization, the vendors in manufacturing sector often fabricate multiple products in sequence on a single machine. Rosenblatt and Finger (1983) considered a single machine multi-item production problem, whereas the machine was an electrochemical machining system, and its outputs are impact sockets of different sizes for power wrenches. They used a grouping procedure for various different products along with a modified version of an existing algorithm to confirm that the cycle times are the multiples of the shortest cycle time. Federgruen and Katalan (1998) examined stochastic economic batch scheduling problems with periodic base-stock policies, where all products are fabricated according to a given periodic item-order. They proposed some effective heuristics to minimize system-wide costs for such a periodic item-sequence production. Muramatsu et al. (2013) studied a multi-item multi-process dynamic lot size scheduling problem with setup time and general product structure. Various heterogeneous decision features such as lot sizing, lot sequencing, dispatching are considered. A near-optimal solution procedure was proposed to determine the decision features involved in this problem simultaneously. Caggiano et al. (2009) proposed a method for computing the channel fill rates in a multi-product, multi-echelon service parts distribution system. A simulation approach was employed to study multi-item three-echelon production- distribution system. They showed that the estimation errors are insignificant over a wide range of base stock level vectors and they also presented an enhanced approximation method to the problem. Jodlbauer and Reitner (2012) investigated a stochastic make-to-order multi-product manufacturing system under a common cycle policy. Effects of the safety stock, demand, cycle time, and setup time on the service level and on the total system costs were investigated. Papers that related to various aspects of planning and optimization issues on multi-item production can also be referred to Lin et al. (2014); Wu et al. (2014).
For the purpose of lowering producer’s inventory holding cost as well as minimizing the expected overall system cost, we extend Chiu et al.’s (2013) work by replacing their n fixed quantity end items delivery policy with an enhanced n + 1 issuing policy. Under the new policy, one extra delivery of end items is made in producer’s production uptime to satisfy customers’ demands during production uptime and rework time. Then, upon completion of the rework, additional n installments (fixed quantity) of end items are shipped at a fixed time interval. The objectives of this study are to determine optimal common cycle time that minimizes the long-run average system cost per unit time, and investigate effects of random defective rate, rework failures, and the enhanced end items issuing policy on the optimal operating cycle time as well as on the expected system costs per unit time.
## Problem description and mathematical modeling
We use mathematical modeling to examine a multi-product EMQ system with a cost reduction multi-distribution policy and rework failures. Consider there are L products to be fabricated in sequence on a single machine and there is a x i portion of nonconforming items being randomly produced at a rate d 1i during the production of product i (where $$i = 1,{ 2}, \ldots ,L$$). All items are screened and cost of quality inspection is included in unit production cost C i . Under the normal operation (which shortage are not permitted), the constant production rate P 1i for product i must satisfies $$(P_{{ 1 {\text{i}}}} - d_{{ 1 {\text{i}}}} - \lambda_{\text{i}} ) > 0$$, where λ i denotes demand rate for product i per year, and d 1i = x i P 1i. All nonconforming items are reworked at a rate of P 2i each cycle immediately following the end of regular production process. The additional rework cost is C Ri per item. A failure-in-rework rate φi exists during the reworking process, the production rate of scrap items during rework d 2i = φi P 2i, and those that fail in repair are discarded at a unit disposal cost C Si. A specific n + 1 multi-shipment policy is proposed in an attempt to reduce the vendor’s stock holding cost as compared to n multi-delivery policy used in Chiu et al. (2013). Under the proposed n + 1 delivery policy, the purpose of the first shipment of finished goods is to meet buyers’ product demands during vendor’s uptime and reworking time. Then upon completion of rework process, n fixed quantity installments of the end items are distributed to buyers at a fixed time interval t n (see Fig. 1).
Figure 2 depicts producer’s on-hand inventory level of perfect quality items of product i in the proposed n + 1 delivery model (in blue lines) and the expected reduction in vendor’s stock holding costs (yellow shaded area) in comparison with that of Chiu et al. (2013). Cost related variables used in our analysis comprise the setup cost K i per cycle, unit holding cost h i , unit holding cost h 1i during rework, the fixed shipping cost K 1i for product i per delivery, and unit shipping cost C Ti for each product i. Additional variables also include the following: T = common production cycle length, our decision variable, Q i = batch size per cycle for product i, n = number of installments (fixed quantity) of the finished lot to be shipped to buyers per cycle, H 1i = on-hand inventory in units of product i for meeting buyer’s demand during uptime t 1i and reworking time t 2i , H 2i = maximum level of on-hand inventory of product i when the regular production ends, H i = maximum level of on-hand inventory in units of product i when the rework process ends, t i = time required for producing enough items to meet demand of product i during vendor’s uptime t 1i and reworking time t 2i , t 1i = production uptime for product i, t 2i = the reworking time for product i, t 3i = the delivery time for product i, t ni = fixed interval of time between each installment of finished product i being delivered during t 3i, I(t) = level of on-hand perfect quality items at time t, I S(t) i = level of on-hand scrap items of product i at time t, TC(Q i ) = total production-inventory-delivery cost per cycle for product i, E[TCU(T)] = the expected system costs per unit time for L products in the proposed system.
From Fig. 1, the following equations (for $$i = 1,{ 2}, \ldots ,L$$) can be obtained directly:
$$T = t_{1i} + t_{2i} + t_{3i} = \frac{{Q_{i} \left[ {1 - \varphi_{i} E\left( {x_{i} } \right)} \right]}}{{\lambda_{i} }}$$
(1)
$$H_{1i} = \lambda_{i} (t_{1i} + t_{2i} )$$
(2)
$$H_{2i} = \left( {P_{1i} - d_{1i} } \right)\left( {t_{1i} - t_{i} } \right)$$
(3)
$$H_{i} = H_{2i} + \left( {P_{2i} - d_{2i} } \right)t_{2i}$$
(4)
$$t_{i} = \frac{{H_{1i} }}{{P_{1i} - d_{1i} }} = \frac{{\lambda_{i} (t_{1i} + t_{2i} )}}{{P_{1i} - d_{1i} }}$$
(5)
$$t_{1i} = \frac{{Q_{i} }}{{P_{1i} }} = \frac{{H_{1i} + H_{2i} }}{{P_{1i} - d_{1i} }}$$
(6)
$$t_{2i} = \frac{{H_{i} - H_{2i} }}{{P_{2i} - d_{2i} }}$$
(7)
$$t_{3i} = T - \left( {t_{1i} + t_{2i} } \right) = nt_{ni}$$
(8)
$$\lambda = \sum\limits_{i = 1}^{L} {\lambda_{i} }$$
(9)
The level of on-hand inventory of scrap items in the proposed model is depicted in Fig. 3 and the following equations (for $$i = 1,{ 2}, \ldots ,L$$) can be obtained:
$$t_{2i} = \frac{{x_{i} Q_{i} }}{{P_{2i} }}$$
(10)
$$d_{1i} t_{1i} = x_{i} Q_{i}$$
(11)
The variable holding cost for finished items of product i during delivery time t 3i are
$$h_{i} \left( {\frac{n - 1}{2n}} \right)H_{i} t_{3i}$$
(12)
The fixed and variable transportation costs for product i per cycle are
$$\left( {n + 1} \right)K_{1i} + C_{{{\text{T}}i}} Q_{i}$$
(13)
The production-inventory-delivery cost per cycle for L products comprises the setup cost, the variable manufacturing, reworking, and disposal costs (Fig. 3), the fixed and variable shipping cost, the holding cost during t 1i, t 2i, and t 3i . Therefore, TC(Q i) becomes
\sum\limits_{i = 1}^{L} {TC\left( {Q_{i} } \right)} = \sum\limits_{i = 1}^{L} {\left\{ \begin{aligned} K_{i} + C_{i} Q_{i} + C_{{{\text{R}}i}} \left( {x_{i} Q_{i} } \right) + C_{{{\text{S}}i}} \left( {\varphi_{i} x_{i} Q_{i} } \right) + \left( {n + 1} \right)K_{1i} + C_{{{\text{T}}i}} Q_{i} \left( {1 - \varphi_{i} x_{i} } \right) + h_{1i} \left[ {\frac{{d_{1i} t_{1i} }}{2}\left( {t_{2i} } \right)} \right] \hfill \\ + h_{i} \left[ {\frac{{H_{1i} }}{2}\left( {t_{i} } \right) + \frac{{H_{2i} }}{2}\left( {t_{1i} - t_{i} } \right) + \frac{{H_{2i} + H_{i} }}{2}\left( {t_{2i} } \right) + \frac{{d_{1i} t_{1i} }}{2}\left( {t_{1i} } \right) + \left( {\frac{n - 1}{2n}} \right)H_{i} t_{3i} } \right] \hfill \\ \end{aligned} \right\}}
(14)
Because the defective rate x is assumed to be a random variable with a known probability density function, in order to take the randomness of x into account, the expected value of x is used in this study. By substituting all parameters from Eqs. (1) to (13) in Eq. (14), and with further derivations the expected E[TCU(T)] can be obtained as follows:
E\left[ {TCU\left( T \right)} \right] = \sum\limits_{i = 1}^{L} {\left\{ \begin{aligned} &\frac{{C_{i} \lambda_{i} }}{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}} + \frac{{K_{i} }}{T} + C_{{{\text{R}}i}} \lambda_{i} \left[ {\frac{{E\left[ {x_{i} } \right]}}{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}}} \right] + C_{{{\text{S}}i}} \lambda_{i} \left[ {\frac{{\varphi_{i} E\left[ {x_{i} } \right]}}{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}}} \right] + C_{Ti} \lambda_{i} + \frac{{\left( {n + 1} \right)K_{1i} }}{T} \\&+ \frac{{h_{i} T\lambda_{i}^{2} }}{2}\frac{1}{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]^{2} }}\left\{ \begin{aligned} \lambda_{i} \left[ {\frac{1}{{P_{1i} }} + \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}} \right]^{2} \left[ {\frac{{2\lambda_{i} }}{{P_{1i} (1 - E\left[ {x_{i} } \right])}}} \right] - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] \hfill \\ + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]\left[ {\frac{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}}{{\lambda_{i} }} - \frac{2}{{P_{1i} }} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}} \right] \hfill \\ - \left( {1 + \frac{1}{n}} \right)\left[ {\lambda_{i} \left[ {\frac{1}{{P_{1i} }} + \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}} \right]^{2} } \right] \hfill \\ \end{aligned} \right\} \\ &\quad \quad\quad+ \frac{{h_{1i} T\lambda_{i}^{2} }}{{2P_{2i} }}\frac{{E\left( {x_{i} } \right)^{2} }}{{\left[ {1 - \varphi_{i} E\left( {x_{i} } \right)} \right]^{2} }} \end{aligned} \right\}}
(15)
Let
\begin{aligned} E_{0i} &= \frac{1}{{1 - \varphi_{i} E\left[ {x_{i} } \right]}}; \, E_{1i} = \frac{{E\left[ {x_{i} } \right]}}{{1 - \varphi_{i} E\left[ {x_{i} } \right]}}; \, E_{2i} = \left[ {\frac{1}{{P_{1i} }} + \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}} \right]; \\ E_{3i} &= \frac{{2\lambda_{i} }}{{P_{1i} \left[ {1 - E\left[ {x_{i} } \right]} \right]}}; \, E_{4i} = \left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]\left[ {\frac{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}}{{\lambda_{i} }} - \frac{2}{{P_{1i} }} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}} \right]. \end{aligned}
(16)
Then Eq. (15) becomes
E [ {TCU ( T)}] = \sum\limits_{i = 1}^{L} {\left\{ \begin{aligned} C_{i} \lambda_{i} E_{0i} + \frac{{K_{i} }}{T} + C_{{{\text{R}}i}} \lambda_{i} E_{1i} \left[ {\frac{{E\left[ {x_{i} } \right]}}{{\left[ {1 - \varphi_{i} E\left[ {x_{i} } \right]} \right]}}} \right] + C_{{{\text{S}}i}} \lambda_{i} \varphi_{i} E_{1i} + C_{Ti} \lambda_{i} + \frac{{\left( {n + 1} \right)K_{1i} }}{T} + \frac{{h_{1i} T\lambda_{i}^{2} E_{1i}^{2} }}{{2P_{2i} }} \hfill \\ + \frac{{h_{i} T\lambda_{i}^{2} }}{2}\left\{ {\lambda_{i} E_{2i}^{2} E_{3i} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)E_{4i} - \left( {1 + \frac{1}{n}} \right)\lambda_{i} E_{2i}^{2} } \right\} \hfill \\ \end{aligned} \right\}}
(17)
## Derivation of the optimal cycle time
Before derivation of the optimal common production cycle time T*, one should prove that the expected cost function E[TCU(T)] is convex. By differentiating E[TCU(T)] with respect to T gives the following first and second derivatives:
\frac{{dE\left[ {TCU(T)} \right]}}{dT} = \sum\limits_{i = 1}^{L} {\left\{ \begin{aligned}& - \frac{{K_{i} }}{{T^{2} }} - \frac{{\left( {n + 1} \right)K_{1i} }}{{T^{2} }} + \frac{{h_{1i} \lambda_{i}^{2} E_{1i}^{2} }}{{2P_{2i} }} \\ &+ \frac{{h_{i} \lambda_{i}^{2} }}{2}\left\{ {\lambda_{i} E_{2i}^{2} E_{3i} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)E_{4i} - \left( {1 + \frac{1}{n}} \right)\lambda_{i} E_{2i}^{2} } \right\} \hfill \\ \end{aligned} \right\}}
(18)
$$\frac{{d^{2} E\left[ {TCU(T)} \right]}}{{dT^{2} }} = \sum\limits_{i = 1}^{L} {\left\{ {\frac{{2\left[ {K_{i} + \left( {n + 1} \right)K_{1i} } \right]}}{{T^{3} }}} \right\}}$$
(19)
It can be seen that Eq. (19) is positive, since K i , n, K 1i, and T are all positive. Because the second derivative of E[TCU(T)] > 0, one confirms that E[TCU(T)] is convex for all T different from zero. It follows that by letting the first derivative of E[TCU(T)] = 0, one can derive the optimal common production cycle time T*. Let
\frac{{dE\left[ {TCU(T)} \right]}}{dT} = \sum\limits_{i = 1}^{L} {\left\{ \begin{aligned} &- \frac{{K_{i} }}{{T^{2} }} - \frac{{\left( {n + 1} \right)K_{1i} }}{{T^{2} }} + \frac{{h_{1i} \lambda_{i}^{2} E_{1i}^{2} }}{{2P_{2i} }} \\ &+ \frac{{h_{i} \lambda_{i}^{2} }}{2}\left\{ {\lambda_{i} E_{2i}^{2} E_{3i} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)E_{4i} - \left( {1 + \frac{1}{n}} \right)\lambda_{i} E_{2i}^{2} } \right\} \end{aligned} \right\}} = 0
(20)
or
\frac{1}{{T^{2} }}\sum\limits_{i = 1}^{L} {\left[ {K_{i} + \left( {n + 1} \right)K_{1i} } \right]} = \sum\limits_{i = 1}^{L} {\lambda_{i}^{2} \left\{ {\frac{{h_{1i} E_{1i}^{2} }}{{2P_{2i} }} + \frac{{h_{i} }}{2}\left[ \begin{aligned} \lambda_{i} E_{2i}^{2} E_{3i} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] \hfill \\ + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)E_{4i} - \left( {1 + \frac{1}{n}} \right)\lambda_{i} E_{2i}^{2} \hfill \\ \end{aligned} \right]} \right\}}
(21)
Therefore, one has T* as follows:
$$T^{*} = \sqrt {\frac{{2\sum\limits_{i = 1}^{L} {\left[ {K_{i} + \left( {n + 1} \right)K_{1i} } \right]} }}{{\sum\limits_{i = 1}^{L} {\lambda_{i}^{2} \left\{ {\frac{{h_{1i} E_{1i}^{2} }}{{P_{2i} }} + h_{i} \left[ {\lambda_{i} E_{2i}^{2} E_{3i} - \frac{{E\left[ {x_{i} } \right]}}{{P_{2i} }}\left[ {\frac{1}{{P_{1i} }} - \left[ {1 - E\left[ {x_{i} } \right]} \right]} \right] + \frac{1}{{P_{1i} }} + \left( {1 - \frac{1}{n}} \right)E_{4i} - \left( {1 + \frac{1}{n}} \right)\lambda_{i} E_{2i}^{2} } \right]} \right\}} }}}$$
(22)
### Capacity and setup time effects on the optimal cycle time
Generally speaking, setup time is relatively short in comparison with uptime. But, if the setup time becomes a factor, one has to ensure that the cycle length is long enough to account for the setup, production, and reworking times of L products (Nahmias 2009). Let S i denote the production setup time for product i, the Eq. (23) must hold.
$$\sum\nolimits_{i = 1}^{L} {\left[ {S_{i} + \left( {Q_{i} /P_{1i} } \right) + \left( {x_{i} Q_{i} /P_{2i} } \right)} \right]} { < }T$$
(23)
Substituting Eq. (1) in Eq. (23) one has
$$T > \frac{{\sum\nolimits_{i = 1}^{L} {S_{i} } }}{{1 - \sum\nolimits_{i = 1}^{L} {\left( {\left( {\lambda_{i} /P_{1i} } \right) + \left( {x_{i} \lambda_{i} /P_{2i} } \right)} \right)} }} = T_{\hbox{min} .}$$
(24)
Therefore, when setup time becomes a significant factor in production planning, one must choose the optimal common cycle length from the maximum of [T*, T min].
## Numerical example
With the purpose of easing the comparison efforts for readers, this section uses the same example as in Chiu et al. (2013) to demonstrate the proposed research result. Reconsider a production plan of producing five different items on a single machine in sequence under a common cycle time policy. Annual demand rates λ i for these five items are 3000, 3200, 3400, 3600, and 3800, respectively. The production rates P 1i are 58,000; 59,000; 60,000; 61,000 and 62,000, respectively. During individual production process, there are random nonconforming rates x i for each item and they follow Uniform distribution over the intervals of [0, 0.05], [0, 010], [0, 0.15], [0, 020], and [0, 0.25], respectively. All nonconforming items produced go through a rework process at the rates P 2i of 1800, 2000, 2200, 2400, and 2600 items per year, respectively; with additional unit rework costs C Ri of $50,$55, $60,$65, and $70, respectively. During the reworking, there are failure-in-rework rates φ i of 0.05, 0.10, 0.15, 0.20, and 0.25, respectively. Additional values of system parameters are given as follows: K i = the setup costs are$3800, $3900,$4000, $4100, and$4200, respectively, C i = production costs per item are $80,$90, $100,$110, and $120, respectively, C Si = disposal costs per item are$20, $25,$30, $35, and$40, respectively, K 1i = fixed costs per delivery are $1800,$1900, $2000,$2100, and $2200, respectively, C Ti = unit transportation costs are$0.1, $0.2,$0.3, $0.4, and$0.5, respectively, n = number of shipments per cycle, it is assumed to be a constant 3 (i.e., n + 1 = 4), h i = unit holding costs are $10,$15, $20,$25, and $30, respectively, h 1i = holding costs per reworked item are$30, $35,$40, $45, and$50, respectively.
The optimal common cycle time T* = 0.7279 (years) can be obtained by applying Eq. (22). Total expected system costs E[TCU(T*)] = $2,013,956 can also be obtained from computation of Eq. (15). Variation of mean defective rate and mean failure-in-rework rate effects on the expected system cost E[TCU(T)] is illustrated in Fig. 4. It is noted that as mean defective rate increases the E[TCU(T)] increases significantly, and as mean failure-in-rework rate increases the system cost E[TCU(T)] increases slightly. As stated earlier, the proposed model aims at reducing vendor’s inventory holding cost for each product i during the production cycle. As a result from this numerical example, the percentage of overall holding cost reduction is 24.7 % (i.e., from$109,476 (Chiu et al. 2013) down to $82,431). Figure 5 demonstrates the percentage of holding cost reduction for five different products, respectively as compared to that of Chiu et al.’s work (where n-delivery policy is adopted). In summary, the proposed study realizes a significant system cost savings of$56,358 (i.e., $2,070,314 −$2,013,956) or 16.09 % of other system interrelated costs (i.e., E[TCU(T)] − λC, which is the expected system cost excludes variable manufacturing cost).
## Conclusions
With an aim at reducing producer’s inventory cost as well as minimizing the expected overall system costs per unit time, this study incorporating an enhanced n + 1 product issuing policy into Chiu et al.’s model (2013), and with the help of mathematical modeling and optimization method a closed-form optimal common production cycle time for the proposed multi-product EMQ model with rework failures was derived. A numerical example is given to demonstrate the applicability of our research result, reveal joint effects of random defective rate and failure-in-rework rate on the optimal policy (refer to Fig. 4), and confirm significant savings in producer’s inventory holding cost (Fig. 5). For future study, an interesting topic will be to include the machine breakdown factor into such a multi-product EMQ model.
## References
• Abdul-Jalbar B, Gutiérrez JM, Sicilia J (2008) Policies for a single-vendor multi-buyer system with finite production rate. Decis Support Syst 46:84–100
• Agnihothri SR, Kenett RS (1995) Impact of defects on a process with rework. Eur J Oper Res 80:308–327
• Andriolo A, Battini D, Grubbström RW, Persona A, Sgarbossa F (2014) A century of evolution from Harris’s basic lot size model: survey and research agenda. Int J Prod Econ 155:16–38
• Azzi A, Battini D, Faccio M, Persona A, Sgarbossa F (2014) Inventory holding costs measurement: a multi-case study. Int J Log Manage 25(1):109–132
• Banerjee A (1986) A joint economic-lot-size model for purchaser and vendor. Decis Sci 17:292–311
• Battini D, Grassi A, Persona A, Sgarbossa F (2010a) Consignment stock inventory policy: methodological framework and model. Int J Prod Res 48(7):2055–2079
• Battini D, Gunasekaran A, Faccio M, Persona A, Sgarbossa F (2010b) Consignment stock inventory model in an integrated supply chain. Int J Prod Res 48(2):477–500
• Battini D, Persona A, Sgarbossa F (2014) A sustainable EOQ model: theoretical formulation and applications. Int J Prod Econ 149:145–153
• Biswas P, Sarker BR (2008) Optimal batch quantity models for a lean production system with in-cycle rework and scrap. Int J Prod Res 46:6585–6610
• Caggiano KE, Jackson PL, Muckstadt JA, Rappold JA (2009) Efficient computation of time-based customer service levels in a multi-item, multi-echelon supply chain: a practical approach for inventory optimization. Eur J Oper Res 199:744–749
• Chiu YSP, Chang HH (2014) Optimal run time for EPQ model with scrap, rework and stochastic breakdowns: a note. Econ Model 37:143–148
• Chiu YSP, Lin HD, Cheng FT, Hwang MH (2013) Optimal common cycle time for a multi-item production system with discontinuous delivery policy and failure in rework. J Sci Ind Res India 72:435–440
• Chiu YSP, Wu MF, Cheng FT, Hwang MH (2014) Replenishment lot sizing with failure in rework and an enhanced multi-shipment policy. J Sci Ind Res India 73:648–652
• Federgruen A, Katalan Z (1998) Determining production schedules under base-stock policies in single facility multi-item production systems. Oper Res 46:883–898
• Glock CH (2011a) Batch sizing with controllable production rates in a multi-stage production system. Int J Prod Res 49:6017–6039
• Glock CH (2011b) A multiple-vendor single-buyer integrated inventory model with a variable number of vendors. Comput Ind Eng 60(1):173–182
• Glock CH (2012a) The joint economic lot size problem: a review. Int J Prod Econ 135(2):671–686
• Glock CH (2012b) Lead time reduction strategies in a single-vendor–single-buyer integrated inventory model with lot size-dependent lead times and stochastic demand. Int J Prod Econ 136(1):37–44
• Glock CH, Grosse EH, Ries JM (2014) The lot sizing problem: a tertiary study. Int J Prod Econ 155:39–51
• Grosfeld-Nir A, Gerchak Y (2002) Multistage production to order with rework capability. Manage Sci 48:652–664
• Hadley G, Whitin TM (1963) Analysis of Inventory Systems. Prentice-Hall, Englewood Cliffs, New Jersey
• Hishamuddin H, Sarker RA, Essam D (2014) A recovery mechanism for a two echelon supply chain system under supply disruption. Econ Model 38:555–563
• Jodlbauer H, Reitner S (2012) Optimizing service-level and relevant cost for a stochastic multi-item cyclic production system. Int J Prod Econ 136:306–317
• Katsaliaki K, Mustafee N, Kumar S (2014) A game-based approach towards facilitating decision making for perishable products: an example of blood supply chain. Expert Syst Appl 41:4043–4059
• Lin GC, Wu A, Gong DC, Huang B, Ma WN (2014) On a multi-product lot scheduling problem subject to an imperfect process with standby modules. Int J Prod Res 52:2243–2257
• Muramatsu K, Warman A, Kobayashi M (2013) A near-optimal solution method of multi-item multi-process dynamic lot size scheduling problem. JSME Int J C Mech Sy 46:46–53
• Murugan M, Selladurai V (2014) Productivity improvement in manufacturing submersible pump diffuser housing using lean manufacturing system. J Eng Res Kuwait 2:164–182
• Nahmias S (2009) Production and operations analysis. McGraw-Hill Co., Inc., New York
• Rodger JA (2014) Application of a fuzzy feasibility Bayesian probabilistic estimation of supply chain backorder aging, unfilled backorders, and customer wait time using stochastic simulation with Markov blankets. Expert Syst Appl 41:7005–7022
• Rosenblatt MJ, Finger N (1983) Application of a grouping procedure to a multi-item production system. Int J Prod Res 21:223–229
• Safaei M (2014) An integrated multi-objective model for allocating the limited sources in a multiple multi-stage lean supply chain. Econ Model 37:224–237
• Sana SS, Chedid JA, Navarro KS (2014) A three layer supply chain model with multiple suppliers, manufacturers and retailers for multiple items. Appl Math Comput 229:139–150
• Silver EA, Pyke DF, Peterson R (1998) Inventory management and production planning and scheduling. Wiley, New York
• Swenseth RS, Godfrey RM (2002) Incorporating transportation costs into inventory replenishment decisions. Int J Prod Econ 77:113–130
• Taft EW (1918) The most economical production lot. Iron Age 101:1410–1412
• Wee HM, Wang WT, Kuo TC, Cheng YL, Huang YD (2014) An economic production quantity model with non-synchronized screening and rework. Appl Math Comput 233:127–138
• Wu MF, Chiu YSP, Sung PC (2014) Optimization of a multi-product EPQ model with scrap and an improved multi-delivery policy. J Eng Res Kuwait 2:103–118
## Authors’ contributions
All authors have contributed to the manuscript equally, and all authors read and approved the final manuscript. All authors read and approved the final manuscript.
### Acknowledgements
Authors deeply appreciate Ministry of Science and Technology of Taiwan for supporting this study under grant number: MOST 102-2410-H-324-015-MY2.
### Competing interests
The authors declare that they have no competing interests.
## Author information
Authors
### Corresponding author
Correspondence to Chung-Li Chou.
## Rights and permissions
Reprints and Permissions
Chiu, YS.P., Sung, PC., Chiu, S.W. et al. Mathematical modeling of a multi-product EMQ model with an enhanced end items issuing policy and failures in rework. SpringerPlus 4, 679 (2015). https://doi.org/10.1186/s40064-015-1487-4
• Accepted:
• Published:
• DOI: https://doi.org/10.1186/s40064-015-1487-4
### Keywords
• Mathematical modeling
• Optimization
• Multi-product system
• Economic manufacturing quantity
• Multi-delivery
• Rework failures | 9,198 | 28,773 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.65625 | 3 | CC-MAIN-2023-06 | latest | en | 0.908447 |
https://assignment-daixie.com/tag/math-8806-algebra/ | 1,721,501,684,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763517515.18/warc/CC-MAIN-20240720174732-20240720204732-00618.warc.gz | 96,970,193 | 14,092 | # 代数代写 Algebra I |MATH 8806 Boston College Assignment
0
Assignment-daixieTM为您提供波士顿学院Boston College MATH 8806 Algebra I 代数学代写代考辅导服务!
## Instructions:
Algebra is a branch of mathematics that deals with symbols and the rules for manipulating these symbols to solve equations and understand relationships between quantities. It involves the use of letters and symbols to represent unknown values, and the use of mathematical operations such as addition, subtraction, multiplication, and division to solve equations and simplify expressions.
Algebra is important in many areas of mathematics, science, engineering, economics, and finance, as it provides a powerful tool for solving problems and understanding relationships between variables. Some of the key concepts in algebra include equations, inequalities, polynomials, functions, and matrices.
Algebraic techniques can be used to solve a wide range of problems, from simple arithmetic calculations to complex systems of equations and models of physical and economic phenomena. Algebra is also used extensively in computer science, cryptography, and other fields that require efficient methods for processing and manipulating large amounts of data.
Let $G$ be a group and let $H$ be a subgroup. Prove that the following are equivalent.
(1) $H$ is normal in $G$.
(2) For every $g \in G, g H^{-1}=H$.
$(1) \Rightarrow (2)$: If $H$ is normal in $G$, then for every $g \in G$ and $h \in H$, we have $ghg^{-1} \in H$. Thus, for any $g \in G$, we have $gHg^{-1} \subseteq H$. On the other hand, $Hg^{-1}g \subseteq g^{-1}Hg$, so $gH^{-1}=Hg^{-1}g \subseteq H$. Hence, $gH^{-1}=H$.
$(2) \Rightarrow (3)$: If $gH^{-1}=H$ for every $g \in G$, then for any $a \in G$ and $h \in H$, we have $ah=ag(g^{-1}h) \in aH$. Therefore, $aH \subseteq Ha$, and the reverse inclusion follows similarly. Thus, $aH=Ha$.
(3) For every $a \in G, a H=H a$.
(4) The set of left cosets is equal to the set of right cosets.
$(3) \Rightarrow (4)$: If $aH=Ha$ for every $a \in G$, then the set of left cosets is ${aH \mid a \in G}$, and the set of right cosets is ${Ha \mid a \in G}$. For any $a,b \in G$, we have $aH=Ha$ and $bH=Hb$, so $aHb=abH=(Ha)b=H(ab)=Hba=bHa$. Hence, the sets of left and right cosets coincide.
$(4) \Rightarrow (1)$: If the set of left cosets is equal to the set of right cosets, then for any $g \in G$ and $h \in H$, there exists $a \in G$ such that $ah=g$. Thus, $g^{-1}ag \in H$ and $g^{-1}hg \in H$, so $ghg^{-1} \in H$ for any $g \in G$ and $h \in H$. Therefore, $H$ is normal in $G$.
$(\Rightarrow)$ Assume that $G/N$ is abelian. Let $x,y\in G$ be arbitrary elements. Then we have $(xN)(yN)=(yN)(xN)$ in $G/N$, which means that $xyN=yxN$. Therefore, we have $xyx^{-1}y^{-1}\in N$. Since $x$ and $y$ were arbitrary, this means that $N$ contains the commutator $[x,y]=xyx^{-1}y^{-1}$ for every pair of elements $x,y\in G$.
$(\Leftarrow)$ Assume that $N$ contains the commutator of every pair of elements of $G$. Let $xN,yN$ be arbitrary elements of $G/N$. We need to show that $(xN)(yN)=(yN)(xN)$, i.e., $xyN=yxN$. Since $N$ is normal in $G$, we have $xNx^{-1}=N$ and $yNy^{-1}=N$. Therefore, we have
\begin{align*} xyN &= xNyN && \text{(since }yN=NyN\text{)} \ &= xNyNy^{-1} && \text{(since }y^{-1}Ny=N\text{)} \ &= xNNy^{-1} && \text{(since }N\text{ is normal)} \ &= xN[y^{-1},x]N && \text{(since }[y^{-1},x]\in N\text{)} \ &= y^{-1}xN[x,y]N && \text{(since }N\text{ is normal)} \ &= y^{-1}NxN[y,x] && \text{(since }[x,y]=[y^{-1},x]^{-1}\in N\text{)} \ &= yNy^{-1}xN && \text{(since }N\text{ is normal)} \ &= yN[x^{-1},y]N && \text{(since }[x^{-1},y]\in N\text{)} \ &= x^{-1}yN[x,y]N && \text{(since }N\text{ is normal)} \ &= x^{-1}NxN[y,x] && \text{(since }[x,y]=[x^{-1},y]^{-1}\in N\text{)} \ &= xNx^{-1}yN && \text{(since }N\text{ is normal)} \ &= yxN && \text{(since }xNx^{-1}=N\text{)} \ &= yNxN && \text{(since }xN=NxN\text{)} \ &= yN(xN) && \text{(since }N\text{ is normal)} \ &= yN(xN), \end{align*}
which shows that $G/N$ is abelian. | 1,370 | 4,009 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.625 | 5 | CC-MAIN-2024-30 | latest | en | 0.853076 |
https://nptel.ac.in/content/storage2/courses/103101001/downloads/Problems/ProblemSet_1/ProblemSet_1.html | 1,586,288,025,000,000,000 | text/html | crawl-data/CC-MAIN-2020-16/segments/1585371805747.72/warc/CC-MAIN-20200407183818-20200407214318-00063.warc.gz | 605,736,780 | 3,902 | ## Self Assessment Quiz
Q1.1. Decomposition of acetaldehyde represented as CH3CHO = CH4 + CO. This is an irreversible second order reaction. The velocity constant at 5200 C is 0.33 m3/kmol-s. Pure acetaldehyde enters at 5200 C and 1 gmol/hr. The product is withdrawn at a rate sufficient to keep total pressure constant at 1 atm. Calculate the following for an exit conversion of 0.5. Sourced from Denbigh
Q1.1.1
Reactor volume space time, in CSTR
Q1.1.2. Reactor volume, space time and actual residence time in PFR
Q1.1.3. Reaction time for a constant volume batch reactor
Q1.1.4. Reaction time for a constant pressure batch reactor
Q1.2. Pure liquid A is fed to a CSTR where it reacts in the liquid phase to form B (liquid) and C (gas). The reaction is elementary with activation energy of 50 kJ/mol. At the temperature of the reaction A has a vapor pressure of 0.1 atm, B is a liquid and C is gas. Gases A and C exit through the top CSTR while liquid A and B exit from bottom. We want 70% conversion of a feed stream 4.54 Kmol/hr of pure A. What is the liquid volume in the reactor?Sourced from Fogler
Hint : Data for Q1.2 : k = 0.4/ hr, ρA= ρB = 800 kg/m3
P = 1.0 atm, MA = 200, MB = 175, MC = 25
Q1.3. An isothermal constant pressure plug flow reactor is designed to give a conversion of 63.2 percent of A to B for the first order gas phase decomposition A = B for a feed of pure A entering at a rate of 150 lit/hr. At the chosen operating temperature the first order rate constant is 5.0/hr.
After installation it was found that conversion of 92.7 percent is desired conversion was reached. This difference was brought to be due to a zone of intense back mixing. Assuming that this zone behaves like a perfectly mixed stirred tank in series and in between two plug reactors. What fraction of the total reactor volume is occupied by this zone?
Q1.4. You are designing a tubular reactor in which a reaction is taking place between a solid A and a liquid B to form a liquid product C.
A(s) + B (liq) = C( liq) , rB = - k CACB
The reaction takes place in the liquid phase. The solid keeps the liquid saturated with A. The solid particles are carried up the reactor at the same velocity as the liquid .
You do not want unreacted solids in the product. What minimum length of reactor would you specify?
DATA for Q1.4:
Diameter reactor: 30 cm ; Density of solid A : 1.6 g/mL ; Density of liquid: 0.8 g/mL
Solubility of A in liquid: 0.75 gmol / L ; Molecular weight: A = 200, B = 100
Velocity constant k = 0.062 lit/gmol s ; Liquid flow rate = 280 lit/s ; Solid rate = 226 kg/s
Q1.5. Bioreactors
Alcoholism is among the major causes of accidents on roads, railways, industries, air traffic etc. The removal of alcohol from the intestinal fluid (CB) into the blood stream (CA) can be modelled as series processes where the rate of absorption of the alcohol into blood is given as :
r = k (CA – CB) with k = 10/ h
and the rate of physiological elimination of alcohol in blood is by a zero order process with rate of β gram alcohol per unit volume blood fluids per unit time.
Typically a bottle of alcohol contains 120g alcohol which is consumed quickly. Estimate the resting time required for person prior to resuming duty after consuming alcohol if the intestinal fluid (CB) and blood fluid (CA) are not to contain concentration of alcohol more than 1.0 g/l as required by safety norms. DATA:Volume of blood fluids: 5.0 lit ; Volume of intestinal fluids: 5.0 lit ;Blood fluid β = 0.19 g/lit-hr. Sourced from Fogler
Q1.6. Natural Bioreactors
The gut volume, gut residence time, body weight and digestion efficiency of two animals are estimated and given below. Study the data and comment on the underlying logic in the design of these natural bioreactors.
Animal W, kg Gut volume, lit Residence time, hr Efficiency Cow 250 40 60 70 Tiger 250 10 6 70
Q1.7. An irreversible first order gas phase reaction A = B is carried out in a packed plug flow reactor . The pressure gradient along the length of the packed bed is constant at
dP/dW = - 0.2 atm/kg
This system gives 86.5 percent conversion. The entering pressure is 20atm and catalyst weight is 60kg. Note as a guide the pressure drop in Metres could be typically 50-150 (v**2/2g). Sourced from Fogler.
Q1.7.1. If a CSTR in which there is no pressure drop is used, what is the conversion expected?
Q1.7.2. If packed bed pressure drop is neglected what is the conversion? | 1,191 | 4,462 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.046875 | 3 | CC-MAIN-2020-16 | latest | en | 0.909498 |
https://www.orthobullets.com/hand/6005/wrist-ligaments-and-biomechanics?hideLeftMenu=true | 1,723,699,581,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722641151918.94/warc/CC-MAIN-20240815044119-20240815074119-00362.warc.gz | 726,852,627 | 44,783 | Please confirm topic selection
Are you sure you want to trigger topic in your Anconeus AI algorithm?
You are done for today with this topic.
Would you like to start learning session with this topic items scheduled for future?
Updated: Mar 7 2024
# Wrist Ligaments & Biomechanics
Images
• Wrist Planes of Motion
• Joints involved
• intercarpal
• Three axes of motion
• flexion-extension
• prono-supination
• Normal and function motion
• flexion (65 normal, 10 functional)
• 40% radiocarpal, 60% midcarpal
• extension (55 normal, 35 functional)
• 66% radiocarpal, 33% midcarpal
• radial deviation (15 normal, 10 functional)
• 90% midcarpal
• ulnar deviation (35 normal, 15 functional)
• 50% radiocarpal, 50% midcarpal
• Wrist Biomechanics
• Three biomechanic concepts have been proposed:
• three links in a chain composed of radius, lunate and capitate
• head of capitate acts as center of rotation
• proximal row (lunate) acts as a unit and is an intercalated segment with no direct tendon attachments
• distal row functions as unit
• efficient motion (less motion at each link)
• strong volar ligaments enhance stability
• more links increases instability of the chain
• scaphoid bridges both carpal rows
• resting forces/radial deviation push the scaphoid into flexion and push the triquetrum into extension
• ulnar deviation pushes the scaphoid into extension
• Column concept
• lateral (mobile) column
• comprises scaphoid, trapezoid and trapezium
• scaphoid is center of motion and function is mobile
• central (flexion-extension) column
• comprises lunate, capitate and hamate
• luno-capitate articulation is center of motion
• motion is flexion/extension
• medial (rotation) column
• comprises triquetrum and distal carpal row
• motion is rotation
• Rows concept
• comprises proximal and distal rows
• scaphoid is a bridge between rows
• motion occurs within and between rows
• Carpal Relationships
• Carpal collapse
• normal ratio of carpal height to 3rd metacarpal height is 0.54
• Ulnar translation
• normal ratio of ulna-to-capitate length to 3rd metacarpal height is 0.30
• distal radius bears 80% of load
• distal ulna bears 20% of load
• ulna load bearing increases with ulnar lengthening
• ulna load bearing decreases with ulnar shortening
• Wrist Ligaments
• The ligaments of the wrist include
• extrinsic ligaments
• bridge carpal bones to the radius or metacarpals
• include volar and dorsal ligaments
• intrinsic ligaments
• originate and insert on carpal bones
• the most important intrinsic ligaments are the scapholunate interosseous ligament and lunotriquetral interosseous ligament
• Characteristics
• volar ligaments are secondary stabilizers of the scaphoid
• volar ligaments are stronger than dorsal ligaments
• dorsal ligaments converge on the triquetrum
• Space of Poirier
• center of a double "V" shape convergence of ligaments
• central weak area of the wrist in the floor of the carpal tunnel at the level of the proximal capitate
• between the volar radioscaphocapitate ligament and volar long radiolunate ligament (radiolunotriquetral ligament)
• wrist palmar flexion
• area of weakness disappears
• wrist dorsiflexion
• area of weakness increases
• in perilunate dislocations, this space allows the distal carpal row to separate from the lunate
• in lunate dislocations, the lunate escapes into this space
• Extrinsic Ligaments
• Volar radiocarpal ligaments
• at risk for injury with excessively large radial styloid
• from radial styloid to capitate, creating a sling to support the waist of the scaphoid
• preserve when doing proximal row carpectomy
• acts as primary stabilizer of the wrist after PRC and prevents ulnar drift
• also called radiolunotriquetral or volar radiolunate ligament
• counteracts ulnar-distal translocation of the lunate
• abnormal in Madelung's deformity
• referred to as Vickers ligament
• Ligament of Testut and Kuentz
• only functions as neurovascular conduit
• not a true ligament
• does not add mechanical strength
• stabilizes lunate
• Volar ulnocarpal ligaments
• ulnotriquetral
• ulnolunate
• ulnocapitate
• Dorsal ligaments
• also referred to as dorsal radiocarpal ligament (DRC)
• must also be disrupted for VISI deformity to form (in combination with rupture of lunotriquetral interosseous ligament rupture)
• dorsal intercarpal (DIC)
• Intrinsic (Interosseous) ligaments
• Proximal row
• scapholunate ligament
• primary stabilizer of scapholunate joint
• composed of 3 components
• dorsal portion
• thickest and strongest
• prevents translation
• volar portion
• prevents rotation
• proximal portion
• no significant strength
• disruption leads to lunate extension when the scaphoid flexes
• creating DISI deformity
• lunotriquetral ligament
• composed of 3 components
• dorsal
• volar
• strongest
• proximal
• disruption leads to lunate flexion when the scaphoid is normally aligned
• creating VISI deformity (in combination with rupture of dorsal radiotriquetral rupture)
• Distal row
• trapeziotrapezoid ligament
• trapeziocapitate ligament
• capitohamate ligament
• Palmar midcarpal
• scaphotrapeziotrapezoid
• scaphocapitate
• triquetralcapitate
• triquetralhamate
Card
1 of 7
Question
1 of 4
Private Note
Attach Treatment Poll
Treatment poll is required to gain more useful feedback from members.
Please enter Question Text
Please enter at least 2 unique options
Please enter at least 2 unique options
Please enter at least 2 unique options | 1,446 | 5,427 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2024-33 | latest | en | 0.814943 |
https://nrich.maths.org/public/leg.php?code=-93&cl=3&cldcmpid=2425 | 1,432,917,262,000,000,000 | text/html | crawl-data/CC-MAIN-2015-22/segments/1432207930256.3/warc/CC-MAIN-20150521113210-00195-ip-10-180-206-219.ec2.internal.warc.gz | 865,718,010 | 8,571 | # Search by Topic
#### Resources tagged with Making and proving conjectures similar to Salinon:
Filter by: Content type:
Stage:
Challenge level:
### There are 37 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Making and proving conjectures
### Rotating Triangle
##### Stage: 3 and 4 Challenge Level:
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
### To Prove or Not to Prove
##### Stage: 4 and 5
A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples.
### Janine's Conjecture
##### Stage: 4 Challenge Level:
Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . .
### DOTS Division
##### Stage: 4 Challenge Level:
Take any pair of two digit numbers x=ab and y=cd where, without loss of generality, ab > cd . Form two 4 digit numbers r=abcd and s=cdab and calculate: {r^2 - s^2} /{x^2 - y^2}.
### Curvy Areas
##### Stage: 4 Challenge Level:
Have a go at creating these images based on circles. What do you notice about the areas of the different sections?
### Multiplication Square
##### Stage: 3 Challenge Level:
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
### What's Possible?
##### Stage: 4 Challenge Level:
Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
### Polycircles
##### Stage: 4 Challenge Level:
Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon?
### On the Importance of Pedantry
##### Stage: 3, 4 and 5
A introduction to how patterns can be deceiving, and what is and is not a proof.
### Always a Multiple?
##### Stage: 3 Challenge Level:
Think of a two digit number, reverse the digits, and add the numbers together. Something special happens...
### Pericut
##### Stage: 4 and 5 Challenge Level:
Two semicircle sit on the diameter of a semicircle centre O of twice their radius. Lines through O divide the perimeter into two parts. What can you say about the lengths of these two parts?
### Tri-split
##### Stage: 4 Challenge Level:
A point P is selected anywhere inside an equilateral triangle. What can you say about the sum of the perpendicular distances from P to the sides of the triangle? Can you prove your conjecture?
### Happy Numbers
##### Stage: 3 Challenge Level:
Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general.
### A Little Light Thinking
##### Stage: 4 Challenge Level:
Here is a machine with four coloured lights. Can you make two lights switch on at once? Three lights? All four lights?
### Loopy
##### Stage: 4 Challenge Level:
Investigate sequences given by $a_n = \frac{1+a_{n-1}}{a_{n-2}}$ for different choices of the first two terms. Make a conjecture about the behaviour of these sequences. Can you prove your conjecture?
### Multiplication Arithmagons
##### Stage: 4 Challenge Level:
Can you find the values at the vertices when you know the values on the edges of these multiplication arithmagons?
### How Old Am I?
##### Stage: 4 Challenge Level:
In 15 years' time my age will be the square of my age 15 years ago. Can you work out my age, and when I had other special birthdays?
### Problem Solving, Using and Applying and Functional Mathematics
##### Stage: 1, 2, 3, 4 and 5 Challenge Level:
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
### Pentagon
##### Stage: 4 Challenge Level:
Find the vertices of a pentagon given the midpoints of its sides.
### Alison's Mapping
##### Stage: 4 Challenge Level:
Alison has created two mappings. Can you figure out what they do? What questions do they prompt you to ask?
### Charlie's Mapping
##### Stage: 3 Challenge Level:
Charlie has created a mapping. Can you figure out what it does? What questions does it prompt you to ask?
### Close to Triangular
##### Stage: 4 Challenge Level:
Drawing a triangle is not always as easy as you might think!
### Few and Far Between?
##### Stage: 4 and 5 Challenge Level:
Can you find some Pythagorean Triples where the two smaller numbers differ by 1?
##### Stage: 4 Challenge Level:
Explore the relationship between quadratic functions and their graphs.
### Triangles Within Squares
##### Stage: 4 Challenge Level:
Can you find a rule which relates triangular numbers to square numbers?
##### Stage: 4 Challenge Level:
The points P, Q, R and S are the midpoints of the edges of a non-convex quadrilateral.What do you notice about the quadrilateral PQRS and its area?
### An Introduction to Magic Squares
##### Stage: 4 Challenge Level:
The points P, Q, R and S are the midpoints of the edges of a convex quadrilateral. What do you notice about the quadrilateral PQRS as the convex quadrilateral changes?
##### Stage: 4 Challenge Level:
Points D, E and F are on the the sides of triangle ABC. Circumcircles are drawn to the triangles ADE, BEF and CFD respectively. What do you notice about these three circumcircles?
### Helen's Conjecture
##### Stage: 3 Challenge Level:
Helen made the conjecture that "every multiple of six has more factors than the two numbers either side of it". Is this conjecture true?
### Triangles Within Triangles
##### Stage: 4 Challenge Level:
Can you find a rule which connects consecutive triangular numbers?
### Triangles Within Pentagons
##### Stage: 4 Challenge Level:
Show that all pentagonal numbers are one third of a triangular number.
### Consecutive Negative Numbers
##### Stage: 3 Challenge Level:
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
### Center Path
##### Stage: 3 and 4 Challenge Level:
Four rods of equal length are hinged at their endpoints to form a rhombus. The diagonals meet at X. One edge is fixed, the opposite edge is allowed to move in the plane. Describe the locus of. . . .
### Dice, Routes and Pathways
##### Stage: 1, 2 and 3
This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . .
### Epidemic Modelling
##### Stage: 4 and 5 Challenge Level:
Use the computer to model an epidemic. Try out public health policies to control the spread of the epidemic, to minimise the number of sick days and deaths.
### Exploring Simple Mappings
##### Stage: 3 Challenge Level:
Explore the relationship between simple linear functions and their graphs. | 1,601 | 7,073 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.40625 | 4 | CC-MAIN-2015-22 | longest | en | 0.840357 |
https://www.st-lukes.notts.sch.uk/tuesday-24th/ | 1,718,480,327,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861606.63/warc/CC-MAIN-20240615190624-20240615220624-00309.warc.gz | 894,894,422 | 69,603 | # Tuesday 24th
## Maths
Spend the first 15 minutes on Times Tables Rockstars. Is there a times table that you need to pay extra attention to?
Today we are going to learn how to add fractions with different decimals. If two fractions have different decimals we should start to change them into the same denominator. To do this we can find a common multiple or times both denominators together. Remember whatever we do to the denominator we need to do to the numerator.
e.g.
4/5 + 2/ 8
We should times the 5 and 8 together to make 40. Our denominator is now 40.
- What did we times the 5 by to make 40? 8. So we should times the 4 by 8 to make 32.
- What did we times the 8 by to make 40? 5. So we should times the 2 by 5 to make 10.
Now we have 32/40 + 10/40 = 42/40= 1 2/40
Try the following questions.
## Science
Today we are going to continue learning about the heart. I would like you to write about the importance of the heart and how blood moves around the body. You could choose to do this as a poster, powerpoint or an information text. | 279 | 1,055 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.96875 | 4 | CC-MAIN-2024-26 | latest | en | 0.928724 |
https://estudyassistant.com/physics/question1527940 | 1,679,741,245,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00044.warc.gz | 280,665,974 | 16,663 | , 29.01.2020 08:07 neash19
# Imagine an astronaut in space at the midpoint between two stars of equal mass. if all other objects are infinitely far away, what is the weight of the astronaut? explain your answer. step by step?
### Another question on Physics
Physics, 22.06.2019 05:00
Aperson stands on a platform, initially at rest, that can rotate freely without friction. the moment of inertia of the person plus the platform is ip. the person holds a spinning bicycle wheel with its axis horizontal. the wheel has moment of inertia iw and angular velocity ฯw. take the ฯw direction counterclockwise when viewed from above. part a what will be the angular velocity ฯp of the platform if the person moves the axis of the wheel so that it points vertically upward?
Physics, 22.06.2019 12:40
Estimate the schwarzschild radius (in kilometers) for a mini-black hole formed when a superadvanced civilization decides to punish you (unfairly) by squeezing you until you become so small that you disappear inside your own event horizon. (assume that the your weight is 50 kg) | 256 | 1,074 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.0625 | 3 | CC-MAIN-2023-14 | latest | en | 0.899913 |
https://www.hellovaia.com/textbooks/math/finite-mathematics-for-the-managerial-life-and-social-sciences-9-edition/chapter-8/problem-1-during-the-first-year-at-a-university-that-uses-a-/ | 1,701,269,764,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00853.warc.gz | 901,568,136 | 26,150 | Suggested languages for you:
Americas
Europe
Problem 1
# During the first year at a university that uses a 4 -point grading system, a freshman took ten 3 -credit courses and received two As, three Bs, four Cs, and one D. a. Compute this student's grade-point average. b. Let the random variable $$X$$ denote the number of points corresponding to a given letter grade. Find the probability distribution of the random variable $$X$$ and compute $$E(X)$$, the expected value of $$X$$.
Expert verified
a. The student's grade-point average is 2.6. b. The probability distribution of the random variable X is: P(X=1) = 1/10, P(X=2) = 4/10, P(X=3) = 3/10, P(X=4) = 2/10, and the expected value of X is 2.6.
See the step by step solution
## Step 1: Compute the student's grade-point average
Recall the 4-point grading system: - A = 4 points - B = 3 points - C = 2 points - D = 1 points The student took ten 3-credit courses and received: - two As (2 × 4 points × 3 credits) - three Bs (3 × 3 points × 3 credits) - four Cs (4 × 2 points × 3 credits) - one D (1 × 1 point × 3 credits) To find the grade-point average, we will add up the total number of grade points earned and divide it by the total number of credits.
## Step 2: Calculate the total grade points and total credits
Total grade points = (2 × 4 × 3) + (3 × 3 × 3) + (4 × 2 × 3) + (1 × 1 × 3) = 24 + 27 + 24 + 3 = 78 Total credits = 10 × 3 = 30
## Step 3: Find the grade-point average
Grade-point average = Total grade points / Total credits = 78 / 30 = 2.6. So, the student's grade-point average is 2.6.
## Step 4: Find the probability distribution of the random variable X
Probability distribution of X: P(X=1) = Probability of getting D = 1/10 P(X=2) = Probability of getting C = 4/10 P(X=3) = Probability of getting B = 3/10 P(X=4) = Probability of getting A = 2/10
## Step 5: Compute the expected value of X
Expected value of X, E(X) = ∑ [x × P(X=x)] E(X) = (1 × 1/10) + (2 × 4/10) + (3 × 3/10) + (4 × 2/10) = 1/10 + 8/10 + 9/10 + 8/10 = 26/10 The expected value of X is E(X) = 2.6. a. The student's grade-point average is 2.6. b. The probability distribution of the random variable X is: P(X=1) = 1/10, P(X=2) = 4/10, P(X=3) = 3/10, P(X=4) = 2/10, and the expected value of X is 2.6.
We value your feedback to improve our textbook solutions.
## Access millions of textbook solutions in one place
• Access over 3 million high quality textbook solutions
• Access our popular flashcard, quiz, mock-exam and notes features
## Join over 22 million students in learning with our Vaia App
The first learning app that truly has everything you need to ace your exams in one place.
• Flashcards & Quizzes
• AI Study Assistant
• Smart Note-Taking
• Mock-Exams
• Study Planner | 860 | 2,744 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.8125 | 5 | CC-MAIN-2023-50 | latest | en | 0.882745 |
https://forum.piedao.org/answers/1174464-what-is-17-30-simplifying-please-please-please-help-me | 1,708,599,858,000,000,000 | text/html | crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00600.warc.gz | 268,442,981 | 6,170 | Answer: you can't simplify the 30 has no common factors
## Related Questions
Mary covered her kitchen floor with 10 tiles. The floor measures 6 feet long by 5 feet wide, The tiles are each 3 feet long and w feet wide. Write an equation to represent the situation.
Number of tiles used forcovering the kitchen floor = 10
Area covered by the floor = 6* 5 feet^2
= 30 square feet
Length of each tiles = 3 feet
Width of each tiles = w feet
Area covered by each tiles = 3w square feet
Then the equation can be written as
3w * 10 = 30
30w = 30
w= 30/30
= 1 feet
So the width of each tiles is 1 feet.I hope the procedure is clear enough for you to understand. The equation to determine the solution is
10 *3w = 30
There are 10 tiles used and the area covered by each tiles is 3w. Also the area of the floor is 30 square feet. All those information's have been used to get to the equation.
The answer for the first quartile would be 51.5
How do you write 159 million million million
159 million = 159,000,000
159 million million = 159,000,000,000,000
159 million million million = 159,000,000,000,000,000,000
---------------------------------
159 million million million = 159 million trillion
The requried 159 million million million is equivalent to and in standard 159,000,000,000,000,000,000,000.
To write 159 million million million, we can use scientific notation to represent the large number more conveniently. In scientific notation, the number is written as a coefficient multiplied by a power of 10.
159 million million million is equivalent to 159 multiplied by 1 million, which itself is equal to . Therefore, we can write 159 million million million as:
159 × 1,000,000,000,000,000,000,000 (6 zeros for million, 6 zeros for million, and 6 zeros for million)
In scientific notation, this is written as:
So, 159 million million million is equivalent to .
brainly.com/question/30196144
#SPJ6
Rewrite the function f(x)=-4(x-3)^2+16 in the form f(x)=ax^2+bx+c
f(x) = - 4x² + 24x - 20
Step-by-step explanation:
Given
f(x) = - 4(x - 3)² + 16 ← expand the factor using FOIL
= - 4(x² - 6x + 9) + 16 ← distribute parenthesis by - 4
= - 4x² + 24x - 36 + 16 ← collect like terms
= - 4x² + 24x - 20
Mixed numbers between 5 and 7, with an interval of 1 / 3 between each pair of mixed numbers
Step-by-step explanation:
To find : Mixed numbers between 5 and 7, with an interval of between each pair of mixed numbers ?
Solution :
Mixed numbers between 5 and 7.
The interval is between each pair of mixed numbers.
So, The number form as
In mixed fraction,
Therefore, The mixed numbers between 5 and 7, with an interval of between each pair of mixed numbers are
5 1/3, 5 2/3, 6 1/3, 6 2/3....mixed numbers between 5 and 7.....intervals of 1/3
C. The graph would shift five units to the left.
Step-by-step explanation:
y = x² + 5x - 2
We can write this equation in the form
y = a(x - h)² + k
If you change the value of h, you are translating the parabola horizontally.
If you increase h by five units, the graph shifts five units to the left.
In the figure below, the red parabola represents the original equation.
The blue parabola shows the graph shifting five units to the left
(from x = -2.5 to x = -7.5) when you replace x by x + 5. | 927 | 3,271 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.25 | 4 | CC-MAIN-2024-10 | latest | en | 0.920986 |
https://goodcalculators.com/combinations-permutations-calculator/ | 1,575,806,824,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00174.warc.gz | 392,699,721 | 28,435 | # Combinations and Permutations Calculator
You can use this combinations and permutations calculator to quickly and easily calculate the number of potential combinations and permutations of r elements within a set of n objects.
Calculate Combinations and Permutations in Five Easy Steps:
• 1. Select whether you would like to calculate the number of combinations or the number of permutations using the simple drop-down menu
• 2. Enter the total number of objects (n) and number of elements taken at a time (r)
• 3. Select whether repeat elements are permitted
• 4. Input a list of elements, separated by commas (optional)
• 5. Press the "Calculate" button to compute the results.
Combinations (nCr) & Permutations (nPr) Calculator
## Combinations vs. Permutations
Mathematics and statistics disciplines require us to count. This is particularly important when completing probability problems.
Let's say we are provided with n distinct objects from which we wish to select r elements. This type of activity is required in a mathematics discipline that is known as combinatorics; i.e., the study of counting. Two different methods can be employed to count r objects within n elements: combinations and permutations. These two concepts are very similar and are frequently confused.
## The Difference Between a Combination and a Permutation
When considering the differences between combinations and permutations, we are essentially concerned with the concept of order. A permutation relates to the order in which we choose the elements. When the same set of elements are taken in a different order, we will have different permutations. When we are not concerned with order in which we select r elements from a set of n objects, the order is not taken into consideration.
## An Example of Permutations
It's worth looking at an example to better differentiate between these two concepts.
First, let's consider how many permutations of two letters there are within the following set of four letters: {A,B,C,D}.
In this case, we list all object pairs from the set while also taking order into consideration. We find that there are 12 permutations in total: AB, AC, AD, BA, BC, BD, CA, CB, CD, DA, DB, and DC.
It is important to recognize that the AB and BA permutations are dissimilar because, in the first case, A was selected first while, in the second, B was selected first; i.e., the order is of significance.
## An Example of Combinations
Let's calculate the number of combinations of two letters from the same set: {A,B,C,D}.
As we are calculating combinations, we are no longer interested in the order of the elements. As such, we can quickly and easily identify all the combinations by looking at the permutations and deleting all those that include the same letters.
In this regard, AB and BA are treated as being the same. As such, we have six combinations: AB, AC, AD, BC, BD, and CD.
## Formulas
Permutations and Combinations with / without Repetition
Type Is Repetition Allowed? Formula
r-permutations Yes P(n, r) = nr
r-permutations No P(n, r) = n! / (n - r)!
r-combinations Yes C(n, r) = (r + n - 1)! / (r! * (n - 1)!)
r-combinations No C(n, r) = n! / (r! * (n - r)!)
Rating: 3.7/5 (76 votes) | 728 | 3,220 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.53125 | 5 | CC-MAIN-2019-51 | longest | en | 0.915087 |
http://math.stackexchange.com/questions/72487/does-the-sorgenfrey-line-have-a-group-operation-compatible-with-its-order-topolo | 1,469,556,770,000,000,000 | text/html | crawl-data/CC-MAIN-2016-30/segments/1469257825048.60/warc/CC-MAIN-20160723071025-00300-ip-10-185-27-174.ec2.internal.warc.gz | 157,729,724 | 17,999 | # Does the Sorgenfrey Line have a group operation compatible with its order topology?
The title is the question, but let me explain. Let $\mathbb{L}$ denote the Sorgenfrey line. I and a friend were trying to develop some of the properties of the sorgenfrey line. (if it's metrizable, paracompact, or whatevs.) And we've stumbled upon the following problem:
Can one define a metric and a group operation in $\mathbb{L}$ that yields the usual order-induced topology and that makes it a topological group? What about semitopological group or something weaker?
So far we've introduced the metric as follows:
$\mathbb{L}$ = $(0,1)$ x $\mathbb{R}$ and $x = (t,r), \ y = (t',r')$ $\ \in$ $\mathbb{L}$
$D(x,y) = 1$ if $t \neq t'$
$D(x,y) =$ $\frac{\parallel x - y \parallel}{1 + \parallel x - y \parallel}$ if $t = t'$
We were trying to find countinous group operations unsuccessfully.
Many thanks.
-
The Sorgenfrey line is not metrizable, since it's separable but not second-countable. – Qiaochu Yuan Oct 14 '11 at 1:28
I see. Still any continuous group operations? – Henrique Tyrrell Oct 14 '11 at 1:34 | 320 | 1,104 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.96875 | 3 | CC-MAIN-2016-30 | latest | en | 0.873044 |
https://uk.mathworks.com/matlabcentral/cody/problems/1435-basics-divide-integers-to-get-integer-outputs-in-all-cases/solutions/584146 | 1,600,853,063,000,000,000 | text/html | crawl-data/CC-MAIN-2020-40/segments/1600400210616.36/warc/CC-MAIN-20200923081833-20200923111833-00134.warc.gz | 668,166,444 | 16,514 | Cody
# Problem 1435. Basics: Divide integers to get integer outputs in all cases
Solution 584146
Submitted on 18 Feb 2015 by Pruthvi teja
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
%% a = [-2 2];b=3; y = [0 1]; assert(isequal(divide_differently(a,b),y))
2 Pass
%% a = [-5 0];b=3; y = [-1 0]; assert(isequal(divide_differently(a,b),y)) | 150 | 465 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2020-40 | latest | en | 0.710998 |
https://math.stackexchange.com/questions/1092995/are-eigen-spaces-orthogonal | 1,656,201,499,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00068.warc.gz | 435,371,651 | 66,160 | # Are eigen spaces orthogonal?
Let $A$ be a $N$ x $N$ matrix which has $k < N$ distinct eigenvalues. Are eigenspaces corresponding to different eigenvalues orthogonal in general ? I know it is true if $A$ is normal matrix. But can't prove in general.
Counterexample: $\begin{pmatrix}1&0&0\\0&1&1\\0&0&2\end{pmatrix}$ has eigenspaces $\{(t,u,0)^T\}$ with eigenvalue $1$ and $\{(0,t,t)^T\}$ with eigenvalue $2$, and they are not orthogonal.
• nice example. rank one matrix $\pmatrix{0 & 1\\0 & 1}$ will work too.
– abel
Jan 6, 2015 at 15:12
• @abel: No, that doesn't satisfy the condition of having $<N$ distinct eigenvalues. The matrix needs to be at least 3×3. Jan 6, 2015 at 15:51
• you are right. that seems a peculiar requirement, perhaps there is a reason for it.
– abel
Jan 6, 2015 at 15:55
You can't prove it in general because it's not true. In fact, for any linearly independent set of vectors $v_1,v_2,\dots, v_n\in\mathbb R^n$, you can define a matrix
$$P=[v_1,v_2,\dots,v_n]$$
and a matrix $D$ which is a diagonal matrix with pairwise distinct diagonal entries $\lambda_1, \lambda_2,\dots, \lambda_n$.
Now, you know that $$(PDP^{-1})v_i = PD(P^{-1}v_i) = PDe_i = \lambda_i Pe_i = \lambda_i v_i.$$ This means that the vectors $v_1,\dots, v_n$ are eigenvectors, each spanning its distinct eigenspace (because the eigenvalues are pairwise distinct), and they are not, in general, orthogonal.
• A counterexample would be useful, in an answer. Jan 6, 2015 at 13:30
• am i correct about normal matrices ? Jan 6, 2015 at 13:30
• About normal matrices, yes, you are. Jan 6, 2015 at 13:31
• @sasha You are correct, normal matrices have orthogonal eigenspaces.
– 5xum
Jan 6, 2015 at 13:33
• @FedericoPoloni It has now been provided.
– 5xum
Jan 6, 2015 at 13:33 | 584 | 1,769 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.03125 | 4 | CC-MAIN-2022-27 | latest | en | 0.839733 |
https://web2.0calc.com/questions/please-help-me_49 | 1,547,734,852,000,000,000 | text/html | crawl-data/CC-MAIN-2019-04/segments/1547583658981.19/warc/CC-MAIN-20190117123059-20190117145059-00330.warc.gz | 674,720,158 | 6,796 | +0
0
114
3
Let $\triangle ABC$ be a right triangle, with the point $H$ the foot of the altitude from $C$ to side $\overline{AB}$.
Prove that
Oct 29, 2018
#1
+3726
+1
Try to expand the terms!
Oct 30, 2018
#2
+971
+3
Expanding the terms,
$$(x^2+2xh+h^2)+(y^2+2yh+y^2)=(a^2+2ab+b^2)$$
Using Pythagorean Theorem,
$$x^2+h^2=a^2\\ y^2+h^2=b^2$$
We could subsitute the values in, and rewrite the equation
$$a^2+2xh+b^2+2yh=a^2+2ab+b^2\\ 2xh+2yh=2ab\\ h(x+y)=ab$$
$$[ABC]=\frac12 AB\cdot CH = \frac12 BC \cdot AC\\ \frac12 (x+y)h=\frac12 ab\\ h(x+y)=ab$$
I hope this helped,
Gavin
Oct 30, 2018
edited by GYanggg Oct 30, 2018
#3
+94321
+2
Note that triangle BCA is similar to triangle BHC
Which implies that
HC / BC = CA / BA..... so...
h / a = b / ( x + y) (1)
Now expand ( x + h)^2 + ( y + h)^2 = ( a + b)^2 (2)
x^2 + 2xh + h^2 + y^2 + 2yh + h^2 = a^2 + 2ab + b^2 (3)
And since x^2 + h^2 = a^2 and y^2 + h^2 = b^2
We can subtract these equal parts from (3) and we are left with
2xh + 2yh = 2ab divide through by 2
xh + yh = ab factor out h on the left
h ( x + y) = ab rearrange as
h / a = b / ( x + y) but, by (1)....this is true
So (2) must be true, as well
Oct 31, 2018 | 572 | 1,262 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.375 | 4 | CC-MAIN-2019-04 | longest | en | 0.510161 |
https://www.mcadcentral.com/threads/style-surfaces-help.6443/ | 1,722,853,258,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640436802.18/warc/CC-MAIN-20240805083255-20240805113255-00147.warc.gz | 708,805,651 | 21,777 | Continue to Site
# Style & Surfaces.. HELP!!!
#### james.lynch
##### New member
Everybody! I have a problem..
I have extruded (separately) 2 circular surfaces, I then wanted to build a complex (not as in difficult, but the curvature varies in x,y&z)surface to join those surfaces
View attachment 834
now I have no problem with the curve creation, but here is the problem! I created the curves by projecting them onto the surface from an original planar curve on a plane mid distance in between the 2 cylinders, but for some reason whenextruding the initial cylinder(from a circular sketch) it effectively halves the complete surface into 2 semi-circularsurface's.. (see the highlighted edges above)
now the problem arises when I project or drop the curves on towhat are effectively TWO surfaces so it splits theprojected curve in to TWO curves! which kinda messes up the creation of the surface due to the fact that it becomes a 5 sided surface, not a 4..
now I know the workaround of simply creating a few more curves and patching it up, which I have no problem doing BUT why should I have to!
I also was thinking of instead of projecting the 2 curves on to the surface I could create them on a plane inside the cylinder then just trim it. but again I dont want to ..
Surely there must be a proper workaround!
James
Edited by: james.lynch
pro cant create a complete cylinder.
create the half of the cylinder that you want to project onto then either mirror it later or create the other half in another feature.
miked
Miked,
why is this? is the limit 180 degrees? or could I create say a 350degree arc and extrude this?
Thanks,
James
the limit is 180 degrees, Iam not surewhy but it has been that way for as long as I can remember. I have complained about it before but I dont recall what the reason is.
miked
that's a bit ridiculous all right!maybe it a mathematical thing? there are easy enough workarounds, but I had a couple of angles there defined in a Layout which would change where the projected curve would hit the surface(s) and I was hoping to be able to change these in the layout and it would all update nicely..
guess that's impossible now.. unless you have any suggestions?
James
It was explained to me years ago from PTC that the reason for the 2 halves was for model referencing. i.e. if you wanted to create a datum axis through a point on the arc of the cylinder you wouldn't have to create a point first, again it's been a long time.
I personally think it's bunkand that its probably some thing linked towards Unix,but I got to say it has helped me more than hindered me in doing advanced modeling and cutting down on features.
Surely that can't be the reason! or at least if it was they could have put in a check box or something to say "make surface as two halves" or something like that..
I agree that it is sometimes helpful, but surly one should have the choice, at the end of the day it's not that hard to create a cylinder in two halves if you actually want them, but it seems that it is impossible to create it as one!
where is the logic?
James
James, sorry, I don't have an answer for you. But I'm curious to
know your workaround. I've found myself in similar situations
with a tangent curve chain that is actually comprised of separate curve
features. I tried the "Copy Curve" command but it seemed like the
copied curve was a little off the parent curves and my subsequent
surfaces wouldn't merge.
its a pain in the butt some times maybe this thread should be moved
to the "wish list" along with a wish that someone in ptc that can do
Big Joe,
I had tried the "Copy Curve" feature before all right, but like yourself didn't have much luck with it either!
My"workaround"I suppose really isn't a workaround, as in it still won't work in all situation - if I redefine the angle at which my style feature intersects the cylindrical surface, it will only work for some situations..
all that I did was create 1 extra curve and make the surface in two steps..
View attachment 835
But again this is quite limiting and if anybody know a better workaround, please let me know
Puppet, how do I move it to the Wish List forum? Or will I simply repost a similar post anddescription?
Thanks,
James
Edited by: james.lynch
James,
I would try one of these:
1-Rotate your cylindrical surfaces upon creation 90 degree so you have full 180 degree sides on the side of bridge surface or
2-Create a composite approximate curve from the projected curve.If the surfaces wont merge after extend the bridge surface
or you don't even have to rotate the cylindrical surfaces just reorient the sketch so that 180 split is in the place you need
Pedja,
Rotating the surfaces (or sketch) would not work in this case. I have a similar style feature to create on the opposite side (but not at 180 degrees to it)
View attachment 837
What I am trying to create is a top level bounding skeletonsurface for a finger (and eventual hand), I have been using some simplified geometry until now.
I have all the parameters such as bone lengths, mating angles etc set up in a layout and I was hoping that any and all changes made in the layout wouldupdate in the model tree.. but now, if I change the angle that it intersects at, it would change the curve definitions within the style features and probably cause the feature to fail..
as for merging the surfaces, I wasn' thinking of merging them all, only the various style features for each part of the finger, not to the cylinders..isit important to merge them all? I was hoping I wouldn't have to use the composite curve approach...
Also, I have a few questions about referencing with skeletal models, would you mind if Isent you a pm about it?
James
My idea is to rotate each cylindrical surface separatelyaround their own axes,you have 3 of them and each can be adjusted that both curves are in 180 degree half.<?:namespace prefix = o ns = "urn:schemas-microsoft-comfficeffice" />
As for rotation if you create a composite approximate curve after projection I don
Just forgot one thing.<?:namespace prefix = o ns = "urn:schemas-microsoft-comfficeffice" />
If the angle between 1st and 2nd or 2nd and 3rd bone is too big and you cannot have both curves not falling into split at the same time use two cylindrical surfaces and rotate them separatly
Pedja, thanks for the help..
which method would you think would createthe more robust model? I'd really like to create a very robust model, I suppose that should be the idea all of the time..
, but hey, I'm still learning!
Thanks,
James
James, <?:namespace prefix = o ns = "urn:schemas-microsoft-comfficeffice" />
I would not build the motion flexibility into curve and surface creation.
Create an initial model as you have it right now and than rotate finger surfaces themselves. You have 3 joints and 3 surfaces. Rotating them around the cylinder axes will give you any angle position that you want and it will work 10 out 10 times | 1,567 | 6,986 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2024-33 | latest | en | 0.957087 |
https://en.wikipedia.org/wiki/Greenberger%E2%80%93Horne%E2%80%93Zeilinger_state | 1,721,359,210,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514866.33/warc/CC-MAIN-20240719012903-20240719042903-00045.warc.gz | 206,706,134 | 24,905 | # Greenberger–Horne–Zeilinger state
In physics, in the area of quantum information theory, a Greenberger–Horne–Zeilinger state (GHZ state) is a certain type of entangled quantum state that involves at least three subsystems (particle states, qubits, or qudits). The four-particle version was first studied by Daniel Greenberger, Michael Horne and Anton Zeilinger in 1989, and the three-particle version was introduced by N. David Mermin in 1990.[1][2][3] Extremely non-classical properties of the state have been observed. GHZ states for large numbers of qubits are theorized to give enhanced performance for metrology compared to other qubit superposition states.[4]
## Definition
The GHZ state is an entangled quantum state for 3 qubits and its state is
${\displaystyle |\mathrm {GHZ} \rangle ={\frac {|000\rangle +|111\rangle }{\sqrt {2}}}.}$
### Generalization
The generalized GHZ state is an entangled quantum state of M > 2 subsystems. If each system has dimension ${\displaystyle d}$, i.e., the local Hilbert space is isomorphic to ${\displaystyle \mathbb {C} ^{d}}$, then the total Hilbert space of an ${\displaystyle M}$-partite system is ${\displaystyle {\mathcal {H}}_{\rm {tot}}=(\mathbb {C} ^{d})^{\otimes M}}$. This GHZ state is also called an ${\displaystyle M}$-partite qudit GHZ state. Its formula as a tensor product is
${\displaystyle |\mathrm {GHZ} \rangle ={\frac {1}{\sqrt {d}}}\sum _{i=0}^{d-1}|i\rangle \otimes \cdots \otimes |i\rangle ={\frac {1}{\sqrt {d}}}(|0\rangle \otimes \cdots \otimes |0\rangle +\cdots +|d-1\rangle \otimes \cdots \otimes |d-1\rangle )}$.
In the case of each of the subsystems being two-dimensional, that is for a collection of M qubits, it reads
${\displaystyle |\mathrm {GHZ} \rangle ={\frac {|0\rangle ^{\otimes M}+|1\rangle ^{\otimes M}}{\sqrt {2}}}.}$
## Properties
There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be maximally entangled state.[citation needed]
Another important property of the GHZ state is that taking the partial trace over one of the three systems yields
${\displaystyle \operatorname {Tr} _{3}\left[\left({\frac {|000\rangle +|111\rangle }{\sqrt {2}}}\right)\left({\frac {\langle 000|+\langle 111|}{\sqrt {2}}}\right)\right]={\frac {(|00\rangle \langle 00|+|11\rangle \langle 11|)}{2}},}$
which is an unentangled mixed state. It has certain two-particle (qubit) correlations, but these are of a classical nature. On the other hand, if we were to measure one of the subsystems in such a way that the measurement distinguishes between the states 0 and 1, we will leave behind either ${\displaystyle |00\rangle }$ or ${\displaystyle |11\rangle }$, which are unentangled pure states. This is unlike the W state, which leaves bipartite entanglements even when we measure one of its subsystems.[citation needed]
The GHZ state is non-biseparable[5] and is the representative of one of the two non-biseparable classes of 3-qubit states which cannot be transformed (not even probabilistically) into each other by local quantum operations, the other being the W state, ${\displaystyle |\mathrm {W} \rangle =(|001\rangle +|010\rangle +|100\rangle )/{\sqrt {3}}}$.[6] Thus ${\displaystyle |\mathrm {GHZ} \rangle }$ and ${\displaystyle |\mathrm {W} \rangle }$ represent two very different kinds of entanglement for three or more particles.[7] The W state is, in a certain sense "less entangled" than the GHZ state; however, that entanglement is, in a sense, more robust against single-particle measurements, in that, for an N-qubit W state, an entangled (N − 1)-qubit state remains after a single-particle measurement. By contrast, certain measurements on the GHZ state collapse it into a mixture or a pure state.
The GHZ state leads to striking non-classical correlations (1989). Particles prepared in this state lead to a version of Bell's theorem, which shows the internal inconsistency of the notion of elements-of-reality introduced in the famous Einstein–Podolsky–Rosen article. The first laboratory observation of GHZ correlations was by the group of Anton Zeilinger (1998), who was awarded a share of the 2022 Nobel Prize in physics for this work.[8] Many more accurate observations followed. The correlations can be utilized in some quantum information tasks. These include multipartner quantum cryptography (1998) and communication complexity tasks (1997, 2004).
## Pairwise entanglement
Although a measurement of the third particle of the GHZ state that distinguishes the two states results in an unentangled pair, a measurement along an orthogonal direction can leave behind a maximally entangled Bell state. This is illustrated below.
The 3-qubit GHZ state can be written as
${\displaystyle |\mathrm {GHZ} \rangle ={\frac {1}{\sqrt {2}}}\left(|000\rangle +|111\rangle \right)={\frac {1}{2}}\left(|00\rangle +|11\rangle \right)\otimes |+\rangle +{\frac {1}{2}}\left(|00\rangle -|11\rangle \right)\otimes |-\rangle ,}$
where the third particle is written as a superposition in the X basis (as opposed to the Z basis) as ${\displaystyle |0\rangle =(|+\rangle +|-\rangle )/{\sqrt {2}}}$ and ${\displaystyle |1\rangle =(|+\rangle -|-\rangle )/{\sqrt {2}}}$.
A measurement of the GHZ state along the X basis for the third particle then yields either ${\displaystyle |\Phi ^{+}\rangle =(|00\rangle +|11\rangle )/{\sqrt {2}}}$, if ${\displaystyle |+\rangle }$ was measured, or ${\displaystyle |\Phi ^{-}\rangle =(|00\rangle -|11\rangle )/{\sqrt {2}}}$, if ${\displaystyle |-\rangle }$ was measured. In the later case, the phase can be rotated by applying a Z quantum gate to give ${\displaystyle |\Phi ^{+}\rangle }$, while in the former case, no additional transformations are applied. In either case, the result of the operations is a maximally entangled Bell state.
This example illustrates that, depending on which measurement is made of the GHZ state is more subtle than it first appears: a measurement along an orthogonal direction, followed by a quantum transform that depends on the measurement outcome, can leave behind a maximally entangled state.
## Applications
GHZ states are used in several protocols in quantum communication and cryptography, for example, in secret sharing[9] or in the quantum Byzantine agreement.
5. ^ A pure state ${\displaystyle |\psi \rangle }$ of ${\displaystyle N}$ parties is called biseparable, if one can find a partition of the parties in two nonempty disjoint subsets ${\displaystyle A}$ and ${\displaystyle B}$ with ${\displaystyle A\cup B=\{1,\dots ,N\}}$ such that ${\displaystyle |\psi \rangle =|\phi \rangle _{A}\otimes |\gamma \rangle _{B}}$, i.e. ${\displaystyle |\psi \rangle }$ is a product state with respect to the partition ${\displaystyle A|B}$. | 1,819 | 6,845 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.8125 | 3 | CC-MAIN-2024-30 | latest | en | 0.840243 |
https://phvu.net/2014/02/28/minimum-edit-distance-in-python/ | 1,500,964,646,000,000,000 | text/html | crawl-data/CC-MAIN-2017-30/segments/1500549425082.56/warc/CC-MAIN-20170725062346-20170725082346-00160.warc.gz | 707,370,686 | 38,342 | # Minimum Edit Distance in Python
Minimum Edit Distance algorithm
This is my quick-and-dirty implementation of the Minimum Edit Distance algorithm in Python.
```class EditDistance(object):
def __init__(self):
pass
def costInsert(self, s):
return 1
def costDelete(self, s):
return 1
def costSubstitution(self, target, source):
return (0 if target == source else 2)
def minEditDistance(self, target, source):
n = len(target)
m = len(source)
assert n > 0 and m > 0
distance = np.zeros((n+1, m+1))
backPt = np.zeros((n+1, m+1), dtype=int)
opCount = np.zeros((n+1, m+1), dtype=int)
backPt[0, 1:] = 3
backPt[1:, 0] = 1
opCount[0, 1:] = xrange(1, m+1)
opCount[1:, 0] = xrange(1, n+1)
for i in xrange(1, n+1):
distance[i, 0] = distance[i-1, 0] + self.costInsert(target[i-1])
for j in xrange(1, m+1):
distance[0, j] = distance[0, j-1] + self.costDelete(source[j-1])
for i in xrange(1, n+1):
for j in xrange(1, m+1):
d = [distance[i-1, j] + self.costInsert(target[i-1]), \
distance[i-1, j-1] + self.costSubstitution(source[j-1], target[i-1]), \
distance[i, j-1] + self.costDelete(source[j-1])]
distance[i, j] = min(d)
op = [opCount[i-1, j] + 1, \
opCount[i-1, j-1] + (0 if source[j-1] == target[i-1] else 1), \
opCount[i, j-1] + 1]
backPt[i, j] = 1 + np.argmin(op)
opCount[i, j] = min(op)
# backtrace
counts = [0, 0, 0]
alignedStrings = [[], []]
i = n
j = m
while i != 0 or j != 0:
pt = backPt[i, j]
assert pt in [1, 2, 3]
if pt == 1:
counts[pt-1] += 1
alignedStrings[0].append(target[i-1])
alignedStrings[1].append('*')
i -= 1
elif pt == 2:
counts[pt-1] += (0 if target[i-1] == source[j-1] else 1)
alignedStrings[0].append(target[i-1])
alignedStrings[1].append(source[j-1])
i -= 1
j -= 1
else:
counts[pt-1] += 1
alignedStrings[0].append('*')
alignedStrings[1].append(source[j-1])
j -= 1
alignedStrings = [s[::-1] for s in alignedStrings]
return (counts, distance[n, m], alignedStrings)
```
Note that I am using weights of 1, 1 and 2 for the cost of insertion, deletion and substitution. Standard packages might use other values (like 7, 7, 10 in HTK), but it is easy to change.
The algorithm can be found in the image. In the Python implementation, I already modified it to compute the back-traced minimum distance alignment. Since there might be several alignments that have the same edit distance, I select the one that minimize the sum of deletion, insertion and substitution operations. This is eventually corresponding to the alignment that minimizes Word Error Rate. The “best” alignment is also returned, so this implementation can be a good demonstration for studying this algorithm.
As always, the full source code can be found on github. | 855 | 2,653 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.9375 | 3 | CC-MAIN-2017-30 | longest | en | 0.427294 |
https://www.physicsforums.com/threads/thermodynamics-how-big-and-heavy-do-steam-turbines-need-to-be.715951/ | 1,660,877,513,000,000,000 | text/html | crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00059.warc.gz | 790,410,616 | 17,890 | # [Thermodynamics] How big and heavy do steam turbines need to be?
FireStorm000
I'm curious how large(dimensions) and how heavy/massive a steam(or other gas) turbine needs to be to extract a given fraction and amount of the energy from the fluid. For example if a power plant is producing 1GW of thermal power, how much of that can realistically be converted to electrical power, and what kind of turbines would it take? Could the turbines be made more compact and lighter if they were to go on a submarine or ship (and how would that impact efficiency)?
-FireStorm
Homework Helper
I'm curious how large(dimensions) and how heavy/massive a steam(or other gas) turbine needs to be to extract a given fraction and amount of the energy from the fluid. For example if a power plant is producing 1GW of thermal power, how much of that can realistically be converted to electrical power, ...
The efficiency of a thermal power plant depends on the temperatures that it operates between. The theoretical maximum efficiency would be with a Carnot cycle: η = 1-Th/Tc where Th is the temperature of the thermal source and Tc is the ambient temperature at which heat flow from the plant is discharged. In practice thermal efficiency does not get much better than 1/3 except some nuclear plants that achieve very high operating temperatures where it might approach 40%. Turbines are used because they are efficient at converting heat flow into useful work.
AM
1 person
willem2
The efficiency of a thermal power plant depends on the temperatures that it operates between. The theoretical maximum efficiency would be with a Carnot cycle: η = 1-Th/Tc where Th is the temperature of the thermal source and Tc is the ambient temperature at which heat flow from the plant is discharged. In practice thermal efficiency does not get much better than 1/3 except some nuclear plants that achieve very high operating temperatures where it might approach 40%. Turbines are used because they are efficient at converting heat flow into useful work.
AM
The efficiency can be about 60%, when using combining a turbine with a steam power plant (a combined cycle plant). A single gas turbine could get 40%.
The efficiency of nuclear power plants is much lower, because they work with liquid water as coolant, so the Th can't be higher than the critical temperature of water (647 K)
FireStorm000
Thanks for the answers. That Carnot equations tells quite a bit and mostly answers the efficiency part. Any idea on how much one of those turbines weigh though?
Last edited:
Staff Emeritus
Homework Helper
The thermal efficiency of the turbine by itself is relatively high. It's not unusual for turbines to have efficiencies of about 85%. The efficiency of the combined plant is much lower, as noted above.
Of course, there is more than one steam turbine in a power plant. The smallest turbines use the highest pressure steam, and as steam pressures drop, the size of the turbine increases because a given amount of steam occupies more volume as it expands.
It's hard to say what turbines weigh, because they come in different sizes and have different ratings. A small turbine capable of producing 1000 kW might weigh a couple hundred kg with the rotor and casing included. A large power plant turbine can weigh hundreds of tonnes.
To give you an idea of what a large power plant turbine looks like, here is a brochure from Siemens:
http://www.energy.siemens.com/hq/po...eam-turbines/SST-9000/SST-9000_Data_sheet.pdf
1 person
Homework Helper
Could the turbines be made more compact and lighter if they were to go on a submarine or ship (and how would that impact efficiency)?
Of course they can, and your OP missed the biggest motivation for weight reduction in gas turbines, namely aircraft engines. As others have said the efficiency depends mostly on the thermodynamic cycle of the engine, not its weight.
Actually the question is a bit backwards. A stationary engine in a power station doesn't have to be light weight. Very high reliability and low maintenance costs are more important., and a "lower tech" heavy machine is "better" for that than a high tech light one.
As an extreme example, some military jet engines need overhaul every 200 hours of running time, and at any one time maybe 25% or even more of those planes are being maintained and not available for use. But you wouldn't want to operate a power station that had to be shut down for maintenance for four days every two weeks!
1 person
FireStorm000
It's impressive the range of designs you guys have pointed out. I hadn't really considered jet engines since I was mainly considering extracting thermal energy from a fluid, rather than adding it, but in hindsight, I suppose aircraft engines are relevant.
I guess the take away would be that it's all about trade-offs. Mass & Size vs. Maintenance vs. Efficiency vs. Power Output etc. I'm sure operating temperature and pressure factors in as well.
(How much) would it make a difference if the working fluid is, for example, LH2 or LOX in a rocket turbo-pump, or the high temperature air in a jet engine?(as compared to the steam turbines we've mainly been discussing)
Staff Emeritus
Homework Helper
It's impressive the range of designs you guys have pointed out. I hadn't really considered jet engines since I was mainly considering extracting thermal energy from a fluid, rather than adding it, but in hindsight, I suppose aircraft engines are relevant.
It's not clear what you mean here. For a steam turbine, the working fluid (water) absorbs the heat of combustion of fuel in a boiler, which converts liquid water to steam. The turbine extracts heat energy from the steam and turns that into work. This is an external combustion arrangement.
In a gas turbine, there are three key components: 1. a compressor to raise the pressure of the combustion air, 2. a combustion section where fuel is mixed with the compressed air and burned, generating hot exhaust gasses, and 3. the actual turbine section, where the heat energy of the combustion gasses drives a turbine.
In aircraft gas turbines, the work produced by the turbine is used to drive the compressor. The rest of the heat energy of the hot gasses is turned into thrust by passing these gasses through a nozzle as they exit the turbine.
Both turbines extract work from a fluid: a steam turbine uses steam produced by burning a fuel, a gas turbine uses the hot gasses from burning fuel directly, without the use of another fluid.
Now, gas turbines can be modified by adding turbine sections to extract more work from the hot gasses of combustion, and consequently reducing the thrust produced. The additional turbine sections are connected to an output shaft, which can turn a propeller or a generator. A gas turbine driving an aircraft propeller is called a turbo-prop. It works just as well with a ship propeller, with the addition of a reduction gear to the turbine so the propeller is driven at its most efficient speed, which is in the range of 100 RPM or so.
One of the limiting factors in the design of a gas turbine is the material used to construct the turbine blades. These blades must be very strong, so they do not break when being spun at high RPM, and they must remain strong while they are very hot. Due to various factors, it's not practical to use any cooling system for the blades.
Staff Emeritus | 1,542 | 7,382 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.578125 | 4 | CC-MAIN-2022-33 | latest | en | 0.934185 |
https://extatica.com/blog/viewtopic.php?id=f36af3-jmp-scatterplot-matrix | 1,653,483,927,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00660.warc.gz | 300,063,396 | 15,122 | # jmp scatterplot matrix
Click OK to generate the scatterplot matrix. A lab technician wants to explore the following questions: ⢠Is there a relationship between any pair of chemicals? A larger circle indicates a more significant relationship. Have questions? Points are plotted for all observations in the data table. 3. The Scatterplot Matrix Window. Quantile Range outliers: Values farter than some quantile range from the tail quantile. This site works best with JavaScript enabled. If you select a point in one scatterplot, it is selected in all the other scatterplots. Take O’Reilly online learning with you and learn anywhere, anytime on your phone and tablet. Shows or hides correlation circles in the upper right triangle of the scatterplot matrix. 3. Select one of the colors in the palette, or select Other to use another color. A lab technician wants to explore the following questions: ⢠Is there a relationship between any pair of chemicals? To answer these questions, use a scatterplot matrix of the four solvents. The Scatterplot Matrix command invokes the Scatterplot Matrix platform in a separate window containing a lower triangular scatterplot matrix for the covariates.
1. … - Selection from JMP 11 Essential Graphing [Book] Once detected before exclude it, domain knowledge is required to determine if it is an error, or truly extreme. The data points in the scatterplot for Benzene and Chloroform are the most tightly clustered along an imaginary line. Scatterplot Matrix View Multiple Bivariate Relationships Simultaneously. The scatterplot matrix provides these answers: ⢠All six pairs of variables are positively correlated. Select Help > Sample Data Library and open Solubility.jmp. Figure 4.14 Example of a Scatterplot MatrixÂ. It contains the following variables: • Name: individuals name • Sex: (M)ale or (F)emale • Weight: individuals weight in lbs. Select one of the default levels, or select Other to enter a different value. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. In a regression context, collinearity can make it difficult to determine the effect of each predictor on the response, and can make it challenging to determine which variables to include in the model. Ellipses with 90% coverage are … 3. The color of each circle represents the correlation between each pair of variables on a scale from red (+1) to blue (-1). Shows or hides the 95% density ellipses in the scatterplots. Want more information? To answer these questions, use a scatterplot matrix of the four solvents. Select Ether, Chloroform, Benzene, and Hexane, and click Y, Columns. Contours for the 10% and 50% quantiles of the nonparametric surface are shown. The default value is 0.2. Scatterplot Matrix A scatterplot matrix helps you visualize the correlations between each pair of response variables. Open the Students.jmp sample data table. Exercise your consumer rights by contacting us at [email protected]. Figure 4.16 Scatterplot Matrix Interpret the Scatterplot Matrix. Use the Ellipses Transparency and Ellipse Color menus to change the transparency and color. ⢠Which pair has the strongest relationship? Have questions? Example of a Scatterplot Matrix This example shows you how to create a scatterplot matrix. ⢠Re-sizing any cell resizes all the cells. For the Linear discriminant method, these are based on the pooled within covariance matrix. Shows or hides horizontal or vertical histograms in the label cells. この散布図行列から、次のことがわかります。 • 6つのペアすべてについて、正の相関がある。 片方の変数が増加すると、他方の変数も増加しています。 Figure 5.17 Scatterplot Matrix for Iris.jmpÂ. Once histograms have been added, select Show Counts to label each bar of the histogram with its count. 散布図行列の解釈. Select Ether, Chloroform, Benzene, and Hexane, and click Y, Columns. ⢠Drag a label cell to another label cell to reorder the matrix. Scatterplot matrices are a great way to roughly determine if you have a linear correlation between multiple variables. 1. 2. Select Help > Sample Data Library and open Solubility.jmp. Select Graph > Scatterplot Matrix. A scatterplot matrix is a collection of scatterplots organized into a grid (or matrix). Select Graph > Scatterplot Matrix. Outliers can have disproportionate influences on models. there some methods in JMP to detect outliers. (There are six possible pairs.). Using the Scatterplot Matrix platform, you can assess the relationships between multiple variables simultaneously. Get JMP 11 Essential Graphing now with O’Reilly online learning. Each scatterplot shows the relationship between a pair of variables. | 1,019 | 4,616 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2022-21 | latest | en | 0.820484 |
https://wentao-shao.gitbook.io/leetcode/queue/862.shortest-subarray-with-sum-at-least-k | 1,708,852,212,000,000,000 | text/html | crawl-data/CC-MAIN-2024-10/segments/1707947474594.56/warc/CC-MAIN-20240225071740-20240225101740-00761.warc.gz | 635,576,450 | 151,233 | # 862.Shortest-Subarray-with-Sum-at-Least-K
## 题目描述
Return the length of the shortest, non-empty, contiguous subarray of A with sum at least K.
If there is no non-empty subarray with sum at least K, return -1.
Example 1:
Input: A = [1], K = 1
Output: 1
Example 2:
Input: A = [1,2], K = 4
Output: -1
Example 3:
Input: A = [2,-1,2], K = 3
Output: 3
Note:
1 <= A.length <= 50000
-10 ^ 5 <= A[i] <= 10 ^ 5
1 <= K <= 10 ^ 9
## 代码
### Approach #1 Sliding Window
Time: O(N) && Space: O(N)
class Solution {
public int shortestSubarray(int[] A, int K) {
int N = A.length;
long[] P = new long[N+1];
for (int i = 0; i < N; i++)
P[i+1] = P[i] + (long)A[i];
int ans = N+1; // n+1 is impossible
for (int y = 0; y < P.length; y++) {
// want opt(y) = largest x with P[x] <= P[y] - K
while (!monoq.isEmpty() && P[y] - P[monoq.getLast()] <= 0) {
monoq.removeLast();
}
while (!monoq.isEmpty() && P[y] - P[monoq.getFirst()] >= K) {
ans = Math.min(ans, y - monoq.removeFirst());
}
// 0 < sum region < k | 373 | 986 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.046875 | 3 | CC-MAIN-2024-10 | latest | en | 0.451114 |
http://mathoverflow.net/questions/131061/what-is-the-ring-structure-of-the-complex-topological-k-theory-of-a-non-singular/138699 | 1,469,667,879,000,000,000 | text/html | crawl-data/CC-MAIN-2016-30/segments/1469257827781.76/warc/CC-MAIN-20160723071027-00139-ip-10-185-27-174.ec2.internal.warc.gz | 159,627,585 | 17,183 | # What is the ring structure of the complex topological K-theory of a non-singular complex quadric?
I would like to know the ring structure of $K(Q_n)$ explicitly where $Q_n \subset \mathbb{P}^{n+1}$ is the non-singular $n$-dimensional complex quadric and $K(Q_n) = K^0(Q_n)$ is the complex topological $K$-theory of $Q_n$ (with analytic topology). What is it? Can anyone provide a reference?
Ideally, I would like an expression for $K(Q_n)$ with generators for which I know how to compute the Chern character. Also, as it happens, I am most interested in the case where $n$ is odd.
To my great surprise, I have not been able to find this in the literature (with one qualification, see below). I am certain that such a basic calculation must be well-known to experts but I have been unable to find it. Hence this question.
While I have been aware of the basics of $K$-theory for years, this is the first time I have really had to work with it so I am very inexpert. In the last week I grokked relevant-seeming chunks of the books of Karoubi, Atiyah, Hatcher but I'm still quite green.
Motivation
My interest in this ring arose in the course of a problem I have been thinking about but I am hoping that the fact that $K(Q_n)$ is such a basic object will be sufficient motivation to justify this question.
What I do know
• Since the quadric has a cell decomposition with only even-dimensional cells, $K^1$ vanishes and $K = K^0$ is free Abelian with rank equal to the number of cells (and generators corresponding to the cells). For $n$ even this rank is $n+2$ (because there is an extra cell in middle dimension) for $n$ odd, it is $n+1$.
• The ordinary cohomology $H^* = H^{\rm even}$ is of course free Abelian of the same rank (Lefshetz tells us the restriction map from $H^*(\mathbb{P}^{n+1}, \mathbb{Z})$ is an isomorphism in all dimensions below $2n+2$ except middle dimension for $n$ even). The Chern character thus embeds $K$ as a maximal-rank lattice inside $H^*(Q_n, \mathbb{Q})$. However it is not the same as the lattice $H^*(Q_n, \mathbb{Z})$.
• Since we know $H^*(Q_n, \mathbb{Q})$ as a ring, it might be satisfactory to know the images of the Chern character on a set of generators of $K(Q_n)$.
• The cases $n=1, 2, 4$ are easy since $Q_n$ is respectively $S^2$, $S^2\times S^2$, $G(2, 4)$ (the complex Grassmannian).
• $Q_n$ is diffeomorphic to the real oriented Grassmannian $\tilde G(2, n+2)$ and so is a homogeneous space $SO(n+2)/SO(2)\times SO(n)$. There are tools for calculating $K$-theory for homogeneous spaces pioneered by Atiyah and Hirzebruch (I believe). Subsequently Hodgkin introduced a spectral sequence which seems to allow relatively straightforward (if lengthy) calculation in many cases, including $Q_n$.
• I managed to find a paper where the above technique is apparently used to calculate $K(\tilde G(k, n))$ for general $k, n$: Sankaran, Zvengrowski "K-theory of Oriented Grassmann Manifolds", Math. Slovaca 47(3). It looks right though I would probably be tempted to work from first principles myself than to specialize their results to my $k=2$ case.
Bottom line
Surely I am missing the obvious here? I find it astonishing that I should need to use the methods of Atiyah-Hirzebruch-Hodgkin for such a simple space. Perhaps if I thought more carefully about $\mathbb{P}^{n+1}/Q_n$ or $Q_{n+1}/Q_n$ (bearing in mind natural cell decompositions) then I could use the exact sequences either for the pairs $Q_n \subset \mathbb{P}^{n+1}$ or $Q_{n} \subset Q_{n+1}$ to work this out?
I am tempted to believe the reason I cannot find this in the literature is that it is so trivial. What am I missing?
-
Three approaches: (i) Adams computed the $K$ theory of real projective spaces a while ago, and there is a Serre-type spectral sequence for any cohomology theory (take ordinary cohomology of the total space with coefficients in the $K$-theory of the fiber, in this case), so you can write down some fiber sequences to try and compute the $K$ theory of unoriented Grassmannian $G(2, n)$, and then go after the oriented one. (ii) You could use the fact that we already know what the $K$-theory of $BSO(2)$ is, and try to understand the difference between this and the approximations by Grassmannians, – Dylan Wilson May 18 '13 at 18:03
(iii) There's always the Atiyah-Hirzebruch SS. – Dylan Wilson May 18 '13 at 18:03
Thanks Dylan, all three of these are very helpful remarks. If I don't receive an answer by tomorrow I think I'll just bash it out myself, most likely using Atiyah-Hirzebruch as you suggest in (iii). – Oliver Nash May 19 '13 at 11:40
I guess I might as well answer my own question as it might help somebody in the future. I ended up working this out using the methods of Hodgkin. (The Atiyah-Hirzebruch SS leaves one with a series of extension problems and so only gives the Abelian group structure.)
In fact I have written up a careful proof of this in the appendix to this paper: http://arxiv.org/abs/1308.0949 so I will just give the statement here and refer to the paper for the proof (especially as the proof, though fairly routine, is quite long).
Hodgkin's results allow one to compute the K-theory of homogeneous spaces $G/H$ when $\pi_1(G)$ is torsion-free. For this reason we represent the quadric $Q$ as: $$Q = \frac{Spin(n+2)}{Spin^c(n)}$$ where $Spin^c(n)$ is the double cover of $SO(2)\times SO(n) \subset SO(n+2)$ in $Spin(n+2)$.
Given this, representations of $Spin^c(n)$ give vector bundles on $Q$ and hence classes in $K(Q)$. There is one representation (and hence class in $K(Q)$) which I wish to highlight. To define it, we consider the double cover $Spin(2)\times Spin(n)$ of $Spin^c(n)$. Now if $RSO(2) \simeq \mathbb{Z}[t, t^{-1}]$ is the representation ring of $SO(2)$ then $RSpin(2) \simeq \mathbb{Z}[t^{1/2}, t^{-1/2}]$. Also, for $n$ odd, $Spin(n)$ has the unique spin representation $\delta$ (of dimension $2^{(n-1)/2}$). Neither $t^{-1/2}$ nor $\delta$ descends to a representation of $Spin^c(n)$ but their product does. We let $X$ be the bundle on $Q$ associated to the representation $t^{-1/2}\delta$ of $Spin^c(n)$. With this defined, we can state:
Proposition
Let $Q \subset \mathbb{P}^{n+1}$ be an $n$-dimensional non-singular quadric ($n \ge 3$, odd). Let $L = \mathcal{O}(1)-1 \in K(Q)$ and let $X$ be the bundle defined above, then:
• $1, L, L^2, \cdots, L^{n-1}, X$ are a $\mathbb{Z}$-basis for the torsion-free ring $K(Q)$
• $L^{n+1} = 0$
• $LX = 2^{(n+1)/2} - 2X$
• $2^{(n+1)/2}X = 2^n - 2^{n-1}L + \cdots + 2L^{n-1} - L^n$ (this is equivalent to the previous bullet but shows why we need $X$ instead of $L^n$)
There is also a slightly-complicated formula for $X^2$ which I will suppress and a similar statement for $n$ even except that now there are two bundles $X^+$, $X^-$.
- | 1,988 | 6,785 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.84375 | 3 | CC-MAIN-2016-30 | latest | en | 0.943585 |
http://www.linear-equation.com/graphing-and-writing-linear-functions.html | 1,521,796,656,000,000,000 | text/html | crawl-data/CC-MAIN-2018-13/segments/1521257648205.76/warc/CC-MAIN-20180323083246-20180323103246-00374.warc.gz | 386,720,240 | 10,255 | Try the Free Math Solver or Scroll down to Tutorials!
Depdendent Variable
Number of equations to solve: 23456789
Equ. #1:
Equ. #2:
Equ. #3:
Equ. #4:
Equ. #5:
Equ. #6:
Equ. #7:
Equ. #8:
Equ. #9:
Solve for:
Dependent Variable
Number of inequalities to solve: 23456789
Ineq. #1:
Ineq. #2:
Ineq. #3:
Ineq. #4:
Ineq. #5:
Ineq. #6:
Ineq. #7:
Ineq. #8:
Ineq. #9:
Solve for:
Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:
# Graphing and Writing Linear Functions
## Linear Functions
Definition: A l i n e a r equation can be written in the form Ax + By = C.
Note: These equations are called linear because their graphs are always lines.
Question: Which linear equations are functions?
Answer: All lines except vertical lines are functions.
## Slope-Intercept Form
Definition: A l i n e a r function can be written in the form f(x) = mx + b. This is also referred to as the slope-intercept form of a line, where m is the slope and b is the y-intercept.
Note: The slope is the ratio (or comparison) of the change in y divided by the change in x. A good way to remember slope is to
use .
## Graphing Linear Functions
To graph a linear function we need to recall a fact from geometry. We will use the fact that any two points define a line. Thus to graph a linear function we need only find two solution points to the linear equation. There are many ways to accomplish this.
Method 1: Use any Two Points
Example1: f(x) = 2 x - 3
Solution: We need to find two solution points.
Example 2:
Solution:
Method 2: Use the Intercepts
The intercepts are the points where the graph crosses each axis. There are x-intercepts and y-intercepts. We find the x-intercepts by letting y = 0 and the y-intercepts by letting x = 0.
Example 1: h(x) = -3 x - 6
Solution: The y-intercept is (0, -6).
To get the x-intercept let y = 0.
0 = -3 x - 6
6 = -3 x
x = -2
The x-intercept is(-2, 0).
Example 2:
Solution: The y-intercept is (0, 4).
To get the x-intercept.
So the x-intercept is (-6, 0).
Method 3: Use the Slope and y-intercept
We can use the slope and intercept of the line to sketch it. Start by plotting the y-intercept and then use the slope as a map to find a second point.
Example 1: s(x) =-4 x + 1
Solution: Start with the y-intercept (0, 1), them from move down 4 and right 1. This gives the point (1,-3).
Example 2:
Solution:
## Writing Linear Functions
To write the linear function that passes through two points, we'll use the slope-intercept form f(x) = m x + b. This means we must first find the slope of the line and then find the y-intercept.
Example 1: Find the equation of the linear function passing through the two points (2,1) and (0,3). Then write the equation
using function notation.
Solution: First we need to find the slope.
So this tells us that the function is of the form y = -x + b.
To get b, we'll note that the point (0, 3) is the y-intercept and so b = 3.
So our linear function is f(x) = -x + 3
Example 2: Find the equation of the linear function passing through the two points (3,6) and (-3,2). Then write the equation
using function notation.
Solution:
So this tells us that the function is of the form .
To find b, take one of the points and substitute and then solve.
So the linear function is
Note: There is an alternate way to find the equation of a line. We can use point-slope form of a line. y - y1 = m(x - x1)
Example 3: Find the equation of the linear function passing through the two points (5, -3) and (-10, 3). Then write the
equation using function notation.
Solution: .
So the equation is given by .
Now to get this into function form we need to solve for y and simplify.
## Parallel and Perpendicular Lines
Theorem:
Two lines are parallel if they have equal slopes.
Two lines are perpendicular if they have opposite reciprocal slopes.
i.e. If the first line has slope , then the second has slope - .
Example 1: Find the equation of the line passing through the point (1,5) and parallel to the line 4 x + 2 y = 1.
Solution: First we need to find the slope. Since the new is parallel to 4 x + 2 y = 1, we need to find the slope of this line. We can do this by solving for y and getting slope-intercept form.
So m = -2.
So our new line is of the form y = -2 x + b.
5 = -2(1) + b
b = 7
So the new linear function is f(x) = -2 x + 7.
Example 2: Find the equation of the line passing through the point (2,1) and perpendicular to the line 2 x + 3 y = 15.
Solution: We need to find the slope of 2 x + 3 y = 15.
So m =, since the new line is perpendicular. So it is of the form
So the new function is. | 1,303 | 4,693 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.4375 | 4 | CC-MAIN-2018-13 | latest | en | 0.902729 |
https://www.astronomyclub.xyz/double-stars/example.html | 1,582,406,236,000,000,000 | text/html | crawl-data/CC-MAIN-2020-10/segments/1581875145729.69/warc/CC-MAIN-20200222211056-20200223001056-00288.warc.gz | 659,613,634 | 5,974 | ## Example
The position angle for alpha Centauri for the year 2000 is 222°3. Calculate the change in the position angle over the fifty-year period 2000 to 2050. We have a = 14h 39m 35.885s (= 219°89952) 8 = -60° 50' 07'.'44 (= -60°8354) Ha = -0.49826 s (= -0°00207608) 6 = 222°3 t0 = 2000 t = 2050.
Carrying out the calculations gives A6 = -0°.3 and hence 60 = 222°0. So there is a change of -0°.3 over the 50-year period due simply to procession and proper motion. (Note: 222°0 is not the position angle for the year 2050, but the position angle for the year 2000 referred to the pole of the year 2050.) | 211 | 606 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.5625 | 4 | CC-MAIN-2020-10 | latest | en | 0.741057 |
https://puzzlemadness.co.uk/nonograms/2012/05/06 | 1,576,190,107,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540547165.98/warc/CC-MAIN-20191212205036-20191212233036-00417.warc.gz | 506,765,371 | 14,421 | # Daily Nonogram Puzzle
SCORE:
## How to play
This is an example of a completed Nonogram puzzle. A Nonogram puzzle consists of a grid with clues along the top and the left side.
Your aim in these puzzles is to colour the whole grid in to purple and white squares. At the top of each column, and at the side of each row, you will notice a set of one or more numbers. These numbers tell you the runs of purple squares in that row/column. So, if you see '5 6', that tells you that there will be a run of exactly 5 purple squares, followed by one or more white squares, followed by exactly 6 purple square. There may be more white squares before/after this sequence.
This is an example of a starting Nonogram grid. This particular puzzle is 20x20 in size. As a general rule you can assume that the bigger the grid, the harder the puzzle.
The starting point in any size puzzle will be to look for row/columns that will be mostly purple squares. Halfway down the left-hand side is the clue '7 9', i.e. we will have 7 purple squares, a break of one or more white squares, and then a further 9 squares. There's only a limited numbers of ways this combination can be arranged in to the grid. Let's look at those.
These are the two extremes for that row. You will see that there is a great deal of overlap between the two extremes of possibility for this row, so we can fill those in purple. We don't know exactly where those purple blocks will start/end, but we know where some of them will go.
We now have this as our current work-in-progress. Notice that there is some overlap between the two extremes in the middle of the grid, but that white cell in the middle is free to move between those two extremes. It could also be more than just a single white square - all 4 white squares could be in the middle for example.
There's a few more opportunities for this kind thinking in this puzzle. We have three columns that are interesting to us:
1. The column with the clues of '4 15'.
2. The column with the single clue of '13'.
3. The column with the clue of '12 1'.
You may notice that there is only one way to satisfy the first of our columns - so we can completely fill that column in. We can also apply the same process to our other two columns.
There are a few other rows/columns that we can apply a similar technique with, but we will start seeing diminishing returns fairly soon with this technique.
Look at the row with the 4 highlighted squares. The first number in the clue for that row is '8'. The first purple block in that row gives an 'anchor' for that '8', and gives two extremes for that block of 8. It can either start in the cell in the row, or the latest it can start is on the first purple block. For both those extremes, and everything in between, those highlighted are going to be purple.
We can apply the same line of reasoning to many cells in the same way.
By applying the same line of reasoning to other we arrive at this point. Look now at the highlighted cell. The first clue for this row is a '3', there is no way for the highlighted cell to be purple because of where the existing purple cells are; if this cell was purple it would create a first block of 4+ cells, which wouldn't satisfy the first clue. This cell then must be white.
We have made great progress in solving this puzzle. You now know the main lines of reasoning for solving Nonograms, happy puzzling!
This page will automatically load the puzzle for today. If you want to play a different puzzle, go to the archive page and choose your puzzle.
There are two ways to play a Sudoku puzzle, you can just use the mouse/touchscreen, or you can use the mouse and keyboard. You can switch between the two methods any time you like, and can use a combination of both.
### Playing with a mouse/touchscreen.
• When you have found a square where you can enter a number, click/touch that square. The square will turn light blue.
Above and below the puzzle is the number selection. Click/touch the number you want to enter in to that cell. If there is already a number in that square, it will be over-written.
• If you want to enter a pencil mark, click/touch the square you want to put in a pencil mark. It will turn light blue. Click/touch the pencil icon above or below the puzzle. This icon will turn light blue, and you are now in pencil marks mode.
Whenever you click/touch a number now, a pencil mark will be put in the square instead. To remove a number as a pencil mark, make sure you are in pencil marks mode, and click/touch the number again.
You can exit pencil mark mode by clicking/touching the pencil icon, it will turn back to normal.
• If you want to clear a particular square, make sure that square is selected and is in light blue. Click/touch the eraser icon. If there is a number in that square, it will be removed. If you click/touch it again, any pencil marks in that square will be removed.
### Playing with a mouse and keyboard.
• You will need to select a square by clicking on it with the mouse, it will turn light blue. You can change the current square by using the cursor keys on your keyboard.
• To enter a number, press that number on the keyboard. If there is already a number in that square, it will be overwritten. To remove a number, press the backspace or delete key on your keyboard.
• To enter a pencil mark, press control, shift, or alt on your keyboard at the same time as pressing a number key. Do the same thing again to remove that pencil mark.
Any mistakes you make will be hilighted in red. The website will know when you have completed a puzzle and will tell you. If you have an account and are logged in, the website will remember that you have completed that puzzle. You will also take part in out leaderboards. It's free to create an account! | 1,302 | 5,772 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.75 | 4 | CC-MAIN-2019-51 | latest | en | 0.951348 |
https://www.vedantu.com/question-answer/which-of-the-following-isare-true-for-b2-and-c2-class-11-chemistry-cbse-5f5fb67768d6b37d16360940 | 1,718,866,106,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861883.41/warc/CC-MAIN-20240620043158-20240620073158-00079.warc.gz | 918,767,110 | 28,089 | Courses
Courses for Kids
Free study material
Offline Centres
More
Store
# Which of the following is/are true for ${{B}_{2}}$ and ${{C}_{2}}$ molecules according to M.O.T? (This question has multiple correct options).(a)- Both are having $1-\sigma$ and $1-\pi$ bond(b)- Both are having the same bond length.(c)- Both are having different bond orders.(d)-${{B}_{2}}$ is paramagnetic and${{C}_{2}}$ is diamagnetic.
Last updated date: 20th Jun 2024
Total views: 413.4k
Views today: 12.13k
Verified
413.4k+ views
Hint: If the compound has an unpaired electron then it is paramagnetic, and if it has all paired electrons then it is diamagnetic. We can calculate the bond order of the compound by dividing the sum of electrons in bonding orbital and antibonding orbital with 2.
Let us study all the options one by one:
(a)- Both are having $1-\sigma$ and $1-\pi$ bond
${{B}_{2}}$ molecule has a single bond between both boron atoms. Hence, it has a $\sigma$ bond.
In ${{C}_{2}}$ molecules there are 2 bonds. One is $\sigma$ bond and the other is $\pi$ bond.
Hence, this option is incorrect.
(b)- Both are having the same bond length.
The bond length of the molecule is related to the bond order of the molecule. Bond length is inversely proportional to the bond order.
In ${{B}_{2}},$ molecule the bond order is 1 and the bond order of ${{C}_{2}}$ is 2. Hence, the bond length of ${{C}_{2}}$ is shorter than ${{B}_{2}}$.
Hence, this option is also incorrect.
(c)- Both are having different bond orders.
The bond order of the compound is calculated by dividing the sum of electrons in bonding orbital and antibonding orbital with 2.
${{B}_{2}}$ has 6 electrons in its outermost shell . Configuration is : $\sigma 2{{s}^{2}}\text{ }\sigma *2{{s}^{2}}\text{ }\pi 2{{p}_{x}}^{1}\text{ }\pi 2{{p}_{y}}^{1}$
Bond order = $\dfrac{b.o-a.b.o}{2}=\dfrac{4-2}{2}=1$
${{C}_{2}}$ has 8 electrons in its outermost shell . Configuration is: $\sigma 2{{s}^{2}}\text{ }\sigma *2{{s}^{2}}\text{ }\pi 2{{p}_{x}}^{2}\text{ }\pi 2{{p}_{y}}^{2}$
Bond order = $\dfrac{b.o-a.b.o}{2}=\dfrac{6-2}{2}=2$
Hence, this option is correct.
(d)- ${{B}_{2}}$ is paramagnetic and ${{C}_{2}}$ is diamagnetic in nature.
${{B}_{2}}$ has two unpaired electrons hence, it is paramagnetic. ${{C}_{2}}$ has all paired electrons hence, it is diamagnetic.
Hence, this is also correct.
So, the correct answer is “Option C and D”.
Note: The orbital $\text{ }\pi 2{{p}_{x}}\text{ and }\pi 2{{p}_{y}}$ gets degenerate after mixing hence they have same energy. So, if there are 2 electrons left each of them gets one-one electron each after that only pairing is done. | 800 | 2,615 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.953125 | 4 | CC-MAIN-2024-26 | latest | en | 0.89053 |
http://sudocue.net/forum/viewtopic.php?t=836&start=0&postdays=0&postorder=asc&highlight= | 1,623,616,967,000,000,000 | text/html | crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00384.warc.gz | 47,515,319 | 12,232 | SudoCue Users
A forum for users of the SudoCue programs and the services of SudoCue.Net
Author Message
Sudtyro
Hooked
Joined: 16 Jan 2007
Posts: 49
Posted: Fri Nov 02, 2007 8:55 pm Post subject: 7/15/07 Nightmare...0 grams ALS
This exercise was undertaken to see if, in addition to the standard “basics” toolbox of solving methods, it might be possible to manually solve a difficult puzzle using only simple Alternating Inference Chains (AIC). And, just for the sake of argument, allowing no other solving techniques, such as patterns (like wings and fish), ALS’s, Nice loops, coloring, uniqueness arguments, APE, subset counting, etc.
The procedure to develop the AIC’s works mainly with the puzzle’s individual single-digit grids, using only the strong internal links in bivalue cells to allow for transfers among the grids. The technique may be of some interest to manual solvers who wish to broaden their AIC search skills and strategies.
The detailed solution procedure follows:
1. Apply the “basics” to the puzzle’s current grid position.
2. Separately construct the puzzle’s unresolved single-digit grids. In each grid, form X-chains (groups allowed) to look for eliminations. Process all grids and update as needed.
3. The trick at this point is to slightly modify the single-digit grids by including both digits of all of the puzzle’s bivalue cells. In other words, for a bivalue cell containing digits A and B, simply add digit A to digit B’s grid, and vice versa. The bivalue cells are then used to form both visual and actual links among the single-digit grids, as described below.
4. Select an initial “target” digit for possible eliminations.
5. Examine the target’s single-digit grid for two bivalue cells having different digits. If a multi-digit AIC can be formed that provides a strong-inference link between the target digits in those two bivalue cells, then eliminations may follow.
6. Repeat step 5 until no further eliminations are found.
7. Select a new target digit and return to step 5, until all of the grids have been processed.
With a little practice, it becomes quite straightforward (and almost fun) to form the AIC in step 5 by using other intermediate bivalue cells to move among the single-digit grids. There are also some interesting shortcuts available to help construct the AIC.
The Sudocue Nightmare for July 15, 2007, will be used to illustrate the solution procedure.
Code: 000208000000000040905006010010009000200040830308600091100005087700000000036000000 --------------------- . . . | 2 . 8 | . . . . . . | . . . | . 4 . 9 . 5 | . . 6 | . 1 . ------+-------+------ . 1 . | . . 9 | . . . 2 . . | . 4 . | 8 3 . 3 . 8 | 6 . . | . 9 1 ------+-------+------ 1 . . | . . 5 | . 8 7 7 . . | . . . | . . . . 3 6 | . . . | . . . ---------------------
The grid position below results after an initial application of the “basics,” defined here as the first seven steps of Andrew Stuart’s on-line “Sudoku Solver.”
Code: ------------------------------------------------------ 46 467 13 | 2 13579 8 | 35679 567 3569 68 2678 13 | 13579 13579 137 | 235679 4 235689 9 278 5 | 4 37 6 | 237 1 238 -------------+-------------------+-------------------- 456 1 47 | 38 38 9 | 24567 2567 2456 2 569 79 | 157 4 17 | 8 3 56 3 45 8 | 6 257 27 | 457 9 1 -------------+-------------------+-------------------- 1 49 249 | 39 2369 5 | 23469 8 7 7 58 249 | 1389 123689 1234 | 1234569 256 234569 58 3 6 | 1789 12789 1247 | 12459 25 2459 ------------------------------------------------------
We note right off that the XYZ-wing, (137)r2c36|r3c5, would neatly provide for two quick eliminations, r2c45 <> 3. However, in keeping with the rules of this exercise, wings are not allowed. Per step 2, we look instead at the 3’s unresolved single-digit grid, as shown below, with the strong links indicated between the grid’s three conjugate pairs.
Code: ---------------------- . . 3 | . 3 . | *3 . 3 | | | . . 3 | 3 3 3 | *3 . 3 . . . | . 3 | | 3 . 3 ------+-----|-+------- . . . | 3-3 | | . . . . . . | . . | | . . . . . . | . . | | . . . ------+-----|-+------- . . . | 3 3 | | 3 . . . . . | 3 3 3 | 3 . 3 . . . | . . . | . . . ----------------------
Two grouped X-chains provide for the (starred) eliminations:
(3): r1c3 = r2c3 – r2c6 = r8c6 – r8c9 = r78c7 => r1c7 <> 3.
(3): r2c6 = r8c6 – r8c9 = r78c7 => r2c7 <> 3.
An observant reader may have also noticed the grid’s finned Swordfish, in columns 3,6,9 (or rows 3,4,7), where r3c9 is the fin. This (disallowed) fish pattern also implies r12c7 <> 3.
No other eliminations are found in the 3’s grid. And, in fact, no X-chain-based eliminations were found in any of the eight additional single-digit grids (admittedly, I may have missed some). So, we move next to step 3 of the solution procedure to include the bivalue cells in all nine grids. Strong links in the grids (shown below) are again indicated between conjugate pairs.
Code: . . 13----1 . | . . . . . . | . . . | . . . . . 31| . 3 . | . . 3 | | | | | | | | . . 13| 1 1 1 | . . . . 2 . | . . . | 2 . 2 . . 31| 3 3 3 | . . 3 | | | | | | | | . . . | . . . | . . . . 2 . | . . . | 2 . 2 . . . | . 37 | | 3 . 3 ------+-------+------ ------+-------+------- ------+-------|-+------ . . . | . . . | . . . . . . | . . . | 2 2 2 . . . | 38-38 | | . . . | | | | | | | . . . | 1---17| . . . . . . | . . . | . . . . . . | . . | | . . . | | | | | | | . . . | . . . | . . . . . . | . 2-27| . . . . . . | . . | | . . . ------+-------+------ ------+-------+------- ------+-------|-+------ . . . | . . . | . . . . . 2 | . 2 . | 2 . . . . . | 39 3 | | 3 . . | | | | | | | | . . . | 1 1 1 | 1 . . . . 2 | . 2 2 | 2 2 2 . . . | 3 3 3 | 3 . 3 | | | | | | | . . . | 1 1 1 | 1 . . . . . | . 2 2 | 2 25 2 . . . | . . . | . . .
Code: 46-4 . | . . . | . . . . . . | . 5 . | 5 5 5 64 6 . | . . . | 6 6 6 | | | | | | | | . . | . . . | . . . . . . | 5 5 . | 5 . 5 68 6 . | . . . | 6 . 6 | | | | | | | | | . . | . . . | . . . . . . | | . . | . . . . . . | . . . | . . . |-------+-------+------ -------+-|-----+------- -------+---------+------ 4 . 47| . . . | 4 . 4 5 . . | | . . | 5 5 5 6 . . | . . . | 6 6 6 | | | | | | \ | | . . . | . . . | . . . | 5 . | 5 . . | . . 56 . 6-------------------65 | | | | \ | | | . 45-------------4 . . | 54 . | . 5 . | 5 . . . . . | . . . | . . . --------+-------+------ |------+-------+------- -------+---------+------ . 49 4 | . . . | 4 . . | . . | . . . | . . . . . . | . 6 . | 6 . . | | | | | | | | . . 4 | . . 4 | 4 . 4 | 58 . | . . . | 5 5 5 . . . | . 6 . | 6 6 6 | | | |/ | | | | . . . | . . 4 | 4 . 4 58 . . | . . . | 5 52 5 . . . | . . . | . . .
Code: . 7 . | . 7 . | 7 7 . . . . | . . . | . . . . . . | . 9 . | 9 . 9 | | | | | | | . 7 . | 7 7 7 | 7 | . 86 8 . | . . . | . . 8 . . . | 9 9 . | 9 . 9 | | | | | | | | | . 7 . | . 73 . | 7 | . | 8-------------------8 . . . | . . . | . . . ------+--------+---|-- |------+---------+------ -------+--------+------ . . 74| . . . | 7 7 . | . . | 83-83 . | . . . . . . | . . . | . . . | | | | | | | | . . 79| 7 . 71| . . . | . . | . . . | . . . . 9--97| . . . | . . . | | | | | | | | . . . | . 7 72| 7 . . | . . | . . . | . . . . | . | . . . | . . . ------+--------+------ |------+---------+------- --|----+--------+------ . . . | . . . | . . . | . . | . . . | . . . . 94 9 | 93 9 . | 9 . . | | | | | | | . . . | . . . | . . . | 85 . | 8 8 . | . . . . . 9 | 9 9 . | 9 . 9 | | |/ | | | | . . . | 7 7 7 | . . . 85 . . | 8 8 . | . . . . . . | 9 9 . | 9 . 9
We move on to step 4 and choose the 1-digit as the first “target.” Applying step 5 to the 1’s grid, only two bivalue cells could lead to an elimination. If we can show a derived strong inference (DSI) between the 1’s in cells (13)r2c3 and (17)r5c6, then (1)r2c6 can be eliminated.
Starting with cell r2c3, we first use the strong link, (1=3)r2c3, to move to the 3’s grid, where we need to connect to another bivalue cell. The only suitable chain available is
(1=3)r2c3 – (3)r2c6 = (3)r8c6 – (3=9)r7c4.
Moving next to the 9’s grid, we can now form the chain,
(3=9)r7c4 – (9)r7c2 = (9)r5c2 – (9=7)r5c3.
Moving finally to the 7’s grid, we can immediately spot the short chain,
(9=7)r5c3 – (7=1)r5c6,
and we’re done! Simply combine the three chains for the full AIC:
(1=3)r2c3 – (3)r2c6 = (3)r8c6 – (3=9)r7c4 – (9)r7c2 = (9)r5c2 – (9=7)r5c3 – (7=1)r5c6 => r2c6 <> 1.
To recap, a “grid path” was developed as 1 -> 3 -> 9 -> 7 -> 1 by forming three intermediate AIC’s. The first and last digits of each chain correspond to every other digit in the grid path, with the intervening digit identifying the host grid. Each sequential pair of digits in the grid path represents a bivalue-cell transfer point between grids. The grid path is bi-directional, so one can search in either direction or even search simultaneously from each end and try to “meet in the middle.”
We next choose the 2-digit as the target. Applying step 5 to the 2’s grid, we again see only two bivalue cells that could lead to an elimination. As before, if we can show a DSI between the 2’s in bivalue cells (27)r6c6 and (25)r9c8, then (2)r9c6 can be eliminated.
Starting this time with cell r9c8, we first use the strong link, (2=5)r9c8, to move to the 5’s grid, where we need to connect to another bivalue cell and move along to the next grid. The grid path develops as 2 -> 5 -> 4 -> 7 -> 2, and the reader can easily verify that the full AIC is
(2=5)r9c8 – (5)r9c1 = (5)r8c2 – (5=4)r6c2 – (4=7)r4c3 – (7)r3c78 = (7)r6c7 – (7=2)r6c6 => r9c6 <> 2.
We next choose the 3-digit as the target. Applying step 5 to the 3’s grid, we now see that multiple bivalue-cell pairings could potentially lead to eliminations. Surprisingly, however, no suitable grid paths are found.
We move on to digit 4 as the next target. Multiple bivalue-cell pairings are again possible, and cells (46)r1c1 and (45)r6c2 could provide a very productive double elimination. The short grid path, 4 -> 5 -> 6 -> 4, can be developed beginning with cell (45)r6c2. The full AIC is
(4=5)r6c2 – (5)r6c5 = (5)r5c4 – (5=6)r5c9 –(6)r5c2 = (6)r4c1 – (6=4)r1c1 => r1c2,r4c1 <> 4.
Of the remaining grids (5 to 9), suitable paths were found only for the 5’s and the 8’s.
For the 5-digit as the target, the short grid path, 5 -> 6 -> 8 -> 5, leads to the AIC,
(5=6)r5c9 – (6)r5c2 = (6)r4c1 –(6=8)r2c1 – (8=5)r9c1 => r9c9 <> 5.
For the 8-digit as the target, the grid path, 8 -> 3 -> 9 -> 4 -> 5 -> 8, leads to the AIC (also an XY-chain),
(8=3)r4c4 – (3=9)r7c4 – (9=4)r7c2 – (4=5)r6c2 – (5=8)r8c2 => r8c4 <> 8.
Having now completed the first pass through all nine single-digit grids, we pause at this point (step 8) to adjust the grid positions. The newly formed (37) naked pair removes all other 3’s and 7’s in b2. The adjusted 6’s grid (r1c1 <> 6) now provides a new X-chain,
(6): r5c9 = r5c2 – r4c1 = r2c1 => r2c9 <> 6.
After follow-up (updated single-digit grids are not shown):
Code: ------------------------------------------------------- 4 67 13 | 2 159 8 | 5679 567 3569 -68 2678 13 | 159 159 37 | 25679 4 23589 9 278 5 | 4 37 6 | 237 1 238 --------------+-------------------+-------------------- 56 1 47 | 38 38 9 | 24567 2567 2456 2 5-69 79 | 157 4 17 | 8 3 56 3 45 8 | 6 257 27 | 457 9 1 --------------+-------------------+-------------------- 1 49 249 | 39 2369 5 | 23469 8 7 7 58 249 | 139 123869 1234 | 1234569 256 234569 58 3 6 | 1789 12789 147 | 12459 25 2459 -------------------------------------------------------
New bivalue cells, (67)r1c2 and (56)r4c1, provide a new grid path,
6 -> 7 -> 4 -> 5 -> 6.
The AIC is given by
(6=7)r1c2 – (7)r1c8 = (7)r4c8 - (7=4)r4c3 – (4=5)r6c2 – (5=6)r4c1 => r2c1,r5c2 <> 6.
After follow-up:
Code: ------------------------------------------------------ 4 67 13 | 2 159 8 | 5679 567 359 8 267 13 | -159 159 37 | 25679 4 2359 9 27 5 | 4 37 6 | 237 1 8 -------------+---------------------+------------------ 6 1 47 | 38 38 9 | 2457 57 245 2 59 79 | 157 4 17 | 8 3 6 3 45 8 | 6 257 27 | 47 9 1 -------------+---------------------+------------------ 1 49 249 | 39 2369 5 | 3469 8 7 7 8 249 | 1-39 -12-369 1234 | 1-34569 56 3459 5 3 6 | 1789 -1789 147 | 149 2 49 ------------------------------------------------------
The updated 3’s grid next yields the X-chain,
(3): r8c6 = r2c6 – r3c5 = r3c7 => r8c7 <> 3,
followed by
(3): r8c6 = r2c6 – r3c5 = r3c7 – r7c7 = r8c9 => r8c45 <> 3.
Then, newly formed bivalue cell, (19)r8c4, quickly provides the new grid path,
1 -> 3 -> 9 -> 1. The AIC is
(1=3)r2c3 – (3)r2c6 = (3)r8c6 – (3=9)r7c4 – (9=1)r8c4 => r2c4 <> 1,
which further implies r89c5 <> 1 via the new pointing pair in b2.
After follow-up:
Code: -------------------------------------------------- 4 67 13 | 2 159 8 | 5679 567 359 8 267 13 | 59 159 37 | 25679 4 2359 9 27 5 | 4 37 6 | 237 1 8 -------------+------------------+----------------- 6 1 47 | 38 3-8 9 | 2457 57 245 2 59 79 | 157 4 17 | 8 3 6 3 45 8 | 6 257 27 | 47 9 1 -------------+------------------+----------------- 1 49 249 | 39 2369 5 | 3469 8 7 7 8 249 | 19 269 1234 | 14569 56 3459 5 3 6 | 17-89 -789 147 | 149 2 49 --------------------------------------------------
At this point, the rote solution procedure begins to run low on suitable grid paths. One new path, 7 -> 3 -> 9 -> 7, moves from bivalue cell (37)r3c5 to (79)r5c3, but no direct eliminations are available. However, one can simply add two more of the 7-grid’s cells to the end of the path to form the AIC,
(7=3)r3c5 – (3)r4c5 =(3)r4c4 – (3=9)r7c4 – (9)r7c2 = (9)r5c2 – (9=7)r5c3 – (7)r5c4 = (7)r9c4 => r9c5 <> 7.
Then, newly formed bivalue cell, (89)r9c5, provides for the short grid path, 8 –> 9 –> 3 -> 8, and the corresponding very productive AIC (also an XY-chain),
(8=9)r9c5 – (9=3)r7c4 – (3=8)r4c4 => r4c5,r9c4 <> 8.
After follow-up:
Code: -------------------------------------------------- 4 67 3 | 2 1 8 | 5679 567 59 8 67 1 | 59 59 3 | 67 4 2 9 2 5 | 4 7 6 | 3 1 8 -------------+-------------------+---------------- 6 1 47 | 8 3 9 | 2 57 45 2 59 79 | 157 4 17 | 8 3 6 3 45 8 | 6 25 27 | 47 9 1 -------------+-------------------+---------------- 1 49 249 | 3 269 5 | 469 8 7 7 8 249 | 19 269 124 | 14569 56 3 5 3 6 | 179 8 147 | 149 2 -49 --------------------------------------------------
Once again, no suitable grid paths are to be found. However, the non-procedure AIC,
(4)r9c6 = (4-2)r8c6 = (2-7)r6c6 = (7-4)r6c7 = (4)r4c9 => r9c9 <> 4,
now breaks the puzzle and allows one to move quickly to the solution with cascading singles.
Summary:
This exercise has shown (for one sample puzzle, at least) that it is indeed possible to solve a difficult puzzle using only simple AIC’s. The idea for the “grid-path” approach stems from my own solution strategy of routinely constructing a puzzle’s single-digit grids in order to more easily apply the standard suite of single-digit solving methods. However, once those methods have been exhausted, one must then move on to the multi-digit techniques. By simply appending the puzzle’s bivalue cells to the single-digit grids, it becomes relatively easy to see and exploit some of the available multi-digit (strong) links, without the need for any additional plots or graphs.
With a little practice the grid-path approach is readily implemented as a simple visual search procedure. The intermediate chains that connect the bivalue cells in each single-digit grid can be developed ahead of time and jotted down for later use. There are initially only two or three of these chains per grid (some grids may have none). They are easily spotted, and in actual use it is necessary to remember only the first and last digit in each chain.
Eventually, when the supply of suitable grid paths runs out (as it did late in this exercise), one must move on to different solution methods. In practice, of course, the grid-path approach should never be used as a stand-alone technique, but rather simply as another choice available from the “beyond-the-basics” toolbox.
Questions, comments and suggestions are always welcome!
[Edited 11/10/07 to correct XYZ-wing name]
Display posts from previous: All Posts1 Day7 Days2 Weeks1 Month3 Months6 Months1 Year Oldest FirstNewest First
All times are GMT Page 1 of 1
Jump to: Select a forum SudoCue - the Website----------------Daily Sudoku Nightmare & ArchiveClueless SpecialsClueless ExplosionsWeekly AssassinsTexas Jigsaw KillersSudoku LiteX-FilesDaily WindokuDaily Jigsaw SudokuSolving Guide & GlossarySamurai ContestGeneral Website Comments Sudoku - the Community----------------Help Me! I'm stuck!Solving Techniques & TipsWebsitesSoftwarePuzzlesPublicationsOff-Topic SudoCue - the Software----------------SupportWishlistCommentsReleases
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot vote in polls in this forum | 6,971 | 19,336 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.734375 | 3 | CC-MAIN-2021-25 | latest | en | 0.874895 |
https://hydro.ac/d/codeforces/p/P1392A | 1,656,287,626,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00645.warc.gz | 361,988,538 | 7,744 | ID: 806 Type: RemoteJudge 2000ms 256MiB 尝试: 1 已通过: 1 难度: 3 上传者: 标签>greedymath*800
## Description
Lord Omkar has permitted you to enter the Holy Church of Omkar! To test your worthiness, Omkar gives you a password which you must interpret!
A password is an array $a$ of $n$ positive integers. You apply the following operation to the array: pick any two adjacent numbers that are not equal to each other and replace them with their sum. Formally, choose an index $i$ such that $1 \leq i < n$ and $a_{i} \neq a_{i+1}$, delete both $a_i$ and $a_{i+1}$ from the array and put $a_{i}+a_{i+1}$ in their place.
For example, for array $[7, 4, 3, 7]$ you can choose $i = 2$ and the array will become $[7, 4+3, 7] = [7, 7, 7]$. Note that in this array you can't apply this operation anymore.
Notice that one operation will decrease the size of the password by $1$. What is the shortest possible length of the password after some number (possibly $0$) of operations?
Each test contains multiple test cases. The first line contains the number of test cases $t$ ($1 \le t \le 100$). Description of the test cases follows.
The first line of each test case contains an integer $n$ ($1 \leq n \leq 2 \cdot 10^5$) — the length of the password.
The second line of each test case contains $n$ integers $a_{1},a_{2},\dots,a_{n}$ ($1 \leq a_{i} \leq 10^9$) — the initial contents of your password.
The sum of $n$ over all test cases will not exceed $2 \cdot 10^5$.
For each password, print one integer: the shortest possible length of the password after some number of operations.
## Input
Each test contains multiple test cases. The first line contains the number of test cases $t$ ($1 \le t \le 100$). Description of the test cases follows.
The first line of each test case contains an integer $n$ ($1 \leq n \leq 2 \cdot 10^5$) — the length of the password.
The second line of each test case contains $n$ integers $a_{1},a_{2},\dots,a_{n}$ ($1 \leq a_{i} \leq 10^9$) — the initial contents of your password.
The sum of $n$ over all test cases will not exceed $2 \cdot 10^5$.
## Output
For each password, print one integer: the shortest possible length of the password after some number of operations.
## Samples
2
4
2 1 3 1
2
420 420
1
2
## Note
In the first test case, you can do the following to achieve a length of $1$:
Pick $i=2$ to get $[2, 4, 1]$
Pick $i=1$ to get $[6, 1]$
Pick $i=1$ to get $[7]$
In the second test case, you can't perform any operations because there is no valid $i$ that satisfies the requirements mentioned above. | 780 | 2,549 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3 | 3 | CC-MAIN-2022-27 | latest | en | 0.771729 |
http://www.nelson.com/mathfocus/grade5/quizzes/ch03/mf5_ch.3_lesson_4try.htm | 1,516,522,546,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084890394.46/warc/CC-MAIN-20180121080507-20180121100507-00703.warc.gz | 500,951,683 | 7,128 | Name: Lesson 4: Adding Decimals Using Mental Math
1.
Jasmine was training for a cross-country race. On the first day she ran 1.464 km. She increased this distance by 0.673 km on the second day. Use mental math to estimate how far she ran on the second day.
a. 2.000 km b. 2.200 km c. 2.100 km d. 2.300 km
2.
Use mental math to estimate how far Jasmine ran on both days combined.
a. 3.500 km b. 3.600 km c. 3.700 km d. 3.400 km
3.
David was also training for a cross-country race. On the first day he ran 1.312 km and on the second day he ran 1.793 km. Use mental math to estimate the total distance he ran on both days combined.
a. 3.100 km b. 3.000 km c. 3.200 km d. 3.300 km
4.
The diagram of a park is shown.
Use mental math to calculate the distance around the park. Which shows the estimate and the correct answer?
a. 7.700 km and 6.742 km c. 6.000 km and 6.742 km b. 5.700 km and 6.742 km d. 6.700 km and 6.742 km
5.
Briana has a new puppy, Murphy. His birth mass was 0.514 kg. Briana tracked the increase in Murphy’s mass over the next 4 months and recorded it in the table shown.
Month Increase in mass January 0.172 kg February 0.204 kg March 0.283 kg April 0.321 kg
Use mental math to calculate the total mass that Murphy gained over the 4 months. Which shows the estimate and the correct answer?
a. 1.100 kg and 0.980 kg c. 0.900 kg and 0.980 kg b. 1.200 kg and 0.980 kg d. 1.000 kg and 0.980 kg
6.
Use mental math to calculate Murphy’s mass at the end of April. Which shows the estimate and the correct answer?
a. 1.400 kg and 1.494 kg c. 1.500 kg and 1.494 kg b. 2.000 kg and 1.494 kg d. 1.600 kg and 1.494 kg
7.
Use mental math to calculate the sum. Which shows the estimate and the correct answer?
1.489 + 0.204
a. 1.600 and 1.693 c. 1.800 and 1.693 b. 1.700 and 1.693 d. 1.500 and 1.693
8.
Use mental math to determine the missing number.
3.82 + a = 4
a. 0.18 b. 0.2 c. 0.16 d. 0.08
9.
Use mental math to determine the missing number.
b + 9.025 = 10
a. 0.875 b. 1.075 c. 0.075 d. 0.975
10. | 703 | 2,041 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.59375 | 5 | CC-MAIN-2018-05 | latest | en | 0.788642 |
http://www.stata.com/statalist/archive/2004-04/msg00079.html | 1,438,105,594,000,000,000 | text/html | crawl-data/CC-MAIN-2015-32/segments/1438042982013.25/warc/CC-MAIN-20150728002302-00044-ip-10-236-191-2.ec2.internal.warc.gz | 717,738,576 | 2,798 | # st: comparison of "heckman" and "tobit" outputs
From "Takako Yuki" To Subject st: comparison of "heckman" and "tobit" outputs Date Mon, 5 Apr 2004 23:07:17 +0900
```Dear everyone:
I would like to know, with STATA commands, whether Tobit TYPE-I and
Type-II (i.e., heckman selection model) will provide the same estimation
results if in the Type-II the regression equation has the same set of
variables with the selection equation. For example, as described below,
I compared the means of predicted wages with "tobit" command for TYPE I
and "heckman" command for Type II. However, they provide the different
results.
I would appreciate if anyone can tell me: whether my understanding for
the relation between Tobit TYPE I and TYPE II is wrong, or whether the
following STATA commands are wrong for my purpose. If the latter is the
case, could you also tell me how to do?
.use http://stata-press.com/data/r8/womenwk
(a) For Type I
.gen wage1=0
.replace wage1=wage if wage~=.
.tobit wage1 educ age, ll(0)
. predict yhatwage1
. sum yhatwage1
Variable | Obs Mean Std. Dev. Min Max
-------------+--------------------------------------------------------
yhatwage1 | 2000 12.70266 6.876089 -.4998228 34.74677
(b) For Type II
.heckman wage educ age, select (educ age)
. predict yhatwage
. sum wage yhatwage
Variable | Obs Mean Std. Dev. Min Max
-------------+--------------------------------------------------------
wage | 1343 23.69217 6.305374 5.88497 45.80979
yhatwage | 2000 20.93415 3.953325 14.20508 33.01097
Thank you very much for any help.
Takako Yuki
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
``` | 532 | 1,847 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2015-32 | longest | en | 0.781981 |
https://www.goonhammer.com/battletech-main-gun-combatmath/ | 1,701,199,142,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679099942.90/warc/CC-MAIN-20231128183116-20231128213116-00301.warc.gz | 892,777,981 | 97,708 | # Battletech: Main Gun Combatmath
Welcome back to more Battletech. Inspired by Hammer of Math, I have decided to compare out the various weapons that are commonly used as primary weapons on mechs. These weapons all tend to be mid-long range guns, but have very little in common otherwise. They vary wildly in damage and form, and several of them have interesting special rules or variations. The cutoff date for this article was 3067, as I had to draw a line somewhere for the sake of my own sanity, and most of the Dark Ages weapons are just exaggerated versions of previous ones. The AC20 and all missile weapons will also not be here, because those have fundamentally different roles to these weapons in my mind. I am sure that I missed one of the many, many weapons in this game, and I am very sorry if I did. I tried to stick to the Advanced tech level, so no experimental weapons.
Main guns in Battletech are a bit of a can of worms, because several of them are just kind of terrible and are incredibly disappointing to have on a mech instead of a more exciting weapon. Every weapon has its defenders, and while I am going to back my opinion up with math, there are always other things to consider.
# Methodology
For the math in this article I calculated out the expected damage of a given weapon at a series of ranges, that being 1, 5, 10, 15, and 20 hexes. This doesn’t line up perfectly with the range brackets of every weapon, but it should provide a reasonable range of data points. Some weapons are better in the transitional zones between these values than others, and, for example, if a weapon had a short range of 9, it would be the same from 1 out to 9. To calculate the expected damage, we are assuming that the firer is a Skill 4 pilot in a mech who walked, and the target is in the open with a TMM of 2 due to movement, as this is a very common set of modifiers and gives us a nice, average 7 as our starting TMM at short range. We then take the damage the weapon deals and multiply it by the chance to hit, giving us our expected damage. If a weapon can not hit at a specific range it is counted as 0 expected damage. This gives a good number for comparisons, as it allows weapons with a hit bonus, such as Pulse Lasers, or a hit penalty, such as Heavy Lasers, to be accounted for in the math.
In addition, for Cluster Table weapons, I did my best to calculate out the average damage they would do. With the Ultra AC this came out to about 40% more expected damage than a non-Ultra one, and with RACs it came out to 4 projectiles hitting. Bear in mind that the Ultras and RACs can sometimes do significantly more or less damage than the average presented here.
After all of the math, Ill add up all of the damage values at each range bracket, and then divide by the BV of the weapon, and get a Damage per BV value that should hopefully represent the advantage that longer ranged weapons get. 0.083 Damage per BV is the average of the whole data set, so that is the number to beat.
# Weapons
I have decided to separate the table out into weapon families for going over, as the unsorted chart is a mess to look at and weapons can fairly easily be grouped. Ill mention other categories whenever it is relevant to make a point. In addition, some of these weapons have special rules or considerations, such as the Plasma Rifle’s heat damage or the Snub PPCs long short range, that make them a bit harder to reduce down to a raw damage calculation. Also, there is a breakpoint at 12 damage, any weapon that can deal 12 or more damage is capable of taking a Mech’s head clean off in 1 shot, meaning that there is a small chance each time you shoot that you one hit kill the target outright. Most weapons pay pretty dearly for that advantage, but it can be enough to boost a weapon up in power by a lot.
In addition, some weapons are explosive while others are not. In a world with CASE that isn’t a huge deal, but not every mech has Case. Generally, all ACs have explosive ammo, and all energy weapons do not explode. Gauss Rifles explode themselves but their ammo isn’t explosive. This is a huge advantage for energy weapons in general, and I have known players who write off Ballistic weapons entirely due to this.
Another interesting wrinkle is that Energy weapons can sink all their heat for “Free”, assuming it is the first one you have put on your mech and you have “Free” engine heat sinks left over. A PPC or Large Laser is a lot better as a weapon if you are not paying for all the heat sinks that they take to fire, so mechs that can bracket fire well or simply don’t carry enough weapons to use all of their engine heat sinks get fantastic use out of those weapons, which the BV ratio doesn’t reflect super well. They also are considerably better with Double Heat Sinks, with some weapons, such as the Heavy Large Laser, being total insanity to imagine using without them. Clan DHS are better, but even Inner Sphere ones are a huge upgrade and go a long way towards evening things out across the board.
If you are using Single Heat Sinks, or you have already used all of your “Free” engine double heat sinks, an Autocannon or Gauss Rifle becomes more tempting, as they don’t generate a lot of heat, and can even be added without any heat sinks, if you are willing to build a few heat a turn and ride your heat index. If we assume CASE and Double Heat Sinks exist and are being used, the balance between Ballistic weapons and Energy weapons is fairly good, with a few exceptions.
## AC2s
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
AC20.161.160.560.560.16370.070
LBX AC20.161.160.560.560.16420.062
Clan LBX AC20.161.161.160.560.56470.077
Ultra AC20.481.650.80.80.23560.071
Clan Ultra AC20.81.650.80.80.23620.069
Rotary AC24.644.642.240.6401180.103
Light AC21.161.160.560.160300.101
Starting out with the smallest and most piddly of the weapon types, AC2s are in my opinion mostly bad. The damage per BV is fairly low, with the Rotary AC2 and Light AC2 both being better than average. This is mostly due to the huge damage of the RAC, and the low BV and lack of minimum range on the Light AC2. These weapons do have a niche in having pretty extreme range on average, and are pretty good at shooting down Aircraft if you are playing with Aerospace rules. There are better options for both of those roles though. 2 Damage is a massive hill to climb and for the most part these weapons don’t do it. An interesting note for all the AC families is that the LBX autocannons can fire cluster shot, which is very good if you are trying to crit your opponent in a specific location as they generate more hit location rolls than any other weapon. Cluster Shot also has a to-hit bonus, which makes that crit seeking even easier.
## AC5s
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
AC50.852.91.40.40700.079
LBX AC 50.852.91.40.40.4830.072
Clan LBX AC50.852.91.41.40.4930.075
Ultra AC51.994.121.990.570.571120.083
Clan Ultra AC51.994.121.990.570.571220.076
Rotary AC511.611.65.61.602470.123
Light AC52.92.91.40.40620.123
The genesis of this article is an argument I had years ago with someone over whether the AC5 was a good weapon or not. I have never been a fan of it, because it feels like such a small amount of damage for such a heavy, large weapon. According to the math, it definitely shakes out to be worse than average, though not by as much as I personally feel it is. The RAC5 and Light AC5 continue the trend from the AC2 versions of being way, way better than the other AC5s, being tied for third best weapon in the whole set, with the RAC being capable of some truly horrific damage spikes from time to time.
The Ultra AC5 is amusingly exactly an average weapon, which checks out. AC5s do have slightly longer ranges than their closest competitors, but in my mind that is not a big enough advantage to make up for the small hit and mediocre Damage to BV ratio. Another big problem with AC5s is that they are really heavy for a weapon that does 5 damage, weighing 8 tons, 9 with ammo. This means that a mech really can’t have that many of them, and you spend a lot of BV on the chassis of the mech, its Armor, and its Engine, and putting an AC5 on it feels like a waste of the BV of the “Naked” mech compared to something like a Large Laser.
## PPCs
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
PPC1.75.82.80.801760.063
ER PPC5.85.82.80.80.82290.070
CERPPC (Praise Be)8.78.74.21.21.24120.058
Heavy PPC2.558.74.21.203170.053
Light PPC0.852.91.40.40880.063
Snub Nosed PPC5.85.82.240.401650.086
Plasma Rifle5.85.82.80.802100.072
This Dataset was an absolute flashbang to my soul. I have always been of the opinion that PPCs were some of the best weapons in the game from a pure damage standpoint, and that is blatantly untrue according to the data. The fact that the AC5 has a better damage to BV than any of the PPCs save the Snub Nose feels so wrong on such a fundamental level. PPCs are also incredibly heat intensive and thirsty for heat sinks, which can rapidly drive their weight up towards insanity depending on heat sink tech and whether you have engine heat sinking capacity left over from your other equipment. If you have engine heat sinks left over, the PPC will be better compared to if you have to put each and every heat sink on the mech to support it.
That all being said, PPCs do have several good traits over the AC5 that go a long way towards pulling them back towards better. For one, the Clan ER PPC and Heavy PPC are both Head Choppers and have a chance at instantly killing anything they shoot at, with the Clan PPC being one of the conventional “Best” weapons in the game, even with its huge BV cost. PPCs overall do a single 10 to 15 damage hit, which is powerful, as it can open up hit locations for crits a lot better than two 5 damage hits. Single powerful hits are generally more powerful than multiple small hits in Battletech, unless you are trying to crit a location that has already been opened up. These weapons also don’t run into the issue of lost BV from the mech’s chassis, as anything with a PPC can be expected to perform pretty well compared to an AC5. Personally I am a Marik at heart and prefer the Large Laser family of weapons to these, but the long ranges and large hits are very strong.
The Snub PPC needs a specific call out, as it has an incredibly long short range of 9 hexes, but falls off hard at longer ranges, having small medium and large range brackets, as well as literally doing less damage at longer ranges. If it is on a fast mech that can stay up close, it can be one of the most efficient weapons on the list.
The Plasma Rifle is another weirdo weapon that got stuck in the PPC category mostly because of damage, as it is an energy weapon that requires ammo. Its ammo is not explosive though. It also can inflict heat on enemy mechs, which, if combod with Inferno SRMs, can let you overheat enemy mechs and make them easy pickings, which is incredibly interesting and can be very fun.
## Large Lasers
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
Large Laser4.644.642.240.6401230.099
ER Large Laser4.644.642.240.6401630.075
Clan ER LL5.85.82.82.80.82480.073
Heavy Large Laser6.726.722.720.4802440.068
Large Pulse Laser7.475.222.52001190.128
Clan LPL (Praise Be)8.38.35.82.82.82650.106
Large Lasers are my favorite category of weapon personally, and from a BV perspective that is a justified opinion. They do have the same issue as PPCs where their weight can get out of control after you have used all your engine heat sinks, but to a lesser extent for the most part. I love the basic Large Laser to pieces, and it is, indeed, better than an AC5 from an expected damage standpoint everywhere but 16-18 hexes, where the Large Laser gets outranged. In my mind this is a reasonable trade for significantly better damage, no minimum range, and a better ratio.
The Large Pulse Laser is, according to the math, the best weapon in the data set in terms of BV to Damage dealt, as its hit bonus allows it to boost it’s expected damage up by a lot compared to other weapons. The Clan version is, in my mind, better even though it has a worse ratio, as the range on the basic LPL is pretty pathetic by comparison. The Clan LPL has a pretty good damage to BV ratio, and a pretty good range, outranging most other weapons and delivering a nasty 10 damage hit with a gigantic hit bonus. In my opinion it is the overall “Best” weapon in the game, though arguments can be made for a variety of other weapons.
The Heavy Large Laser here is an interesting case, as it is a head chopper, but it has a to hit penalty, so you deliver hits less often than with other weapons. In addition, the heat that comes out of a Heavy Large Laser is some of the most extreme in the entire game, a humongous 18 points, which makes its weight spiral into the stratosphere once you account for heat sinks. The ER Large Laser is a somewhat bad upgrade to the basic Large Laser, as the basic Large Laser is one of the more efficient weapons in the game, getting basically nothing but 4 hexes of extra range at the cost of 50% more heat and a good chunk of extra BV. It is longer range, but in my mind that trade is not worth it. The Clan Variant does more damage and has 10 hexes of extra range, making it one of the premier sniping weapons in the game. You do pay for it though, with it having a pretty bad damage to BV ratio, but the extra range here matters in a pretty huge way, and it is still better than most of the PPCs from a BV standpoint while doing nearly the same amount of damage.
## Gauss
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
Gauss Rifle4.28.74.24.21.23200.070
Clan Gauss Rifle4.28.74.24.21.23200.070
Light Gauss Rifle1.364.642.242.240.641590.070
Heavy Gauss214.55.60.80.83460.068
Gauss Weapons are the most efficient headchoppers in the game, and are generally great weapons. They have good range and horrifying damage across the board. They are quite heavy no matter what, and unlike energy weapons, they don’t get any lighter from having Double Heat Sinks. The Light Gauss is not a headchopper, which is a bummer, and overall is by far the worst Gauss weapon, only really having a niche on some IS light mechs, as it does 7 less damage than a Gauss rifle and weighs the same as a Clan Gauss Rifle.
The Heavy Gauss Rifle has the same damage falloff as the Snub Nosed PPC, and has a minimum range which massively hurts it, but it puts the hurt on things like no other weapon in the game can if used perfectly. I generally favor more consistent weapons and would rate the normal Gauss weapons a lot higher, but there is an argument to be made here. According to the math, if you want to one shot mechs from extreme ranges, these are the best weapons to do that with.
## AC10s
Weapon1 Hex5 Hexes10 Hexes15 Hexes20 HexesBVDamage Per BV
AC105.85.82.80.801230.124
LBX AC105.85.82.80.801480.103
Clan LBX AC105.85.82.80.801480.103
Ultra AC108.248.243.91.1402100.102
Clan Ultra AC108.248.243.91.1402100.102
Oh God oh no House Davion was right the whole time, the AC 10 really is the ultimate weapon. This one shocked me a bit aswell, as I was completely unprepared for AC10s to be, on average, the best weapon category in the game. The basic AC10 is the second best weapon in this dataset at converting BV into Damage, and all the other variants trade a worse (But still much better than average) ratio for good secondary/special effects. The Ultra AC10 has the chance to just rip it and do 20 damage, and the LBXs are excellent at killing infantry, helicopters, and planes with their cluster shot. LBX and Ultra AC10s also have the same range as an AC5 or PPC, putting them in that longer range belt, which is an advantage. The Base AC/10 is also probably the best weapon to give the Special Autocannon Ammo to, which is really fun and you should use it.
# Takeaways
Overall the general takeaway from this article is that Small Bore Autocannons suck in their base state, with only the Light versions and Rotary versions being good, and they could use a BV reduction at some point if Catalyst is interested in adjusting the BV algorithm. The AC5 is almost a median weapon from a Damage to BV standpoint, and it has good range, but the small hit and opportunity cost of spending 10 tons and a mech chassis to do 5 damage in my opinion holds it back from being decent. Other takeaways are that PPCs have a kind of bad BV to Damage ratio, and are heavy and heat-intensive compared to the AC-10, which is apparently a godly weapon we should all kneel before and pray for salvation.
I was shocked by the PPCs not doing well here, as they had kind of been in the back of my head for the last decade as the “Best” weapons in the game. Large Lasers are very solid and reliable, with basically nothing in that category being super disappointing, my distaste for the IS ER Large Laser aside. If you want to Head Chop someone, you are best off going for a Gauss Rifle rather than a Clan ER PPC, as they cost less BV for similar damage. The ER PPC might be better if it is your first weapon and you can sink it completely with your engine heat sinks as it will be much lighter than a Gauss, and does have the advantage of not blowing up and having unlimited ammo.
My personal picks for the “Best” weapon in the game after looking at all of this would be either of the Large Pulse Lasers, and the AC10s. None of them can head chop, but the AC10 is very cheap and efficient, and the Large Pulse Lasers are super consistent due to their hit bonus cutting through negative hit modifiers from terrain and movement. I personally don’t like counting on Headchopping as a tactic, as in my opinion it doesn’t happen often enough to be worth sacrificing more efficient and reliable damage for, but you can spam enough headchopping weapons to make it work.
None of this math really matters though, because I’ll still get shot in the Center Torso by a “Weak” and “Pathetic” AC2 on the first turn and get TAC’d to the Gyro, but we can at least try to get some optimization out of this horribly unoptimized game. | 4,431 | 18,152 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.171875 | 3 | CC-MAIN-2023-50 | latest | en | 0.976066 |
https://prezi.com/zefwe007yt3b/gear-panel-discussion-notes/ | 1,521,935,660,000,000,000 | text/html | crawl-data/CC-MAIN-2018-13/segments/1521257651465.90/warc/CC-MAIN-20180324225928-20180325005928-00265.warc.gz | 712,503,483 | 24,078 | Present Remotely
Send the link below via email or IM
• Invited audience members will follow you as you navigate and present
• People invited to a presentation do not need a Prezi account
• This link expires 10 minutes after you close the presentation
Do you really want to delete this prezi?
Neither you, nor the coeditors you shared it with will be able to recover it again.
GEAR Panel Discussion Notes
No description
by
on 28 November 2016
Report abuse
Transcript of GEAR Panel Discussion Notes
GEAR RETREAT PANEL DISCUSSION
First GEAR Retreat, University of Illinois
August 2012
Geometric Structures
Special Representations
Dynamics on Moduli Spaces
3-manifolds
Higgs bundles
Danciger
GEAR Panel Discussion
Take are you're in, find area you're interested in, connect them. I work in geometric structures, rich and well appreciated connections between that and all but Higgs bundles, want to explore these connections.
Specifically:
(1) What are geometric structures corresponding to the Hitchin component- build on Guichard-Wienhard
(2) What is geometric meaning of Anosov representations in complex Lie groups?
(3) Understand better the Non-Abelian Hodge theorem.
(4) Understand Hitchin's Riemannian metric on Teichmuller space
(5) Special loci of character variety leads to special loci of Higgs bundles. Can understand this for real reps. What about other reps?
Dumas
Best math comes from interactions of two fields
Personally, hyperbolic 3-manifolds and special representations
Apply `low-tech' ideas from hyperbolic 3-manifolds to special representations
Bring ideas from dynamics into hyperbolic 3-manifolds (Minsky-Gelander)
Find really smart co-authors, ride coattails.
CANARY
Coming from geometric structures and 3-manifolds
All are geometry
Associating geometric structures to other objects.
Coming together and discussing leads to progress
Figure out what other categories can do for your area. In my case, 3-manifolds.
Low-d topology/hyp geometry and relation to Heegard-Floer and gauge theory.
Eg: every integer homology 3-sphere that's not S^3 has a representation into SU(2) or SL(2, R).
Look at character variety, action.
Torsion invariants
Dunfield
Again, being helped by others, trying to help other fields
coming from Higgs bunldes, one particular area is special representations, in particular representations into groups of Hermitian type, maximal Toledo type.
What is the role of Hitchin equations? What do they say about geometric structures?
What do subvarieties of moduli spaces of Higgs Bundles (perhaps with geometric meaning) correspond to in special reps.
Sp(4, R) program- potential WP-metric on moduli space of Sp(4, R) reps, in particular components where image is Zariski dense.
Fock-Goncharov: higher mapping class group- what is it geometrically?
Guichard-Wienhard- is there an analogy with Anosov reps to the 3-manifold story of hyperbolic manifolds and the conformal structure on the boundary.
Labourie
Lukyanenko: what physics?
Labourie: General background, for example, what is relationship of Higgs bundles to physics? Perhaps a summer school on TQFT, etc.?
Bradlow: what opportunities do you see, Andy?
Neitzke: My work started by trying to understand math, in particular, Kontsevich-Soibelman. Continue to interact. Kevin Costello has produced great math/physics.
Labourie: use physics as a `sieve' for good questions.
Dumas: what prevents us from talking to physicists?
Me; different languages
Wienhard: we're not the usual group of mathematician who talks to physicists.
Garcia-Prada: Higgs bundles arose from physics, from instantons, so it's already benefited a lot from physics. Physics is a source of inspiration, good problems.
Neitzke: more physicists would be interested in attending.
Danciger: how do we find them?
Bradlow: never been at a meeting with dynamicists/low-d topologists, how do we break down more barriers? Which talks fired you up?
Me: Nietzke's talk connected, through physics?
Bradlow: What fills in the middle of the diagram? quadratic differentials? What do quadratic differentials mean to me?
Kerckhoff: how can we set up meeting to encourage more interactions? One topic/day is good for first meeting, but how to get more interactions?
Canary: there's plenty of bumping and self-bumping, the circles are not really disjoint.
Wentworth (via email): Good to have survey talks on topics, but might be better to focus on the edges instead of vertices.
Wienhard: again, mixing topics/changing structure for retreat may be useful.
Wright: for junior speakers, one topic/day was very useful
Audience Questions:
Resources at `edge, not nodes'
Practical Matters
Current programs:
Short term visits
exchanges
workshops
Lots of resources for young people, visits to other places
Can GEAR help students go on long-term visits/eliminate teaching?
Wienhard: semesters are shifted, so can arrange visits during breaks
Create a GEAR overflow 'Forum' for questions/references/jobs, managed by grad student/postdoc
Labourie
Concentrate on `edges' of subjects, rather than nodes.
Focus courses by having two experts in related subjects give a joint course, for example.
Small mini-workshops (8-10 people), young people involved, but getting into the guts of something, not necessarily motivated by an immediate collaboration.
Sp(4)-lab, for example!
Wienhard: exchanges can be people coming together to meet at a different (non-GEAR node) place. Small workshops can be funded via applications, can be very speculative, can be area-based, can be geography based. Applications can come from all levels.
Dunfield
Used GEAR resources, again not sure what, maybe visits/exchanges.
Apply and use money!
Can be intimidating to ask for money to faculty directly, use GEAR!
Wienhard/Bradlow: can't buy out teaching, but can provide stipend to postdoc to visit other places
Danciger
I've used a lot, feel like a grad student again.
use the money!
Senior people like to talk about their work.
Canary
Work against the tendency to cluster on the things you're doing.
Bring people from different areas together in informal settings.
Make it easy to find out who's doing what, adding a paragraph to each directory entry- souped up member's list, with links to websites.
Make it like a department faculty list.
Keep sending emails about funding opportunities.
Dumas:
If a department has a GEAR member, you are at a GEAR node, and any student/postdoc there is also eligible for GEAR funding.
If you are a faculty, you can be nominated by a current GEAR member.
Wienhard
Roger: how long in advance do we need to plan events?
Audience Questions:
Sur-Wei Fu: keep advertising GEAR on a regular basis. | 1,557 | 6,706 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2018-13 | latest | en | 0.888399 |
http://www.hometheaterhifi.com/technical-articles/technical-articles-and-editorials/understanding-contrast-ratios-in-video-display-devices/part-iv.html | 1,441,293,186,000,000,000 | text/html | crawl-data/CC-MAIN-2015-35/segments/1440645315643.73/warc/CC-MAIN-20150827031515-00308-ip-10-171-96-226.ec2.internal.warc.gz | 505,797,227 | 14,717 | # Secrets Q & A
## Understanding Contrast Ratios in Video Display Devices
ARTICLE INDEX
How Much Simultaneous CR Can We See at Once?
I don't have an absolute answer to this one, but I know the number isn't as low as I've seen some claim. I have seen several people say that projectors don't need to be able to do more than 100:1 CR at once since humans can't see more than that. I don't know where that number came from, and I'm somewhat baffled since I can easily prove that people can see transitions much higher than that to anybody with close to normal vision in less than 5 minutes in my theater, but for some reason it seems to persist. If we could only see transitions with a 100:1 CR or less range at once it would make projector design easier, but that isn't the case. When bright and dark objects are very close in proximity, our ability to discern levels can be low, but with images on projection screens, levels of very different intensities do not have to be right next to each other.
I have talked with one company who works on High Dynamic Range displays that can do way more than 100:1 at once and can been seen by people viewing their displays, and they don't know where that number came from either. I was told that vision specialists have said that the human eye can perceive somewhere in the range 100,000:1 without any adaptation of the iris. I haven't devised any test for measuring the most CR I can see this way since I have trouble measuring things close to that high with any measuring equipment I have, but I know that transitions that are beyond 2000:1 simultaneous CR from the brightest objects can be easy to see. This doesn't mean that I can see beyond 2000:1 in every image, but there are conditions where I can see beyond that easily. One test I've run is using an image I have of a skeleton on a "black" background. The skeleton takes up only a small part of the image, so the simultaneous CR is much higher than the ANSI CR from my projector and room, and lower than the On/Off CR. By zooming the image from my projector down to smaller than the screen area, I can get transitions from the projected "black" to an area that is lit from washout effects and then from that to the screen border. In this test, it is easy to see some detail in the whites on the skeleton, then the projected "black", then the area beyond the projected border, then the screen border. I cannot see the difference between the screen border and the black velvet behind it though. While I can do measurements to find that the CR from the white on the skeleton to the area inside the screen border is more than a couple thousand to one in my mostly black velvet room, I don't have anything that can measure the light level off the screen border accurately.
For a while I thought that the 100:1 simultaneous CR limit floating around came from confusion with the Contrast Sensitivity Function, but I'm not sure if that is the main source of this. In a book by Poynton, he says that humans can discern different luminances across about a 1000:1 range at a particular state of adaptation, but there are references on the Internet to him claiming a 100:1 limit. This may have ultimately come from his discussion of CSF and threshold of 1.01:1 though.
In any case, I will explain why the CSF does not tell us the upper limit for how much simultaneous CR we can see. The main reason is that CSF doesn't measure the upper limit, it is a measure of the lower limit, or how we perceive low contrast cycles, not high contrast cycles. The Contrast Sensitivity is the inverse or reciprocal of the Contrast Threshold. That is, as the Contrast Threshold gets smaller (like off-white on white), the Contrast Sensitivity goes up. A Contrast Sensitivity score of about 100:1 means that a person started to be able to differentiate levels when they got to around 1% or CR of 1.01:1 (although those aren't exact), and as the separations got larger than 1.01:1, they would continue to be able to see them. Because some people miss the inverse or reciprocal part here, they may falsely assume that higher Contrast Sensitivity is higher CR when the opposite is true. Or that Contrast Sensitivity going down as our eyes age means we would want less CR, when in fact lower Contrast Sensitivity scores mean that we need more differentiation (or more CR) between levels to be able to see them as our eyes age. However, we also might want brighter whites as we age and our eye's ability to pick up light decreases.
Readers are welcome to try their own experiments putting black posterboard over part of their screens with images of some white levels and a lot of video black or by using their hands or something else to create shadows in those kinds of images and see if they have any trouble seeing those dark transitions and white transitions at the same time. There is a spot in Sin City in chapter 20 at 1:28:45 on the regular DVD that can be useful for this, although it will probably require lowering the brightness setting a few notches since the background isn't encoded completely as video black. Here is a shot of that scene:
A projector with just 100:1 ANSI CR and 1000:1 On/Off CR could do much more than 100:1(and less than 1000:1) simultaneous CR in a scene like that, and most of the projectors discussed here can beat both of those. A white room could hurt the simultaneous CR off the screen, but the brightest part of this scene only takes up part of the image, and part of that could be blocked with dark material (leaving some white with detail) while testing to reduce the effect of reflections around the room, if desired. If the simultaneous CR off the screen is too high, a person might not see the difference between the screen and black poster-board or a shadow created in the darkest parts of the image, but I don't know of any projectors which would create that problem at the moment. Note that shadows with setups which have light coming from more than source (like dual projectors or projectors with three lenses) are not the same as shadows created from a single light source.
Why On/Off CR Matters More Than Absolute Black Level From a Projector
A common question I see is how projectors compare for absolute black level. I steer people toward On/Off CR even if what they ultimately care about is absolute black level, because front projectors cannot give you images on their own. They require a surface. Without a surface there are no images to see. And the user gets to determine what surface is used to a large degree, along with whether they will use any kind of filtering (like a neutral density filter) on the projector, although these mostly apply to digital projectors and not to CRTs. Until a surface (usually a screen) and filtering (or not) are chosen, there is no absolute black level in ft-lamberts or cd/m2 (values for light coming off the screen), but rather only in lumens (values for light going toward the screen). And if filtering and screens are chosen such that two setups produce the same ft-lamberts for white, then the one with the higher On/Off CR will have the lower absolute black level. The math just works out that way with the following equations which are all forms of the same thing:
(On/Off CR) = (white level) / (black level)
(white level) = (On/Off CR) * (black level)
(black level) = (white level) / (On/Off CR)
With any two of the above, we can determine the third, and the last one shows that for the same white level, a higher On/Off CR means a lower black level.
As an example, let's consider two projectors with the following properties:
Projector A: 1000 lumens and 2000:1 On/Off CR
Projector B: 500 lumens and 1500:1 On/Off CR
If we just look at the absolute black levels we get:
Projector A: 0.5 lumens
Projector B: 0.33 lumens
We can see that projector B has the better absolute black level if we just consider what is coming out of the projector. However, if we look at a complete solution which includes a screen (and possibly a filter), the results can change. Projector A is fairly bright, so let's put it on a screen with a gain of 0.5. And we'll pick a size which results in 15 ft-lamberts for white. Projector B isn't bright enough to use with that screen and have the whites be bright enough if somebody wants in the 15 ft-lamberts range, so for that one we will use a 1.0 gain screen. The results here are the following:
Projector A: 15 ft-lamberts for white and 0.0075 ft-lamberts for absolute black
Projector B: 15 ft-lamberts for white and 0.01 ft-lamberts for absolute black
As can be seen, the projector with the higher On/Off CR ended up with the lower absolute black level once things were set up to the same white levels, even though it had the brighter black level coming out of the projector lens. This is also not even counting that the darker screen that could be used with projector A will help kill reflections off the walls and help the ANSI CR. Even if a person was happy with 7.5 ft-lamberts for white and would have used the 0.5 gain screen for projector B, they could have put a 2x neutral density filter on projector A (if it was a single lens projector like a digital) for the same white level and once again that lower absolute black level off the screen.
People do not have infinite choices for screen gain among commercial screens out there, but even so, I would encourage giving the On/Off CR more weight than the absolute black level out of the projector lens (or lenses). | 2,137 | 9,485 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.796875 | 3 | CC-MAIN-2015-35 | latest | en | 0.979851 |
https://dsp.stackexchange.com/questions/73078/how-do-i-perform-2d-fourier-domain-multiplication-if-the-filter-mask-doesnt-mat?noredirect=1 | 1,638,031,095,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00241.warc.gz | 278,437,213 | 33,377 | # How do I perform 2D Fourier domain multiplication if the filter mask doesn't match the image size?
Let's say I have an image that is 512 x 512 pixels. I've been tasked with creating two ideal half-band low-pass filters that will filter the image. The first filter is 8 x 8, and the second one is 16 x 16.
Each filter will represent the same frequency range (0 to Nyquist) and possess the same shape (half-band ideal filter). The bigger filter will feature better frequency resolution resulting in improved roll-off at the cut-off frequency.
However, let's say I wanted to do the filtering in the Fourier domain. How would multiplication work if the filters and images are different sizes?
It seems like the answer is to increase the size of the filter. Two options come to mind:
• Zero pad the filter's fourier domain. However, this would result in shifting the cut-off frequency such that is no longer a half-band filter.
• Interpolate the filter fourier domain. Essentially stretch out the pass-region by adding more "ones" to alter the frequency resolution. However, this would seem to alter the roll-off of the filter. Essentially the 8x8 and 16x16 filters would become the same.
So how exactly should multiplication occur if the filters don't match the input size? It seems like there are flaws in both zero padding and interpolation.
• Does this answer your question? Implementing Convolution in Frequency Domain? Feb 7 at 20:58 | 313 | 1,443 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.71875 | 3 | CC-MAIN-2021-49 | latest | en | 0.94859 |
https://www.lotterypost.com/thread/247020/270 | 1,495,875,694,000,000,000 | text/html | crawl-data/CC-MAIN-2017-22/segments/1495463608877.60/warc/CC-MAIN-20170527075209-20170527095209-00128.warc.gz | 1,137,182,143 | 22,168 | Welcome Guest
You last visited May 27, 2017, 3:54 am
All times shown are
Eastern Time (GMT-5:00)
# Do some number combinations have better odds?
Topic closed. 5280 replies. Last post 4 years ago by rdgrnr.
Page 270 of 353
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 11:09 am - IP Logged
A couple of sets to play for 01-30-2012 Powerball drawing.
Set 1: 04-12-13-20-22-24-27-33-37-39-47-50-52-57-58
Set 2: 03-04-07-09-11-12-13-14-15-17-18-21-22-23-25-26-28-29-31-33-34-37-42-43-44-45-48-52-53-54-55-58-59
Jimmy
Nice work jimjwright
Here are mine for tonight.......
01 02 03 06 07 08 09 11 13 14 16 17 18 19 20 21 22 23 24 25 26 27 29 30 32 35 36 38 39 40 44 45 46 49 51 54 55 56 59
bonus ball 12
New Jersey
United States
Member #99032
October 18, 2010
1439 Posts
Offline
Posted: January 30, 2013, 12:20 pm - IP Logged
1) YES
2) No
Wow.... You answered with yes and no, but neither of these were yes or no questions.....
That's so dumb it's ridiculous.
New Jersey
United States
Member #99032
October 18, 2010
1439 Posts
Offline
Posted: January 30, 2013, 12:34 pm - IP Logged
"Roulette pays \$35 to \$1 so the house edge on any winning 35 to 1 bet is 5.26% and the house should win 5.26% of all bets. If player bets and loses \$100, the house doesn't charge then an extra \$5.26 so the edge is only applied when paying off winning bets. If a roulette table has a volume of \$1 million, the house should keep \$52,600 and the winners should collect \$947,400."
Your logic holds for roulette, but not many of the other casino games. The house edge is applied to all bets placed, not just winning ones.
"If there is an advantage, it's thoroughly understand how to play the game. I won't stay very long at a Blackjack table if the other players are not using basic hitting strategies unless I'm benefiting from their poor play. You might think it's a better game until you see many other players effecting your outcomes."
In Blackjack, the other players decisions do not affect you. There play is just as likely to hurt or help you, and will even out. Mathemetically, it doesn't matter to you whether they play basic strategy or not. If you can't enjoy the game with people playing badly, that's one thing, but know that the math doesn't change.
"Today poker is all about no limit tourneys and the best players in the world can't overcome a run of bad cards. Even the cash games are no limit. How can you grind it out when the other players constantly take you all in?"
It's actually pretty simple. Don't put down your whole bankroll, just a piece of it. Use proper bankroll management, and play strong poker against weaker players. That's how you grind out any advantage, in most games you place bets using the Kelly Criterion (theoretically you could with poker, but you can't actually quantify what you need to know) and in Poker you generally just play with X buy ins, depending on how much risk you are willing to take on.
"You can get the advantage more often playing poker if your good enough, but you still won't win every time. Gambling is about timing and looking for an advantage is the idea of this topic. You can calculate the odds of pocket aces winning hands, but knowing you had the best odds means nothing when the other guy is raking in the pot."
It absolutely means something. If I got in all in with Aces preflop, I'm over an 80% favorite to win the pot. That's a great bet! And to say that it means nothing that you were able to do that if the other player wins is very short sighted. In Poker, you're going to have bad beats. If you can list 100 bad beats and not so many times when other players outplayed you then you're in good shape. The more often you get your money in good, the more often you are likely to win.
Not to mention that there's a million other things that can happen. In most cases, people don't go all in pre flop and you have a couple more rounds of betting. A lot of information can be extracted during the flop and turn, which could lead you to fold what WAS the best hand preflop, and has become the 2nd best hand. You can save A LOT of money by making information bets and making disciplined folds. And in Poker, each dollar you saved is basically the same as a dollar won.
Think about it, if I was dealt AA and you were dealt KK, and I got you all in, that's good for me. If the situation's reversed, and I'm able to get away from KK and fold it, then whose that good for? Obviously you won a small pot, but I was able to get away from a situtation where I'm about 80% to lose my stack. I'd say that folding KK to AA is a good play for KK. (Obviously only if he folds it to a really obvious tell, though. Otherwise he's throwing away KK to any of the premiuum hands, and that's not profitable.) This situation happened to me ONCE and it's the only time I've ever folded KK preflop. The other player threw up AA, and everyone else at the table was amazed that I folded.
United States
Member #93947
July 10, 2010
2180 Posts
Offline
Posted: January 30, 2013, 1:29 pm - IP Logged
Wheel: Pick 5 Abbreviated 2 if 5 of 39
Tickets: 25
Description: Minimum 2-number match, if 5 numbers drawn fall within your set of 39 numbers.
Input: 1, 2, 3, 6, 7, 8, 9, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30, 32, 35, 36, 38, 39, 40, 44, 45, 46, 49, 51, 54, 55, 56, 59
1. 01-02-19-26-38
2. 01-02-24-36-51
3. 01-06-07-11-44
4. 01-07-16-17-46
5. 01-09-17-20-25
6. 01-11-13-20-30
7. 01-14-19-22-36
8. 02-03-06-16-22
9. 02-03-09-14-46
10. 03-07-13-49-54
11. 03-17-46-56-59
12. 06-14-19-24-51
13. 06-14-26-36-38
14. 07-09-16-56-59
15. 08-11-29-32-35
16. 08-13-25-30-44
17. 08-20-44-49-54
18. 11-25-30-49-54
19. 18-21-32-39-55
20. 18-23-35-39-45
21. 18-27-29-39-40
22. 21-23-29-45-55
23. 21-27-35-40-55
24. 22-24-26-38-51
25. 23-27-32-40-45
Ronnie316,
As promised, here is the winnings report for your 25 picks for Saturday's Powerball.
01/26/2013 03-22-26-41-49 (18).
You were quick to point out that 4 of the drawn numbers were in your set of 39. Were you trying to imply that you got a 4/5 hit? Unfortunately, based on your 25 selections, this was not the case.
Here is a tabulation of the results of buying 1 each of these sets for a total cost of \$50.00. You didn't mention Powerplay or any options for the Powerball, so I assumed no action there.
Selection Winnings
01. 01-02-19-26-38 \$0
02. 01-02-24-36-51 \$0
03. 01-06-07-11-44 \$0
04. 01-07-16-17-46 \$0
05. 01-09-17-20-25 \$0
06. 01-11-13-20-30 \$0
07. 01-14-19-22-36 \$0
08. 02-03-06-16-22 \$0
09. 02-03-09-14-46 \$0
10. 03-07-13-49-54 \$0
11. 03-17-46-56-59 \$0
12. 06-14-19-24-51 \$0
13. 06-14-26-36-38 \$0
14. 07-09-16-56-59 \$0
15. 08-11-29-32-35 \$0
16. 08-13-25-30-44 \$0
17. 08-20-44-49-54 \$0
18. 11-25-30-49-54 \$0
19. 18-21-32-39-55 \$0
20. 18-23-35-39-45 \$0
21. 18-27-29-39-40 \$0
22. 21-23-29-45-55 \$0
23. 21-27-35-40-55 \$0
24. 22-24-26-38-51 \$0
25. 23-27-32-40-45 \$0
Total Winnings: \$ 0.00
Cost: \$50.00
Loss: (\$50.00)
Of course, all this proves is that finding the winning numbers among an arbitrary set of 39 does not imply there will be any gain from playing a small subset of combinations of 5 of them
--Jimmy4164
Kentucky
United States
Member #32652
February 14, 2006
7482 Posts
Offline
Posted: January 30, 2013, 2:49 pm - IP Logged
You call it negativity, I call it reality.
Here's the thing: That piece of information gets us no closer to determining what those groups of 28 numbers are. If the answer to the question "do some number combinations have better odds?" is "yes", isn't the logical follow-up question "which number combinations have better odds?" Ronnie seems to get real mad whenever anyone asks that question, though. Why is that? Is it because he's afraid to admit that he has no flippin' idea? And if we're not supposed to discuss ways to narrow down the number pool, what's the point of continuing this thread? At that point it becomes an expanded version of Maddog's Challenge in which Ronnie beats his drum if he hits 3 (or more) out of 5 and taunts the people he doesn't like if they match fewer numbers. Utterly useless.
Besides, you'll notice that the post I was objecting to was one in which Ronnie proclaimed that x1kosmic's ONE line has "better odds" (than what?) due to the concept of "positive energy flow". That, in my opinion, is proof that he has given up on finding a mathematically-sound answer -- assuming that was EVER his objective, which I doubt it was -- and has gone off the rails into conjecture. Furthermore, given his interactions in this thread specifically and on Lottery Post as a whole, it is fairly obvious that he's not actually serious about any of this, that this thread is just an exercise in inflating his ego.
It's a shame because if anyone were to come in here looking for useful information on how to get "better odds", they'd walk away disappointed, wondering why the guy who started the thread has such a chip on his shoulder and doesn't offer any insights other than "intuition" and "positive energy flow".
"You call it negativity, I call it reality."
With MM, 97.5% of all the \$1 tickets will win nothing and in about 80% of the drawings, 100% of the tickets will not win the jackpot. That's the reality yet millions of people that know the reality play every drawing. As the jackpots grow, millions more people unrealistically begin playing the game.
If we were having a general discussion and the subject of playing lottery games came up, the consensus would probably be it's unrealistic to believe you can win the MM jackpot. However the discussions on this site are by people who already understand it's unrealistic, but are still willing to take the risk. In most cases the risk is about \$300 a year to get over 300 chances of winning a life changing jackpot. I just don't see any reason other than negativity to tell people what they already know or should know.
"If the answer to the question "do some number combinations have better odds?" is "yes", isn't the logical follow-up question "which number combinations have better odds?"
That's a fair question, but it's still based on the fact it's possible. And you're asking that question to people who already know they only have a 1 in 39 chance of giving you the correct answer.
"Ronnie seems to get real mad whenever anyone asks that question, though. Why is that?"
My best guess, it's out of frustration because he already did something statistical improbable and people are demanding he duplicates it.
"It's a shame because if anyone were to come in here looking for useful information on how to get "better odds"
We've already discussed the title of this thread is some what deceptive, but the discussion has evolved into looking for more effective playing strategies. Considering the fact the players already weighed the risk and reward and the playing strategies discussed here don't increase the odds against, where is the useless information?
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 2:51 pm - IP Logged
Wow.... You answered with yes and no, but neither of these were yes or no questions.....
That's so dumb it's ridiculous.
That's so dumb it's ridiculous.
Like the lottery, boney??
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 3:09 pm - IP Logged
"You call it negativity, I call it reality."
With MM, 97.5% of all the \$1 tickets will win nothing and in about 80% of the drawings, 100% of the tickets will not win the jackpot. That's the reality yet millions of people that know the reality play every drawing. As the jackpots grow, millions more people unrealistically begin playing the game.
If we were having a general discussion and the subject of playing lottery games came up, the consensus would probably be it's unrealistic to believe you can win the MM jackpot. However the discussions on this site are by people who already understand it's unrealistic, but are still willing to take the risk. In most cases the risk is about \$300 a year to get over 300 chances of winning a life changing jackpot. I just don't see any reason other than negativity to tell people what they already know or should know.
"If the answer to the question "do some number combinations have better odds?" is "yes", isn't the logical follow-up question "which number combinations have better odds?"
That's a fair question, but it's still based on the fact it's possible. And you're asking that question to people who already know they only have a 1 in 39 chance of giving you the correct answer.
"Ronnie seems to get real mad whenever anyone asks that question, though. Why is that?"
My best guess, it's out of frustration because he already did something statistical improbable and people are demanding he duplicates it.
"It's a shame because if anyone were to come in here looking for useful information on how to get "better odds"
We've already discussed the title of this thread is some what deceptive, but the discussion has evolved into looking for more effective playing strategies. Considering the fact the players already weighed the risk and reward and the playing strategies discussed here don't increase the odds against, where is the useless information?
We've already discussed the title of this thread is some what deceptive
Excellent observation Stack,
but it is only deceptive in the sense that different people interpret the question in different ways.
The Three Stooges are obsessing over "Which number combinations have better odds"?
While the rest of us discuss "Who or what produces number combinations that outperform the odds"?
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 3:13 pm - IP Logged
In others words, we focus on creative ways to get better odds and "beat" the lottery.....
While the Stooges focus on the lottery as......... "That's so dumb it's ridiculous."
New Jersey
United States
Member #99032
October 18, 2010
1439 Posts
Offline
Posted: January 30, 2013, 3:58 pm - IP Logged
In others words, we focus on creative ways to get better odds and "beat" the lottery.....
While the Stooges focus on the lottery as......... "That's so dumb it's ridiculous."
WOWWW
Way to take my words completely out of context. I said your response to Jimmy was dumb, and you extoplated that.
I've never said the lottery is dumb.
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 4:21 pm - IP Logged
WOWWW
Way to take my words completely out of context. I said your response to Jimmy was dumb, and you extoplated that.
I've never said the lottery is dumb.
I never said you said it...... In general terms it is the way you and your click act and talk about the lottery and the way some of us chose to approach it. Your curly now, so just get in lockstep with larry and moe.
New Jersey
United States
Member #99032
October 18, 2010
1439 Posts
Offline
Posted: January 30, 2013, 4:37 pm - IP Logged
"My best guess, it's out of frustration because he already did something statistical improbable and people are demanding he duplicates it."
None of the people that you are referring to are asking him to duplicate it, just stating that it's not that he had or has better odds, just that he got lucky.
Kentucky
United States
Member #32652
February 14, 2006
7482 Posts
Offline
Posted: January 30, 2013, 6:01 pm - IP Logged
Tell ya what, Stack. Despite the fact that you've chosen to be Ronnie's waterboy, I still think you're a relatively smart guy. One of the best on LP, even. If you're actually serious about pursuing research into how to narrow down the Powerball or Mega Millions matrix into a pool of 28 numbers that have a decent chance of containing the 5 winning numbers, why don't you break off and start your own thread rather than continuing to associate yourself with this mess? Leave the serious discussion to serious people and let Ronnie wallow in his own self-aggrandizement.
"If you're actually serious about pursuing research into how to narrow down the Powerball or Mega Millions matrix into a pool of 28 numbers that have a decent chance of containing the 5 winning numbers, why don't you break off and start your own thread rather than continuing to associate yourself with this mess?"
If I did that, I would be just expanding on Ronnie's idea and some of the ideas RJ and others have posted. It's easier for me to add my ideas to theirs. I've been around gambling, sports, and other things long enough to understand there is going to be bragging and never let that bother me. Most of the time it's better to congratulate them or say nothing and move on.
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 6:10 pm - IP Logged
"My best guess, it's out of frustration because he already did something statistical improbable and people are demanding he duplicates it."
None of the people that you are referring to are asking him to duplicate it, just stating that it's not that he had or has better odds, just that he got lucky.
And if I define "got lucky" as "number combinations that have better odds" what then boney? Do you presume to have power to determine how I define words and the way they view things like moe does?
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 6:19 pm - IP Logged
We all know you want to control how people think and the way they say things boney, but its just NOT going to work out that way for you here no matter how long you keep posting. If you want to call something "winning a bet" I can respect that, but don't try to control me into calling it that, as my view is "beating the lottery" or "beating the roulette wheel" Sorry, you are not in control of my viewpoints or definitions. Get over it.
United States
Member #116268
September 7, 2011
20244 Posts
Offline
Posted: January 30, 2013, 6:38 pm - IP Logged
In other words boney, if someone or something wins at a rate that beats the stated odds and I want to call it "better odds" why do you think you have some authority to make me call it otherwise.
You can call it "standard deviation" all you want or "getting lucky" no one is faulting you for it.....
You and moe and larry are the ones all pissy and upset and nagging and complaining and ect, ect ect......
Moe talks about "bandied about ideas" but has never posted one single time without whining and complaining and sounding like someones henpecked husband who is too afraid to leave the house.
Page 270 of 353 | 5,292 | 19,005 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.6875 | 3 | CC-MAIN-2017-22 | latest | en | 0.846867 |
https://www.assignmentexpert.com/homework-answers/programming-and-computer-science/matlab-mathematica-mathcad-maple/question-45241 | 1,597,126,937,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439738735.44/warc/CC-MAIN-20200811055449-20200811085449-00327.warc.gz | 578,103,774 | 290,743 | 89 092
Assignments Done
97.9%
Successfully Done
In July 2020
# Answer to Question #45241 in MatLAB | Mathematica | MathCAD | Maple for tamizhchelvan
Question #45241
Question-1:
The Birthday Problem: The birthday problem is stated as follows:
If there is a group of n people in a room, what is the probability that two or more of them having same birthday? It is possible to determine answer to this question by simulation. (Hint: You can generate random dates, n times and determine the fraction of people who born in a given day). Write a function that determines the answer to this question by simulation. The program you write can take n as the input and prints out the probability that two or more of n people will have the same birthday for n=2,3,4…. 40
Question-2:
Write a single program that calculates the arithmetic mean (average), rms average, geometric mean and harmonic mean for a set of n positive numbers. Your program should take two values xlow and xhigh and generate 10000 random numbers in the range [xlow…xhigh], and should print out arithmetic mean (average), rms average, geometric mean and harmonic mean.
The definitions of means are given as follows.
Question-
0
Need a fast expert's response?
Submit order
and get a quick answer at the best price
for any assignment or question with DETAILED EXPLANATIONS! | 307 | 1,335 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.78125 | 3 | CC-MAIN-2020-34 | latest | en | 0.928375 |
http://www.sportsbookreview.com/forum/players-talk/1235463-soccer-question.html?s=edb1f83d34eb538703c26067ee28f35d&slf=41 | 1,449,006,894,000,000,000 | text/html | crawl-data/CC-MAIN-2015-48/segments/1448398471441.74/warc/CC-MAIN-20151124205431-00154-ip-10-71-132-137.ec2.internal.warc.gz | 688,329,293 | 14,388 | View New Posts
1. ## Soccer Question.
What does a soccer handicap line of pk,-.5 mean? Do I lose this bet if the match draws? How does this differ from a -.5 line?
2. PK does not lose on draw, it pushes
-.5 loses on draw
175 pts
3-QUESTION
SBR TRIVIA WINNER 11/30/2015
175 pts
3-QUESTION
SBR TRIVIA WINNER 11/23/2015
BTP
Week 10
3-2-0 274 pts
BTP
Week 9
3-2-0 268 pts
3. Its half your wager on PK and half on -0.5. It means if its a draw, you lose half your wager and push half of it.
Points Awarded:
Sportsfan800 gave Gee 2 SBR Point(s) for this post.
4. Originally Posted by RubberKettle
PK does not lose on draw, it pushes -.5 loses on draw
pk, -.5 splits your bet into two parts
175 pts
3-QUESTION
SBR TRIVIA WINNER 11/30/2015
175 pts
3-QUESTION
SBR TRIVIA WINNER 11/23/2015
BTP
Week 10
3-2-0 274 pts
BTP
Week 9
3-2-0 268 pts
5. when you bet 2 lines in one bet it's like 50% on one line and 50% on the other
you need to know first what is pk (pick) and what is -.5
175 pts
3-QUESTION
SBR TRIVIA WINNER 11/23/2015
BTP
Week 10
4-1-0 593 pts
6. Thanks, for the help guys. My next question is tougher. How do I determine what a .5 goal is worth?
Sometimes I see different soccer handicap lines on games. One book will have pk and another book
will have pk,+.5. I need to figure out what a .5 goal is worth. I have been using the projected total
as a base. Let's say the total is listed at 2.5,3 -110. I divide 5.5 by 2 and get 2.75 total goals for the match.
Next, I multiple 2.75 goals by .5 and get 1.38. I multiply the moneyline by 1.38. Let's say the moneyline is +110.
I get 151.80. Using the odds converter, I find the win percentage -151.80 equals 60.28% and +110 equals 47.62%.
I subtract 60.28 from 47.62 to get 12.66%. Since it increases the win percentage by 12.66%, this is what the
.5 goal is worth. It is worth 12.66% of the line. Does this make sense or am I way off base?
7. This is referred to as "asian handicap"
Just fyi
BTP
Week 10
5-0-0 924 pts | 673 | 1,989 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.859375 | 4 | CC-MAIN-2015-48 | latest | en | 0.926653 |
https://au.mathworks.com/matlabcentral/cody/problems/45369-sky-full-of-stars-02/solutions/2210676 | 1,606,870,791,000,000,000 | text/html | crawl-data/CC-MAIN-2020-50/segments/1606141685797.79/warc/CC-MAIN-20201201231155-20201202021155-00684.warc.gz | 169,522,268 | 17,029 | Cody
# Problem 45369. Sky full of stars - 02
Solution 2210676
Submitted on 14 Apr 2020 by bainhome
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
a= ['*********' ' ******* ' ' ***** ' ' *** ' ' * ']; assert(isequal(star_pattern_2(5,'t'),a))
2 Pass
a= [' * ' ' *** ' ' ***** ' ' ******* ' ' ********* ' ' *********** ' ' ************* ' ' *************** ' ' ***************** ' '*******************']; assert(isequal(star_pattern_2(10,'b'),a))
3 Pass
a= ['*************************' ' *********************** ' ' ********************* ' ' ******************* ' ' ***************** ' ' *************** ' ' ************* ' ' *********** ' ' ********* ' ' ******* ' ' ***** ' ' *** ' ' * ']; assert(isequal(star_pattern_2(13,'t'),a))
4 Pass
a= ['*']; assert(isequal(star_pattern_2(1,'t'),a))
### Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting! | 288 | 1,049 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.9375 | 3 | CC-MAIN-2020-50 | latest | en | 0.316788 |
http://docplayer.net/21772718-A-method-of-calibrating-helmholtz-coils-for-the-measurement-of-permanent-magnets.html | 1,542,296,067,000,000,000 | text/html | crawl-data/CC-MAIN-2018-47/segments/1542039742779.14/warc/CC-MAIN-20181115141220-20181115163220-00306.warc.gz | 96,799,963 | 27,290 | # A METHOD OF CALIBRATING HELMHOLTZ COILS FOR THE MEASUREMENT OF PERMANENT MAGNETS
Save this PDF as:
Size: px
Start display at page:
Download "A METHOD OF CALIBRATING HELMHOLTZ COILS FOR THE MEASUREMENT OF PERMANENT MAGNETS"
## Transcription
1 A METHOD OF CALIBRATING HELMHOLTZ COILS FOR THE MEASUREMENT OF PERMANENT MAGNETS Joseph J. Stupak Jr, Oersted Technology Tualatin, Oregon (reprinted from IMCSD 24th Annual Proceedings 1995) ABSTRACT The Helmholtz coil configuration is often used to generate a uniform magnetic field in space, and may also be used to measure the magnetic moment of bar and plate magnets. In the past, Helmholtz coils used for measurement have been calibrated by the use of standard magnets. A method is presented here for calibrating these coils using a known current source and gaussmeter, which should be easier and result in greater accuracy. 1. INTRODUCTION Manufacturers and users of permanent magnets need to measure the strengths of large numbers of magnets by means which are quick, accurate, and require a minimum of calculation and labor. Many (but not all) magnets may be measured in this way by the use of a Helmholtz coil together with an integrating fluxmeter. Helmholtz coils used in this manner in the past have usually been calibrated by use of standard magnets. Since permanent magnet materials are not perfectly homogeneous in their magnetic properties (and may vary with temperature and age, as well), and since NIST (the former National Bureau of Standards) does not maintain magnetic standards, such magnets may be difficult to obtain or confirm to sufficient accuracy. The method suggested in this paper is an indirect one, but is easily accomplished and enables coil calibration by means which may be known to high accuracy. A Helmholtz coil is actually a pair of similar coils with equal numbers of turns. The coils are short, thin-walled rings, and are mounted coaxially at a distance of one coil radius from each other. When the coils are connected (preferably in series) and current is passed through them, a highly uniform magnetic field is produced in a considerable volume of space between the two coils. At the center, the first three derivatives of field strength with position all go to zero in every direction (reference 2). It is this property, plus the fact that the region of uniform field is accessible from almost every direction, which gives the Fig.1: Helmholtz Coil
2 Helmholtz construction its special usefulness. The type of magnet measurable by a Helmholtz coil is one which can be represented by a magnet dipole, or assembly of dipoles with a common axial direction. A magnetic dipole, as shown in figure 2, consists of two magnetic poles of equal and opposite strength, separated by a distance l p. The North pole is a magnetic flux source, radiating straight lines of flux outward with equal distribution in all direction. The opposite South pole is a flux sink, into which flux disappears at the same density per solid angle, from all direction. Since the surface of a sphere is equal to 4πr 2, a single magnetic pole of strength 4π produces a unit magnetic field at unit radius. The magnetic moment of the dipole is equal to the pole strength (in units of flux density times area, or flux) times the pole spacing l p. For a real magnet, this is equal to the integral of flux density times a differential of volume, over the entire volume of the magnet. Long, thin magnets are reasonably well represented by a magnetic dipole, except close to the magnet, and plate magnets may be approximated by a series of dipoles with a common direction. With suitable corrections, arc segments may also be represented in this way (reference 1). Cylindrical magnets, such as those used in some brushless DC motors and rotary linear actuators, cannot be measured by this technique because the magnetic axes do not lie in the same direction, and cancel partially or completely. When a Helmholtz assembly is used to measure a magnet, the magnet is placed between the coils in the region of uniform field, with the magnetic axis parallel to the coil axis, and the fluxmeter attached to the coils is zeroed. The magnet is then withdrawn far enough from the coils that its effect on them is negligible. The reading on the fluxmeter times the coil constant is then equal to the magnetic moment of the tested part. Another method is to rotate the magnet
3 180. The fluxmeter reading times the coil constant is then equal to twice the magnetic moment of the part. If magnetic flux Φ linking an electrically conductive coil of n turns changes with time, a voltage is developed across the coil according to the Faraday induction law, The fluxmeter reading is proportional to the integral of voltage with time, In the above, the constant k m is a scale multiplier which may be set on the instrument, and the constant C is the offset, normally set to zero before the reading is made. Fluxmeters are often used together with a search coil of known area and number of turns, to determine the average magnetic flux density component normal to the plane of the coil, as an averaging gaussmeter. They may also be used to determine the total magnetic flux crossing a section within a magnet or magnetically permeable pole structure, at locations which would not be accessible to a Hall-type gaussmeter. For this purpose a close-fitting coil is wrapped around the part and connected to the fluxmeter, and the instrument is then zeroed. The coil is then removed from the part, without disconnecting it from the fluxmeter, and taken far enough that the magnetic circuit has no noticeable effect on the meter. The fluxmeter reading is then proportional to the total flux crossing the section. It is important that the coil be as closely fitting as possible, to avoid errors due to flux leakage. A correction is sometimes applied for the sense wire thickness. When measurements are made with a Helmholtz coil, on the other hand, there is free space around the part, and handling is much easier. By comparing the units of magnetic moment (magnetic flux density times length cubed) to those of the fluxmeter output (magnetic flux density times length squared), it is apparent that the Helmholtz coil calibration constant will include a unit of length. If the coil windings have negligibly small winding thickness and axial length and if the coils are perfectly constructed and aligned, the coil constant may be derived from its dimensions and number of turns. If the device is to have a coil constant large enough to be useful without excessive resistance, however, the winding dimensions will probably be large enough to affect the measurement accuracy. It is possible to correct the calculation to account for these dimensions (reference 2). Other errors, however, are more difficult to handle, such as misalignment, variations in winding, coil spacing errors, out-of roundness of the coils, etc. Because of these, it is best to measure the coil constant by some means, such as the use of standard magnets or the method described here.
4 2. FLUX FROM A SMALL MAGNETIC DIPOLE LINKING A HELMHOLTZ COIL The flux linking each coil is PAIR Substituting for B x and integrating, The magnetic potential of a magnetic dipole distant from the point of measurement is: 3. MAGNETIC FIELD AT A POINT ON THE AXIS CAUSED BY FIELD CURRENT On the other hand, for the condition of electric current passing through a coil of the same construction, in reference to a point p on the coil axis at a distance from the coil plane (figure 5), Amperes law states that for a differential of coil
5 length ds carrying current i at a distance α 4. COIL CONSTANT Dividing equation (8) by equation (13) and arranging, from point p, an increment of magnetic coercive force dh is caused at p according to: The coil calibration constant, then, is: The component of dh in the axial direction is: which may be integrated to: (per coil) at z = r / 2, for two coils aiding, each of n turns, The coil current i may be determined to high accuracy by means traceable to NIST. Magnetic flux density may be measured by use of a gaussmeter. Some gaussmeters (such as those based on nuclear magnetic resonance) are capable of excellent accuracy. Another possibility would be to measure the flux crossing a coil of known area and turns by use of the same fluxmeter used for the moment measurements, thus canceling any scale error which might be present. Measurements at our lab, made with instruments of somewhat limited accuracy, confirm equations to an accuracy of about.5% with a total uncertainty of measurement from all sources of about.7% maximum. Reference 1 quotes data confirming equation (8) to about
6 .35%. 5. DIRECT METHOD the work done on one pole exactly cancels the work done on the other pole, of opposite sign, except for the path difference l p, within the region of uniform field of strength H c. The net mechanical work, therefore, must be: To simplify the argument, let it be imagined that the coils are made of superconducting material, with no electrical resistance (we could just as well add in a constant resistance term, and then subtract it out later). Let it also be assumed that the power source emits constant current i. With the pole pair at infinity, the voltage E in the circuit is zero, because of the superconductivity of the coil. As the dipole is now brought towards the coil pair, a voltage will be induced in the winding due to lines of magnetic flux cutting the coils, and work will be done by the source: The coincidence of these two integrals, equation (8) and (13), suggests that a more direct method of calculation might be possible. Such is indeed the case. A unit pole experiences unit mechanical force when acted upon by a magnetic field of unit intensity H. An electric current passing through the two coils in series would produce some distribution of magnetic field in space around the coil assembly, and a uniform field of intensity H c in a volume of space between the coils. If a magnetic dipole, consisting of two equal and opposite poles of strengths ± Φ, separated by distance l p, is brought from infinity into the region of uniform field along the coil axis, mechanical work will be done on each pole by the magnetic field. Since each pole traversed the same path, With no losses, due to the superconductivity of the coil, the only possible source of the work done must be the source: However, Φ l p = m, the magnetic moment of the dipole, and E dt is recognizable as the fluxmeter reading (divided by the number of turns n and the meter constant k m ), and so:
7 To obtain the meter constant, then, A. Connect the coil pair to a current source, and apply an amount of current to the coils which is high enough to give a fluxmeter reading which is large enough for good accuracy, but which is not so high as too cause overheating of the coils. Measure the current by accurate means. configuration, and other numbers of coils than two, but except for a few special applications their advantages seem to be more theoretical than practical. Square or rectangular coils may be used, but such coils are much more difficult to wind uniformly, and the coil support frames are more difficult to construct to the same accuracy as round shapes. Coil patterns requiring more than two coils are known (references 2 and 3) which result in a more uniform field or large useful volume, but access to the workspace becomes more restricted. B. Measure the flux density B x in the direction of the coil axis. C. Divide the current i by the magnetic flux density B x and correct units if required. If the fluxmeter reading is in maxwells, as is usual, and if the units of magnetic moment desired are to be in oersted-cm, the result should be multiplied by 4π / 10 oecm to obtain the coil constant. 6. SOME PRACTICAL CONSIDERATIONS When an integrating-type fluxmeter is used with a small, low electrical resistance coil, the coil resistance is small enough to have negligibly small effect on the fluxmeter reading, and is ignored. The winding resistance of the Helmholtz coil, on the other hand, may be large enough to influence the fluxmeter reading, especially on high-sensitivity settings. A correction formula should be available in the instrument manual or from the manufacturer. Other shapes of coils than round are sometimes used in the Helmholtz REFERENCES 1. S. R. Trout, "Use of Helmholtz Coils for Magnetic Measurements", IEEE Trans Magnetics, V.24, No.4, Jul 88 pg W. Franzen, "Generalization of Uniform Magnetic Fields by Means of Air Core Coils", Review of Sci Inst, V.33 No. 9 Sep 62 pg M. W. Garret, "Axially Symmetric Systems for Generating and Measuring Magnetic Fields" J Appl Phys, V.22 No. 9 Sep 52 pg. 1091
### Eðlisfræði 2, vor 2007
[ Assignment View ] [ Pri Eðlisfræði 2, vor 2007 28. Sources of Magnetic Field Assignment is due at 2:00am on Wednesday, March 7, 2007 Credit for problems submitted late will decrease to 0% after the deadline
### Objectives for the standardized exam
III. ELECTRICITY AND MAGNETISM A. Electrostatics 1. Charge and Coulomb s Law a) Students should understand the concept of electric charge, so they can: (1) Describe the types of charge and the attraction
### Chapter 14 Magnets and
Chapter 14 Magnets and Electromagnetism How do magnets work? What is the Earth s magnetic field? Is the magnetic force similar to the electrostatic force? Magnets and the Magnetic Force! We are generally
### Magnetic Field of a Circular Coil Lab 12
HB 11-26-07 Magnetic Field of a Circular Coil Lab 12 1 Magnetic Field of a Circular Coil Lab 12 Equipment- coil apparatus, BK Precision 2120B oscilloscope, Fluke multimeter, Wavetek FG3C function generator,
### Chapter 14: Magnets and Electromagnetism
Chapter 14: Magnets and Electromagnetism 1. Electrons flow around a circular wire loop in a horizontal plane, in a direction that is clockwise when viewed from above. This causes a magnetic field. Inside
### Edmund Li. Where is defined as the mutual inductance between and and has the SI units of Henries (H).
INDUCTANCE MUTUAL INDUCTANCE If we consider two neighbouring closed loops and with bounding surfaces respectively then a current through will create a magnetic field which will link with as the flux passes
### PSS 27.2 The Electric Field of a Continuous Distribution of Charge
Chapter 27 Solutions PSS 27.2 The Electric Field of a Continuous Distribution of Charge Description: Knight Problem-Solving Strategy 27.2 The Electric Field of a Continuous Distribution of Charge is illustrated.
### Electromagnetic Induction
Electromagnetic Induction Lecture 29: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay Mutual Inductance In the last lecture, we enunciated the Faraday s law according to
### Physics 221 Experiment 5: Magnetic Fields
Physics 221 Experiment 5: Magnetic Fields August 25, 2007 ntroduction This experiment will examine the properties of magnetic fields. Magnetic fields can be created in a variety of ways, and are also found
### Electrical Machines II. Week 1: Construction and theory of operation of single phase transformer
Electrical Machines II Week 1: Construction and theory of operation of single phase transformer Transformers Overview A transformer changes ac electric power at one frequency and voltage level to ac electric
### Measuring Permanent Magnet Characteristics with a Fluxmeter and Helmholtz Coil
Measuring Permanent Magnet Characteristics with a Fluxmeter and Helmholtz Coil Lake Shore Cryotronics, Inc., 10/00 Measuring Permanent Magnet Characteristics with a Fluxmeter and Helmholtz Coil General
### 5. Measurement of a magnetic field
H 5. Measurement of a magnetic field 5.1 Introduction Magnetic fields play an important role in physics and engineering. In this experiment, three different methods are examined for the measurement of
### Linear DC Motors. 15.1 Magnetic Flux. 15.1.1 Permanent Bar Magnets
Linear DC Motors The purpose of this supplement is to present the basic material needed to understand the operation of simple DC motors. This is intended to be used as the reference material for the linear
### 6 J - vector electric current density (A/m2 )
Determination of Antenna Radiation Fields Using Potential Functions Sources of Antenna Radiation Fields 6 J - vector electric current density (A/m2 ) M - vector magnetic current density (V/m 2 ) Some problems
### Electromagnetic Induction - A
Electromagnetic Induction - A APPARATUS 1. Two 225-turn coils 2. Table Galvanometer 3. Rheostat 4. Iron and aluminum rods 5. Large circular loop mounted on board 6. AC ammeter 7. Variac 8. Search coil
### 1 of 7 4/13/2010 8:05 PM
Chapter 33 Homework Due: 8:00am on Wednesday, April 7, 2010 Note: To understand how points are awarded, read your instructor's Grading Policy [Return to Standard Assignment View] Canceling a Magnetic Field
### Gauss's Law. Gauss's Law in 3, 2, and 1 Dimension
[ Assignment View ] [ Eðlisfræði 2, vor 2007 22. Gauss' Law Assignment is due at 2:00am on Wednesday, January 31, 2007 Credit for problems submitted late will decrease to 0% after the deadline has passed.
### Chapter 4. Magnetic Materials and Circuits
Chapter 4 Magnetic Materials and Circuits Objectives List six characteristics of magnetic field. Understand the right-hand rule for current and magnetic fluxes. Define magnetic flux, flux density, magnetomotive
### Physical Specifications (Custom design available.)
Helmholtz Coils A useful laboratory technique for getting a fairly uniform magnetic field, is to use a pair of circular coils on a common axis with equal currents flowing in the same sense. For a given
### Physics 1653 Exam 3 - Review Questions
Physics 1653 Exam 3 - Review Questions 3.0 Two uncharged conducting spheres, A and B, are suspended from insulating threads so that they touch each other. While a negatively charged rod is held near, but
### Fall 12 PHY 122 Homework Solutions #10
Fall 12 PHY 122 Homework Solutions #10 HW10: Ch.30 Q5, 8, 15,17, 19 P 1, 3, 9, 18, 34, 36, 42, 51, 66 Chapter 30 Question 5 If you are given a fixed length of wire, how would you shape it to obtain the
### ELECTRON SPIN RESONANCE Last Revised: July 2007
QUESTION TO BE INVESTIGATED ELECTRON SPIN RESONANCE Last Revised: July 2007 How can we measure the Landé g factor for the free electron in DPPH as predicted by quantum mechanics? INTRODUCTION Electron
### Experimental Question 1: Levitation of Conductors in an Oscillating Magnetic Field SOLUTION ( )
a. Using Faraday s law: Experimental Question 1: Levitation of Conductors in an Oscillating Magnetic Field SOLUTION The overall sign will not be graded. For the current, we use the extensive hints in the
### Physics Notes for Class 12 Chapter 4 Moving Charges and Magnetrism
1 P a g e Physics Notes for Class 12 Chapter 4 Moving Charges and Magnetrism Oersted s Experiment A magnetic field is produced in the surrounding of any current carrying conductor. The direction of this
### The DC Motor. Physics 1051 Laboratory #5 The DC Motor
The DC Motor Physics 1051 Laboratory #5 The DC Motor Contents Part I: Objective Part II: Introduction Magnetic Force Right Hand Rule Force on a Loop Magnetic Dipole Moment Torque Part II: Predictions Force
### Fall 12 PHY 122 Homework Solutions #8
Fall 12 PHY 122 Homework Solutions #8 Chapter 27 Problem 22 An electron moves with velocity v= (7.0i - 6.0j)10 4 m/s in a magnetic field B= (-0.80i + 0.60j)T. Determine the magnitude and direction of the
### Chapter 5. Magnetic Fields and Forces. 5.1 Introduction
Chapter 5 Magnetic Fields and Forces Helmholtz coils and a gaussmeter, two of the pieces of equipment that you will use in this experiment. 5.1 Introduction Just as stationary electric charges produce
### LAB 8: Electron Charge-to-Mass Ratio
Name Date Partner(s) OBJECTIVES LAB 8: Electron Charge-to-Mass Ratio To understand how electric and magnetic fields impact an electron beam To experimentally determine the electron charge-to-mass ratio.
### A Practical Guide to Free Energy Devices
A Practical Guide to Free Energy Devices Part 60: Last updated: 3rd February 2006 Author: Patrick J. Kelly Electrical power is frequently generated by spinning the shaft of a generator which has some arrangement
### Chapter 20. Magnetic Induction Changing Magnetic Fields yield Changing Electric Fields
Chapter 20 Magnetic Induction Changing Magnetic Fields yield Changing Electric Fields Introduction The motion of a magnet can induce current in practical ways. If a credit card has a magnet strip on its
### Chapter 19 Magnetism Magnets Poles of a magnet are the ends where objects are most strongly attracted Two poles, called north and south Like poles
Chapter 19 Magnetism Magnets Poles of a magnet are the ends where objects are most strongly attracted Two poles, called north and south Like poles repel each other and unlike poles attract each other Similar
### Experiment 7: Forces and Torques on Magnetic Dipoles
MASSACHUSETTS INSTITUTE OF TECHNOLOY Department of Physics 8. Spring 5 OBJECTIVES Experiment 7: Forces and Torques on Magnetic Dipoles 1. To measure the magnetic fields due to a pair of current-carrying
### E/M Experiment: Electrons in a Magnetic Field.
E/M Experiment: Electrons in a Magnetic Field. PRE-LAB You will be doing this experiment before we cover the relevant material in class. But there are only two fundamental concepts that you need to understand.
### Eðlisfræði 2, vor 2007
[ Assignment View ] [ Pri Eðlisfræði 2, vor 2007 29a. Electromagnetic Induction Assignment is due at 2:00am on Wednesday, March 7, 2007 Credit for problems submitted late will decrease to 0% after the
### Magnetic Fields; Sources of Magnetic Field
This test covers magnetic fields, magnetic forces on charged particles and current-carrying wires, the Hall effect, the Biot-Savart Law, Ampère s Law, and the magnetic fields of current-carrying loops
### The purposes of this experiment are to test Faraday's Law qualitatively and to test Lenz's Law.
260 17-1 I. THEORY EXPERIMENT 17 QUALITATIVE STUDY OF INDUCED EMF Along the extended central axis of a bar magnet, the magnetic field vector B r, on the side nearer the North pole, points away from this
### Ampere's Law. Introduction. times the current enclosed in that loop: Ampere's Law states that the line integral of B and dl over a closed path is 0
1 Ampere's Law Purpose: To investigate Ampere's Law by measuring how magnetic field varies over a closed path; to examine how magnetic field depends upon current. Apparatus: Solenoid and path integral
### E. K. A. ADVANCED PHYSICS LABORATORY PHYSICS 3081, 4051 NUCLEAR MAGNETIC RESONANCE
E. K. A. ADVANCED PHYSICS LABORATORY PHYSICS 3081, 4051 NUCLEAR MAGNETIC RESONANCE References for Nuclear Magnetic Resonance 1. Slichter, Principles of Magnetic Resonance, Harper and Row, 1963. chapter
### MAGNETISM MAGNETISM. Principles of Imaging Science II (120)
Principles of Imaging Science II (120) Magnetism & Electromagnetism MAGNETISM Magnetism is a property in nature that is present when charged particles are in motion. Any charged particle in motion creates
### Electromagnetic Induction
Electromagnetic Induction "Concepts without factual content are empty; sense data without concepts are blind... The understanding cannot see. The senses cannot think. By their union only can knowledge
### Effect of an Iron Yoke of the Field Homogeneity in a Superconducting Double-Helix Bent Dipole
Excerpt from the Proceedings of the COMSOL Conference 2010 Boston Effect of an Iron Yoke of the Field Homogeneity in a Superconducting Double-Helix Bent Dipole Philippe J. Masson *,1, Rainer B. Meinke
### Parallel Path Magnetic Technology for High Efficiency Power Generators and Motor Drives
Parallel Path Magnetic Technology for High Efficiency Power Generators and Motor Drives Patents & Copyright - Flynn Research, Greenwood MO, 64034 PARALLEL PATH MAGNETIC TECHNOLOGY (PPMT) BACKGROUND Parallel
### Chapter 29 Electromagnetic Induction
Chapter 29 Electromagnetic Induction - Induction Experiments - Faraday s Law - Lenz s Law - Motional Electromotive Force - Induced Electric Fields - Eddy Currents - Displacement Current and Maxwell s Equations
### ElectroMagnetic Induction. AP Physics B
ElectroMagnetic Induction AP Physics B What is E/M Induction? Electromagnetic Induction is the process of using magnetic fields to produce voltage, and in a complete circuit, a current. Michael Faraday
### * Biot Savart s Law- Statement, Proof Applications of Biot Savart s Law * Magnetic Field Intensity H * Divergence of B * Curl of B. PPT No.
* Biot Savart s Law- Statement, Proof Applications of Biot Savart s Law * Magnetic Field Intensity H * Divergence of B * Curl of B PPT No. 17 Biot Savart s Law A straight infinitely long wire is carrying
### PPT No. 26. Uniformly Magnetized Sphere in the External Magnetic Field. Electromagnets
PPT No. 26 Uniformly Magnetized Sphere in the External Magnetic Field Electromagnets Uniformly magnetized sphere in external magnetic field The Topic Uniformly magnetized sphere in external magnetic field,
### Welcome to the first lesson of third module which is on thin-walled pressure vessels part one which is on the application of stress and strain.
Strength of Materials Prof S. K. Bhattacharya Department of Civil Engineering Indian Institute of Technology, Kharagpur Lecture -15 Application of Stress by Strain Thin-walled Pressure Vessels - I Welcome
### AP Physics C Chapter 23 Notes Yockers Faraday s Law, Inductance, and Maxwell s Equations
AP Physics C Chapter 3 Notes Yockers Faraday s aw, Inductance, and Maxwell s Equations Faraday s aw of Induction - induced current a metal wire moved in a uniform magnetic field - the charges (electrons)
### Force on Moving Charges in a Magnetic Field
[ Assignment View ] [ Eðlisfræði 2, vor 2007 27. Magnetic Field and Magnetic Forces Assignment is due at 2:00am on Wednesday, February 28, 2007 Credit for problems submitted late will decrease to 0% after
### Sources of Magnetic Field: Summary
Sources of Magnetic Field: Summary Single Moving Charge (Biot-Savart for a charge): Steady Current in a Wire (Biot-Savart for current): Infinite Straight Wire: Direction is from the Right Hand Rule The
### Physics 122 (Sonnenfeld), Spring 2013 ( MPSONNENFELDS2013 ) My Courses Course Settings
Signed in as Richard Sonnenfeld, Instructor Help Sign Out Physics 122 (Sonnenfeld), Spring 2013 ( MPSONNENFELDS2013 ) My Courses Course Settings Course Home Assignments Roster Gradebook Item Library Essential
### AP R Physics C Electricity and Magnetism Syllabus
AP R Physics C Electricity and Magnetism Syllabus 1 Prerequisites and Purposes of AP R C E & M AP R Physics C Electricity and Magnetism is the second course in a two-course sequence. It is offered in the
### Electric Engineering II EE 326 Lecture 4 & 5
Electric Engineering II EE 326 Lecture 4 & 5 Transformers ١ Transformers Electrical transformers have many applications: Step up voltages (for electrical energy transmission with
### Home Work 9. i 2 a 2. a 2 4 a 2 2
Home Work 9 9-1 A square loop of wire of edge length a carries current i. Show that, at the center of the loop, the of the magnetic field produced by the current is 0i B a The center of a square is a distance
### " - angle between l and a R
Magnetostatic Fields According to Coulomb s law, any distribution of stationary charge produces a static electric field (electrostatic field). The analogous equation to Coulomb s law for electric fields
### Chapter 31. Faraday s Law
Chapter 31 Faraday s Law Michael Faraday 1791 1867 British physicist and chemist Great experimental scientist Contributions to early electricity include: Invention of motor, generator, and transformer
### Chapter 14 Magnets and Electromagnetism
Chapter 14 Magnets and Electromagnetism Magnets and Electromagnetism In the 19 th century experiments were done that showed that magnetic and electric effects were just different aspect of one fundamental
### Chapter 22: Electric motors and electromagnetic induction
Chapter 22: Electric motors and electromagnetic induction The motor effect movement from electricity When a current is passed through a wire placed in a magnetic field a force is produced which acts on
### Eðlisfræði 2, vor 2007
[ Assignment View ] [ Print ] Eðlisfræði 2, vor 2007 30. Inductance Assignment is due at 2:00am on Wednesday, March 14, 2007 Credit for problems submitted late will decrease to 0% after the deadline has
### NUCLEAR MAGNETIC RESONANCE. Advanced Laboratory, Physics 407, University of Wisconsin Madison, Wisconsin 53706
(revised 4/21/03) NUCLEAR MAGNETIC RESONANCE Advanced Laboratory, Physics 407, University of Wisconsin Madison, Wisconsin 53706 Abstract This experiment studies the Nuclear Magnetic Resonance of protons
### Simple Algorithm for the Magnetic Field Computation in Bobbin Coil Arrangement
Simple Algorithm for the Magnetic Field Computation in Bobbin Coil Arrangement V.Suresh 1, V.K.Gopperundevi 2, Dr.A.Abudhahir 3, R.Antonysamy 4, K.Muthukkutti 5 Associate Professor, E&I Department, National
### Insertion Devices Lecture 4 Permanent Magnet Undulators. Jim Clarke ASTeC Daresbury Laboratory
Insertion Devices Lecture 4 Permanent Magnet Undulators Jim Clarke ASTeC Daresbury Laboratory Introduction to Lecture 4 So far we have discussed at length what the properties of SR are, when it is generated,
### A Theoretical Model for Mutual Interaction between Coaxial Cylindrical Coils Lukas Heinzle
A Theoretical Model for Mutual Interaction between Coaxial Cylindrical Coils Lukas Heinzle Page 1 of 15 Abstract: The wireless power transfer link between two coils is determined by the properties of the
### Electromagnetism Laws and Equations
Electromagnetism Laws and Equations Andrew McHutchon Michaelmas 203 Contents Electrostatics. Electric E- and D-fields............................................. Electrostatic Force............................................2
### Coefficient of Potential and Capacitance
Coefficient of Potential and Capacitance Lecture 12: Electromagnetic Theory Professor D. K. Ghosh, Physics Department, I.I.T., Bombay We know that inside a conductor there is no electric field and that
### Ch.20 Induced voltages and Inductance Faraday s Law
Ch.20 Induced voltages and Inductance Faraday s Law Last chapter we saw that a current produces a magnetic field. In 1831 experiments by Michael Faraday and Joseph Henry showed that a changing magnetic
### 5K10.20 Induction Coil with Magnet, Galvanometer
5K10.20 Induction Coil with Magnet, Galvanometer Abstract When a magnet is moved through a coil of wire, a current is induced in the coil. A galvanometer connected to the coil measures this induced current.
### Profs. A. Petkova, A. Rinzler, S. Hershfield. Exam 2 Solution
PHY2049 Fall 2009 Profs. A. Petkova, A. Rinzler, S. Hershfield Exam 2 Solution 1. Three capacitor networks labeled A, B & C are shown in the figure with the individual capacitor values labeled (all units
### Inductive and Magnetic Sensors
Chapter 12 Inductive and Magnetic Sensors 12.1 Inductive Sensors A number of the actuators developed in previous chapters depend on the variation of reluctance with changes in angle or displacement. Since
### 1 of 7 10/1/2012 3:17 PM
Assignment Previewer http://www.webassign.net/v4cgijfederici@njit/control.pl 1 of 7 10/1/2012 3:17 PM HW11Faraday (2861550) Question 1 2 3 4 5 6 7 8 9 10 1. Question Details SerPSE8 31.P.011.WI. [1742725]
### ZERO COGGING MOTORS AEROFLEX MOTION CONTROL PRODUCTS. Introduction - Zero Cogging Motors
ZERO COGGING MOTORS Introduction - Zero Cogging Motors This catalogue is intended as a guide for the user to help select or specify a cog free brushless DC motor. The size, weight and performance characteristics
### Phys222 Winter 2012 Quiz 4 Chapters 29-31. Name
Name If you think that no correct answer is provided, give your answer, state your reasoning briefly; append additional sheet of paper if necessary. 1. A particle (q = 5.0 nc, m = 3.0 µg) moves in a region
### Digital Energy ITI. Instrument Transformer Basic Technical Information and Application
g Digital Energy ITI Instrument Transformer Basic Technical Information and Application Table of Contents DEFINITIONS AND FUNCTIONS CONSTRUCTION FEATURES MAGNETIC CIRCUITS RATING AND RATIO CURRENT TRANSFORMER
### Induction. d. is biggest when the motor turns fastest.
Induction 1. A uniform 4.5-T magnetic field passes perpendicularly through the plane of a wire loop 0.10 m 2 in area. What flux passes through the loop? a. 5.0 T m 2 c. 0.25 T m 2 b. 0.45 T m 2 d. 0.135
### potential in the centre of the sphere with respect to infinity.
Umeå Universitet, Fysik 1 Vitaly Bychkov Prov i fysik, Electricity and Waves, 2006-09-27, kl 16.00-22.00 Hjälpmedel: Students can use any book. Define the notations you are using properly. Present your
### Chapter 33. The Magnetic Field
Chapter 33. The Magnetic Field Digital information is stored on a hard disk as microscopic patches of magnetism. Just what is magnetism? How are magnetic fields created? What are their properties? These
### 2-D Magnetic Circuit Analysis for a Permanent Magnet Used in Laser Ablation Plume Expansion Experiments
University of California, San Diego UCSD-LPLM-02-04 2-D Magnetic Circuit Analysis for a Permanent Magnet Used in Laser Ablation Plume Expansion Experiments Xueren Wang, Mark Tillack and S. S. Harilal December
### Module 7. Transformer. Version 2 EE IIT, Kharagpur
Module 7 Transformer Version EE IIT, Kharagpur Lesson 4 Practical Transformer Version EE IIT, Kharagpur Contents 4 Practical Transformer 4 4. Goals of the lesson. 4 4. Practical transformer. 4 4.. Core
### EXPERIMENT IV. FORCE ON A MOVING CHARGE IN A MAGNETIC FIELD (e/m OF ELECTRON ) AND. FORCE ON A CURRENT CARRYING CONDUCTOR IN A MAGNETIC FIELD (µ o )
1 PRINCETON UNIVERSITY PHYSICS 104 LAB Physics Department Week #4 EXPERIMENT IV FORCE ON A MOVING CHARGE IN A MAGNETIC FIELD (e/m OF ELECTRON ) AND FORCE ON A CURRENT CARRYING CONDUCTOR IN A MAGNETIC FIELD
### University of California, Berkeley Physics H7B Spring 1999 (Strovink) SOLUTION TO PROBLEM SET 10 Solutions by P. Pebler
University of California, Berkeley Physics H7B Spring 1999 (Strovink) SOLUTION TO PROBLEM SET 10 Solutions by P Pebler 1 Purcell 66 A round wire of radius r o carries a current I distributed uniformly
### AP2 Magnetism. (c) Explain why the magnetic field does no work on the particle as it moves in its circular path.
A charged particle is projected from point P with velocity v at a right angle to a uniform magnetic field directed out of the plane of the page as shown. The particle moves along a circle of radius R.
### GENERATORS AND MOTORS
GENERATORS AND MOTORS A device that converts mechanical energy (energy of motion windmills, turbines, nuclear power, falling water, or tides) into electrical energy is called an electric generator. The
### Problem 4.48 Solution:
Problem 4.48 With reference to Fig. 4-19, find E 1 if E 2 = ˆx3 ŷ2+ẑ2 (V/m), ε 1 = 2ε 0, ε 2 = 18ε 0, and the boundary has a surface charge density ρ s = 3.54 10 11 (C/m 2 ). What angle does E 2 make with
### Force on a square loop of current in a uniform B-field.
Force on a square loop of current in a uniform B-field. F top = 0 θ = 0; sinθ = 0; so F B = 0 F bottom = 0 F left = I a B (out of page) F right = I a B (into page) Assume loop is on a frictionless axis
### Electron Charge to Mass Ratio Matthew Norton, Chris Bush, Brian Atinaja, Becker Steven. Norton 0
Electron Charge to Mass Ratio Matthew Norton, Chris Bush, Brian Atinaja, Becker Steven Norton 0 Norton 1 Abstract The electron charge to mass ratio was an experiment that was used to calculate the ratio
### Experiment 9: Biot -Savart Law with Helmholtz Coil
Experiment 9: Biot -Savart Law with Helmholtz Coil ntroduction n this lab we will study the magnetic fields of circular current loops using the Biot-Savart law. The Biot-Savart Law states the magnetic
### Chapter 22: Electric Flux and Gauss s Law
22.1 ntroduction We have seen in chapter 21 that determining the electric field of a continuous charge distribution can become very complicated for some charge distributions. t would be desirable if we
### 11. Sources of Magnetic Fields
11. Sources of Magnetic Fields S. G. Rajeev February 24, 2009 1 Magnetic Field Due to a Straight Wire We saw that electric currents produce magnetic fields. The simplest situation is an infinitely long,
### PHYS 155: Final Tutorial
Final Tutorial Saskatoon Engineering Students Society [email protected] April 13, 2015 Overview 1 2 3 4 5 6 7 Tutorial Slides These slides have been posted: sess.usask.ca homepage.usask.ca/esp991/ Section
### Physics 12 Study Guide: Electromagnetism Magnetic Forces & Induction. Text References. 5 th Ed. Giancolli Pg
Objectives: Text References 5 th Ed. Giancolli Pg. 588-96 ELECTROMAGNETISM MAGNETIC FORCE AND FIELDS state the rules of magnetic interaction determine the direction of magnetic field lines use the right
### Module 3 : Electromagnetism Lecture 13 : Magnetic Field
Module 3 : Electromagnetism Lecture 13 : Magnetic Field Objectives In this lecture you will learn the following Electric current is the source of magnetic field. When a charged particle is placed in an
### TWO FLUXES MULTISTAGE INDUCTION COILGUN
Review of the Air Force Academy No 3 (30) 2015 TWO FLUXES MULTISTAGE INDUCTION COILGUN Laurian GHERMAN Henri Coandă Air Force Academy, Braşov, Romania DOI: 10.19062/1842-9238.2015.13.3.7 Abstract: This
### CHAPTER 24 GAUSS S LAW
CHAPTER 4 GAUSS S LAW 4. The net charge shown in Fig. 4-40 is Q. Identify each of the charges A, B, C shown. A B C FIGURE 4-40 4. From the direction of the lines of force (away from positive and toward
### TECHNICAL GUIDE. Call 800-624-2766 or visit 1 SOLENOID DESIGN & OPERATION
Definition & Operation Linear solenoids are electromechanical devices which convert electrical energy into a linear mechanical motion used to move an external load a specified distance. Current flow through
### Review Questions PHYS 2426 Exam 2
Review Questions PHYS 2426 Exam 2 1. If 4.7 x 10 16 electrons pass a particular point in a wire every second, what is the current in the wire? A) 4.7 ma B) 7.5 A C) 2.9 A D) 7.5 ma E) 0.29 A Ans: D 2.
### The Effects of Airgap Flux Density on Permanent Magnet Brushless Servo Motor Design
The Effects of Airgap Flux Density on Permanent Magnet Brushless Servo Motor Design Lowell Christensen VP Engineering TruTech Specialty Motors Magnetics 2013 Orlando Florida Feb 8 & 9, 2013 This presentation
### Question Bank. 1. Electromagnetism 2. Magnetic Effects of an Electric Current 3. Electromagnetic Induction
1. Electromagnetism 2. Magnetic Effects of an Electric Current 3. Electromagnetic Induction 1. Diagram below shows a freely suspended magnetic needle. A copper wire is held parallel to the axis of magnetic
### Magnetic Field Sensors (Hall Generators)
54 Hall Sensors General Information Magnetic Field Sensors (Hall Generators) Hall generator theory A Hall generator is a solid state sensor which provides an output voltage proportional to magnetic flux | 9,303 | 39,333 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.640625 | 3 | CC-MAIN-2018-47 | longest | en | 0.938128 |
https://www.rouletteforum.cc/index.php?topic=16226.0 | 1,716,140,275,000,000,000 | text/html | crawl-data/CC-MAIN-2024-22/segments/1715971057819.74/warc/CC-MAIN-20240519162917-20240519192917-00787.warc.gz | 885,454,165 | 7,086 | • Welcome to #1 Roulette Forum & Message Board | www.RouletteForum.cc.
## News:
Test the accuracy of your method to predict the winning number. If it works, then your system works. But tests over a few hundred spins tell you nothing.
## Button's Blog
Started by button, Nov 14, 12:45 AM 2015
0 Members and 1 Guest are viewing this topic.
#### button
I'm a bit of a newbie here as a registered user, so over a period of time I am going to post a lot of my collected system ideas for everyone to look at.
Not many of them are mine and where I can I will give credit. Most are just ideas I have collected from all over the Internet and are probably useless, but may give others ideas to build on.
#### button
Supposing you were playing roulette and made even-money bets on Red.
Your starting bankroll is 40 dollars. You can't hope to get ahead if you wager \$1 each time and collect after every win.
As a smart gambler you bet bigger using your winnings, not the money you brought to the table.
This is what would happen if you were on a winning streak and you bet-back your winnings plus one dollar from your original stake in each consecutive round:
1. Round: Bet \$1, win \$1, and collect \$2
2. Round: Bet \$2 plus \$1, win \$3, and collect \$6
3. Round: Bet \$6 plus \$1, win \$7, and collect \$14
4. Round: Bet \$14 plus \$1, win \$15, and collect \$30
5. Round: Bet \$30 plus \$1, win \$31, and collect \$62
6. Round: Bet \$62 plus \$1, win \$63, and collect \$126
As you lose you start again, if you get a six round streak you are ahead.
Five- or six-round streaks are not uncommon, As long as you bet only one unit of your own money each round, you won't get into much trouble.
#### icashbot
hi buttons
what do you use this system on RNG or live roulette?
- | 456 | 1,782 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2024-22 | latest | en | 0.933023 |
https://123doc.org/doc_search_title/3899605-probability-spinner-10-001.htm | 1,508,407,467,000,000,000 | text/html | crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00209.warc.gz | 636,657,284 | 14,040 | # probability spinner 10 001
...
• 5
• 78
• 0
## Introduction to Probability - Chapter 10 pptx
... 2 0 0 0 10 Z8 3 0 0 0 13 Z9 2 0 12 0 0 16 Z10 0 12 0 0 17 Z11 0 13 0 0 15 Z12 6 0 15 0 0 18 Profit -5 0 250 -1 00 50 250 -1 00 300 -5 0 -1 00 -5 0 750 -5 0 -5 0 200 -5 0 -5 0 -5 0 50 850 -5 0 Table 10. 4: Simulation ... 11 0 0 0 12 0 Z10 0 0 0 10 0 0 0 14 0 Z11 0 0 0 16 0 0 0 13 0 Z12 0 0 0 25 0 0 0 10 0 Profit 200 -5 0 -5 0 -5 0 -1 00 300 -5 0 1300 -1 00 -5 0 -1 00 50 -5 0 550 100 -5 0 -5 0 Table 10. 5: Simulation of chain ... CHAPTER 10 GENERATING FUNCTIONS 2.5 1.5 0.5 10 15 20 25 Figure 10. 4: Simulation of Zn /mn for the Keyfitz example the sum of 100 0 independent experiments we can use the Central Limit Theorem to...
• 40
• 74
• 0
## Introduction to Probability phần 10 doc
... same fixed probability vector w 14 If P is a reversible Markov chain, is it necessarily true that the mean time to go from state i to state j is equal to the mean time to go from state j to state ... are the fixed probability row vector for P Then the matrix I−P+W has an inverse Proof Let x be a column vector such that (I − P + W)x = To prove the proposition, it is sufficient to show that x ... 0 9 /10 −1/20 3/20 = −1 /10 6/5 −1 /10 , 3/20 −1/20 9 /10 so Z = (I − P + W)−1 86/75 = 2/25 −14/75 2/5 1/5 2/5 1/25 −14/75 21/25 2/25 1/25 86/75 Using the Fundamental Matrix to Calculate...
• 60
• 100
• 0
...
• 1
• 86
• 0
Xem thêm | 675 | 1,434 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.640625 | 4 | CC-MAIN-2017-43 | latest | en | 0.642821 |
https://undergroundmathematics.org/glossary/centroid | 1,537,562,150,000,000,000 | text/html | crawl-data/CC-MAIN-2018-39/segments/1537267157503.43/warc/CC-MAIN-20180921190509-20180921210909-00271.warc.gz | 638,597,020 | 3,851 | # Centroid
The centroid of a polygon is the point whose coordinates are the mean of the coordinates of the vertices. For a triangle, this is the point of intersection of the medians.
More generally, the centroid of any set of points has coordinates which are the mean of the coordinates of the points. | 63 | 303 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2018-39 | latest | en | 0.907468 |
https://matheducators.stackexchange.com/questions/4222/what-is-a-good-prototypical-example-of-a-construction-that-is-not-well-defined/4228 | 1,618,800,948,000,000,000 | text/html | crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00338.warc.gz | 495,351,131 | 42,848 | # What is a good prototypical example of a construction that is not well-defined?
In the question Why do students have problems with showing that something is well-defined? How can this be improved?, it was suggested that perhaps students have never seen something that is not well-defined.
I agree with this and would like to hear what you think is a good starting example for a not-well-defined object. I think a good starting example should require as little background knowledge as possible while avoiding the pitfalls of the following examples:
• Too fixable: You could claim that "the antiderivative of a function" is not well-defined, since there are lots of them. But students will easily protest that the antiderivative can just be something with a +C on it, which seems to clear up the issue. This doesn't help anyone who doesn't already understand the idea.
• Too obvious: You could say that "the denominator of a rational function" is not well-defined (or similar things in the link above), but the students see it immediately. This gives the wrong impression; in fact checking well-definedness carefully is critical and you can almost never see it immediately.
• Maybe instead of antiderivative of a function, you could talk about "anti-curl" of a vector field, which will only be well defined up to a gradient of a function, a much larger class than simply the constants. – Steven Gubkin Aug 21 '14 at 2:04
• Also the thread you link too does provide a great example: exponentiation of numbers in a cyclic group. – Steven Gubkin Aug 21 '14 at 3:35
• I think ordinal arithmetic is one place to find such examples, but I feel this might take a bit of explanation (possibly in a direction that you don't want to bring the students). The fundamental idea of making precise "$\infty + 1$" involves (at some point) taking the size of the "smallest infinity," $|\mathbb{Z}|:=\omega$, and then beginning to extend the arithmetic operations. But ensuring this is well-defined is nontrivial, since, e.g., addition is not commutative: $1+\omega \neq \omega +1$, etc. Not sure if this is the sort of thing in which you are interested... – Benjamin Dickman Aug 21 '14 at 17:48
• @StevenGubkin Can you move your anti-curl example to an answer, and I'll copy-paste the cyclic exponentiation as a separate answer? So far I kind of think all three of the comments here are actually answers. – Chris Cunningham Aug 21 '14 at 20:08
• complex analysis is awash in multiply-valued "functions" for which the formula appears innocent from our real number experience. – James S. Cook Aug 22 '14 at 14:13
## 6 Answers
The naive "addition" on rational numbers, denote it $\oplus$, where you simply add numerator with numerator and denominator with denominator.
Example: $\tfrac 1 3 \oplus \tfrac 3 4 = \tfrac 4 7$.
I think this is less obviously non-well-defined than the other examples involving fractions.
• This is known as the mediant fraction. It has some interesting applications in number theory. – Bill Dubuque May 15 '18 at 0:13
• The mediant appears in the theory of Farey sequences (en.wikipedia.org/wiki/Farey_sequence). – Dag Oskar Madsen Dec 7 '20 at 20:52
• That well-known application (and others) are mentioned on the Wikipedia page that I linked above in my comment. – Bill Dubuque Dec 7 '20 at 20:55
Supposedly defining a ring homomorphism $f:\mathbb Z/m \to \mathbb Z/n$ by $f(k)=k$ is a popular, traditional attempt to define a map ... initially $\mathbb Z\to \mathbb Z/n$, that may be imagined to "factor through" the quotient $\mathbb Z/m$, but which does not without assumptions on $m,n$. Yet, in my experience, students look at the "formula" for it, and cannot see the difficulty, since the formula has such a straightforward appearance. :)
The linked question has a decent example, from mweiss:
"Well-defined" only becomes a meaningful concept if you have experience with cases in which something is not well-defined. Here is a simple case in which something seems entirely reasonable: Let $m,n$ be two integers and $[m],[n]$ their equivalence classes mod $p$. Define $[m]^{[n]}=[m^n]$. Seems reasonable, especially because of the way we define the other arithmetic operations mod $p$. But as soon as you try to calculate particular examples you realize the definition is broken; different representatives of $[n]$ yield different results.
• Since the example came from my own comment on the linked post, I've taken the liberty of editing the formatting here to make it clearer. Yes, the example was about exponentiation, not multiplication. – mweiss Aug 21 '14 at 21:08
Here is a troubling one that requires very little background:
Given $x \in \mathbf{R}$ and $q \in \mathbf{Q}$, define $x^q$ by:
• Write q as a fraction $\frac{a}{b}$.
• Evaluate $\sqrt[b]{x^a}$.
Then, for example, we compute something unpleasant like
$i = \sqrt{-1} = (-1)^{1/2} = (-1)^{2/4} = \sqrt[4]{(-1)^2} = 1$
or even
$-1 = \sqrt[3]{-1} = (-1)^{1/3} = (-1)^{2/6} = \sqrt[6]{(-1)^2} = 1$
• ah yes, $-1 = \sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)} = \sqrt{1}=1$. Wait. What! – James S. Cook Aug 23 '14 at 3:43
You can't get much more prototypical than Euclid. Book I, proposition 1 assumes that the two circles intersect, but there is no postulate that guarantees that they intersect. From a modern point of view, the hidden assumption is that you're working in a space like $\mathbb{R}^2$, with completeness, rather than, say, $\mathbb{Q}^2$.
The simple example I use is an operation on rational numbers, call it $\star$ where
$$\frac{a}{b}\star\frac{c}{d} := \frac{\max(a,c)}{\max(b,d)}$$
This then motivates the need usual proof that one needs to check that $+$ and $\cdot$ are still well defined modulo an ideal. | 1,482 | 5,723 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.3125 | 3 | CC-MAIN-2021-17 | latest | en | 0.965709 |
https://www.enotes.com/homework-help/3x-y-3-0-1-2-353870 | 1,485,197,745,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00008-ip-10-171-10-70.ec2.internal.warc.gz | 894,670,833 | 12,541 | # 3x-y+3=0;(-1,2)write the standard form of the equation of the line that is parallel to the graph of the given equation and that passes through the point with the given coordinates.
sciencesolve | Teacher | (Level 3) Educator Emeritus
Posted on
You should convert the standard form of the given equation to the slope intercept form `y = mx + n` , hence, you need to isolate y to the left side such that:
`-y = -3x - 3 => y = 3x + 3`
Comparing the result to the slope intercept form y = mx + n yields that m = 3 (slope) and n = 3 (y intercept).
You should remember that the slopes of two parallel lines have equal values, hence, the slope of the line whose equation will be found is also 3.
You need to write the point slope form of equation of a line such that:
`y - y_1 = m(x - x_1)`
Notice that the coordinates `x_1` and `y_1` are provided, hence, you need to substitute -1 and 2 for `x_1` and `y_1` such that:
`y -2 = 3(x + 1)`
You need to convert the point slope form into the standard form such that:
`y - 2 - 3x - 3 = 0`
`-3x + y - 5 = 0 =gt 3x - y + 5 = 0`
Hence, evaluating the standard form of equation of the line, under the given conditions, yields `3x - y + 5 = 0` . | 370 | 1,197 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.90625 | 4 | CC-MAIN-2017-04 | longest | en | 0.884535 |
https://robwork.dk/apidoc/java/javadoc/org/robwork/sdurw_math/Line2DPolar.html | 1,723,439,550,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722641028735.71/warc/CC-MAIN-20240812030550-20240812060550-00613.warc.gz | 393,768,305 | 4,637 | ## Class Line2DPolar
• java.lang.Object
• org.robwork.sdurw_math.Line2DPolar
• ```public class Line2DPolar
extends java.lang.Object```
Describes a line in 2D in polar coordinates.
• ### Constructor Summary
Constructors
Constructor Description
`Line2DPolar()`
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
`Line2DPolar(double rho)`
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
```Line2DPolar(double rho, double theta)```
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
```Line2DPolar(long cPtr, boolean cMemoryOwn)```
`Line2DPolar(Line2D line)`
constructor - The line moving through the line segment.
```Line2DPolar(Vector2D pnt, double theta)```
constructor
```Line2DPolar(Vector2D start, Vector2D end)```
constructor - The line moving through the segment from 'start' to 'end'.
• ### Method Summary
All Methods
Modifier and Type Method Description
`Vector2D` `calcNormal()`
get normal of line
`void` `delete()`
`double` `dist2(Vector2D pnt)`
The L_2 distance from 'pnt' to the line.
`static long` `getCPtr(Line2DPolar obj)`
`double` `getRho()`
the shortest distance from origo the line
`double` `getTheta()`
angle in radians from x-axis up to the line that connects the origo and the
point on the line that is closest to origo.
`static Vector2D` `linePoint(Line2DPolar line)`
A supporting point on the line (equal to rho * normal).
`static Line2DPolar` ```lineToLocal(Pose2D pose, Line2DPolar line)```
line given relative to the coordinate frame of pose.
`static Vector2D` ```normalProjectionVector(Line2DPolar line, Vector2D pnt)```
The vector for the projection of pnt onto the normal of line.
`static Vector2D` ```projectionPoint(Line2DPolar line, Vector2D pnt)```
The point for the projection of 'pnt' onto 'line'.
• ### Methods inherited from class java.lang.Object
`equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`
• ### Constructor Detail
• #### Line2DPolar
```public Line2DPolar(long cPtr,
boolean cMemoryOwn)```
• #### Line2DPolar
```public Line2DPolar(double rho,
double theta)```
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
Parameters:
`rho` - [in] distance to the point on line which is closest to origo
`theta` - [in] angle from x-axis up to the line that connects the origo and the
point on the line that is closest to origo.
• #### Line2DPolar
`public Line2DPolar(double rho)`
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
Parameters:
`rho` - [in] distance to the point on line which is closest to origo
• #### Line2DPolar
`public Line2DPolar()`
constructor
rho * (cos(theta), sin(theta)) is the point on the line nearest to origo.
• #### Line2DPolar
```public Line2DPolar(Vector2D pnt,
double theta)```
constructor
Parameters:
`pnt` - [in] is any point on the line
`theta` - [in] angle in radians from x-axis up to the line that connects the origo
and the point on the line that is closest to origo.
• #### Line2DPolar
```public Line2DPolar(Vector2D start,
Vector2D end)```
constructor - The line moving through the segment from 'start' to 'end'.
Parameters:
`start` - [in] point on line
`end` - [in] point on line
• #### Line2DPolar
`public Line2DPolar(Line2D line)`
constructor - The line moving through the line segment.
Parameters:
`line` - [in] the line described as a segment
• ### Method Detail
• #### getCPtr
`public static long getCPtr(Line2DPolar obj)`
• #### delete
`public void delete()`
• #### getRho
`public double getRho()`
the shortest distance from origo the line
Returns:
• #### getTheta
`public double getTheta()`
angle in radians from x-axis up to the line that connects the origo and the
point on the line that is closest to origo.
Returns:
• #### calcNormal
`public Vector2D calcNormal()`
get normal of line
• #### dist2
`public double dist2(Vector2D pnt)`
The L_2 distance from 'pnt' to the line.
• #### projectionPoint
```public static Vector2D projectionPoint(Line2DPolar line,
Vector2D pnt)```
The point for the projection of 'pnt' onto 'line'.
• #### linePoint
`public static Vector2D linePoint(Line2DPolar line)`
A supporting point on the line (equal to rho * normal).
• #### normalProjectionVector
```public static Vector2D normalProjectionVector(Line2DPolar line,
Vector2D pnt)```
The vector for the projection of pnt onto the normal of line.
Parameters:
`line` - [in] a line.
`pnt` - [in] a point.
Returns:
the projection vector.
• #### lineToLocal
```public static Line2DPolar lineToLocal(Pose2D pose,
Line2DPolar line)```
line given relative to the coordinate frame of pose.
Parameters:
`pose` - [in] the pose.
`line` - [in] the line.
Returns:
a Line2DPolar. | 1,418 | 4,797 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.65625 | 3 | CC-MAIN-2024-33 | latest | en | 0.567605 |
https://riceissa.github.io/everything-list-1998-2009/9499.html | 1,656,594,413,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103821173.44/warc/CC-MAIN-20220630122857-20220630152857-00316.warc.gz | 531,689,913 | 4,390 | # RE: *THE* PUZZLE (was: ascension, Smullyan, ...)
From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Thu, 15 Jun 2006 22:40:05 +1000
I've always wondered (from a position of relative mathematical naivete, please understand)
about the process in arguments like this whereby reasoning about arithmetic comes to
include the labels applied to arithmetical statements, on the grounds that these labels happen
themselves to be numbers. The function g as defined below is based on a look-up table, and
the contradiction consists in that this table has the same label applied to more than one
distinct value, i.e. fk(k) and fk(k)+1. Doesn't this just mean that the chosen labelling
scheme is ill-chosen? Is there any way to separate f and g formally, perhaps to call g a
meta-function (or something) rather than a function of f, and thus avoid the contradiction?
Or is this whole train of thinking basically flawed, there being no method in general to
banish g from the cosy world of functions like f(x)=x^2 or sin(x) or |x|?
Stathis Papaioannou
> Let me just recall what is a computable function from N to N. It is > function from N to N which is such that it is exist a finite way to > explain how to compute it in a finite number of steps on any natural > numbers. More precisely: f is computable if there is a language L such > that f admits a finite code/program/description/number explaining how > to compute f(n), in a finite time on any number natural n.> I will say that that a language L is universal, if all computable > functions from N to N admit a code in L.> > A weak form of Church thesis can be put in this way: there exists a > universal language.> > I will say a digital (or finitely describable) machine M is universal > if M can "understand" a universal language L, in the sense of being > able to compute any computable functions described in L (and thus all > given that L is universal) . In term of digital machine, Church thesis > becomes: there exists a universal digital machine.> > Now what is wrong with the following argument: if there is an universal > language or machine, the computable functions can be described by > finite description in that language, or program for that machine.> Such a set is obviously enumerable. There is a bijection between N and > the set of those descriptions:> > 1 f1> 2 f2> 3 f3> 4 f4> etc.> > So the following function g is well-defined by:> > g(n) = fn(n) + 1> > Then, to compute it on the number n (439 say), just generate the > description/program of f1 f2 f3 ... until fn, that is f439, apply it > on n to get fn(n), f439(439), and add 1 to get g(n) = fn(n)+1, that is > here: f439(439)+1.> But then g cannot be described in the language L! Why? Suppose g is > described by a code in the language L: then g belongs somewhere in the > list f1, f2, f3, f4, f5, .... Thus there would exist a number k such > that g = fk, and thus g(k) = fk(k); but g(k) = fk(k)+1. And thus> > fK(k) = fk(k)+1 (*)> > And fk(k) is a well defined number given that the fi are all computable > functions from N to N. So I can substract fk(k) on both sides of (*) > just above, and I get 0 = 1 (contradiction). So there is no universal > language, we cannot generate all computable functions, still less, > then, to dovetail on them.>
_________________________________________________________________ | 875 | 3,362 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.96875 | 3 | CC-MAIN-2022-27 | latest | en | 0.905606 |
https://www.topmarks.co.uk/Search.aspx?q=negative%20numbers | 1,563,736,337,000,000,000 | text/html | crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00180.warc.gz | 850,934,287 | 6,280 | # Browse by subject and age group
Negative Numbers
Outsmart the Mission 2110 Roboidz by ordering the negative numbers in ascending or descending order.
Pupils
Not tablet-friendly
10-11 year olds
Teddy Numbers
The Teddy Numbers game can help you to learn numbers to 15. Learn the digits and words for the numbers and it game can help you learn to count too.
Pupils, Parents
Tablet-friendly
3-5 year olds
Number Patterns
It's the year 2010 and the world has been taken over by Roboidz. You need to use your number pattern know-how to crack the Mission 2110 codes and cut off the enemy's vital food source.
Pupils
Not tablet-friendly
10-11 year olds
Ordering Numbers including Negatives
A simple game where you need to order the numbers from smallest to biggest. The range includes negative numbers.
Pupils
Not tablet-friendly
7-11 year olds
Placing Numbers on a Number Line
A versatile number line that can be used at many different levels beginning at numbers 1 to 10 up to fractions, decimals and negative numbers. Place numbers on a number line and see how close you can get.
Pupils
Tablet-friendly
5-14 year olds
Number Pieces
A great teaching resource which helps children to understand place value. It uses hundreds, tens and ones blocks. It's great for demonstrating decomposition as you can break apart the pieces.
Pupils
Tablet-friendly
5-7 year olds
Temperature
Learn to read a thermometer. Useful for getting to grips with reading lots of different scales. Also work out the difference in temperature between two thermometers.
Teachers, Pupils
Not tablet-friendly
6-11 year olds
Number Line
An interactive numberline to support the teaching of number and scales. Useful for teaching negative numbers. It has been designed for use on an interactive whiteboard.
Teachers
Not tablet-friendly
7-11 year olds
Rocket Rounding
A multiple choice game involving rounding numbers to ten, a hundred and to a whole number. There are two options, one with a number line and the other more difficult level, without one.
Pupils
Tablet-friendly
7-11 year olds
Subtraction Grids
Can you meet the challenge to see how many subtraction calculations you can do in two minutes? There are different levels and you can choose either one or two missing numbers to make your number sentence correct.
Teachers, Pupils
Tablet-friendly
6-11 year olds
1 2 3 4 5
Topmarks uses cookies on this website
Cookies improve how our website is used, so that we can continue to improve the site. For more information see our cookie policy.
By using this website you are agreeing to our use of cookies. | 589 | 2,612 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.640625 | 3 | CC-MAIN-2019-30 | longest | en | 0.89152 |
https://leowiki.com/how-many-probabilities-are-there-when-you-flip-a-coin-4-times-1652468026 | 1,674,913,033,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00234.warc.gz | 380,386,084 | 14,327 | Trang chủ » blog » If you flip a fair coin four times, what is
# If you flip a fair coin four times, what is
explanation :
Consider a general tax of flipping N coins and the probability of precisely K times the heads are up. Let ‘s use a symbol P ( N, K ) for this probability.
Knowing this, we can use the leave to evaluate
P(4,2)+P(4,3)+P(4,4)
which will answer the interview of what is the probability of getting heads at lease 2 times out of flipping a mint 4 times.
Since there are only 2 outcomes from a single flip, head or tail, for N flips we can get
2N
unlike outcomes.
The outcomes we are concerned in are those that contain precisely K heads and
N−K
tails in any order. That is where combinatorics will come handy.
Any result of the random experiment of flipping a coin N times can be represented as a string of N characters, each one being a letter H ( to designate that the comparable flip resulted in a head ) or T ( if it was a buttocks ).
The number of outcomes with precisely K heads out of N flips is the number of strings of the length N consist of characters H and T, where H occurs K times and T occurs
N−K
times in any order.
This number is, obviously, a number of combinations of K items out of N, which symbolically is represented as
CNK
( there are other notations as well ) and is equal to
CNK=N!K!⋅(N−K)!
For all the hypothesis behind this and early formulas of combinatorics we can refer you to a correspond part of the advanced course of mathematics for high educate at Unizor.
The probability of having K heads out of N flips is equal to the ratio of the number of “ successful ” outcomes ( those with precisely K heads ) to a sum number of outcomes mentioned above :
P(N,K)=2NCNK=N!K!⋅(N−K)!⋅2N
now we can calculate the probability of at least two heads out of four flips ( do n’t forget that
0≠1
by definition ) :
P(4,2)+P(4,3)+P(4,4)=
=124⋅[4⋅3⋅2⋅1(1⋅2)⋅(1⋅2)+4⋅3⋅2⋅1(1⋅2⋅3)⋅(1)+4⋅3⋅2⋅1(1⋅2⋅3⋅4)⋅1]=
=6+4+116=1116 | 600 | 1,968 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2023-06 | latest | en | 0.93443 |
https://oeis.org/A258566 | 1,638,023,267,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964358189.36/warc/CC-MAIN-20211127133237-20211127163237-00504.warc.gz | 527,905,675 | 4,498 | The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation.
Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)
A258566 Triangle in which n-th row contains all possible products of n-1 of the first n primes in descending order. 1
1, 3, 2, 15, 10, 6, 105, 70, 42, 30, 1155, 770, 462, 330, 210, 15015, 10010, 6006, 4290, 2730, 2310, 255255, 170170, 102102, 72930, 46410, 39270, 30030, 4849845, 3233230, 1939938, 1385670, 881790, 746130, 570570, 510510 (list; table; graph; refs; listen; history; text; internal format)
OFFSET 1,2 COMMENTS Triangle read by rows, truncated rows of the array in A185973. Reversal of A077011. LINKS FORMULA T(1,1) = 1, T(n,k) = A000040(n)*T(n-1,k) for k < n, T(n,n) = A000040(n-1) * T(n-1,n-1). EXAMPLE Triangle begins: 1; 3, 2; 15, 10, 6; 105, 70, 42, 30; 1155, 770, 462, 330, 210; 15015, 10010, 6006, 4290, 2730, 2310; MAPLE T:= n-> (m-> seq(m/ithprime(j), j=1..n))(mul(ithprime(i), i=1..n)): seq(T(n), n=1..10); # Alois P. Heinz, Jun 18 2015 MATHEMATICA T[1, 1] = 1; T[n_, n_] := T[n, n] = Prime[n-1]*T[n-1, n-1]; T[n_, k_] := T[n, k] = Prime[n]*T[n-1, k]; Table[T[n, k], {n, 1, 10}, {k, 1, n}] // Flatten (* Jean-François Alcover, May 26 2016 *) CROSSREFS Row sums: A024451. T(n,1) = A070826(n). T(n,n) = A002110(n-1). For 2 <= n <= 9, T(n,2) = A118752(n-2). [corrected by Peter Munn, Jan 13 2018] T(n,k) = A121281(n,k), but the latter has an extra column (0). Cf. A077011, A185973, A286947. Sequence in context: A218969 A345291 A185973 * A051917 A302845 A291251 Adjacent sequences: A258563 A258564 A258565 * A258567 A258568 A258569 KEYWORD nonn,tabl AUTHOR Philippe Deléham, Jun 03 2015 STATUS approved
Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam
Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent
The OEIS Community | Maintained by The OEIS Foundation Inc.
Last modified November 27 09:19 EST 2021. Contains 349365 sequences. (Running on oeis4.) | 821 | 2,094 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.5625 | 4 | CC-MAIN-2021-49 | latest | en | 0.478776 |
https://justaaa.com/finance/10689-wee-beastie-animal-farm-bonds-have-9-years-to | 1,696,082,159,000,000,000 | text/html | crawl-data/CC-MAIN-2023-40/segments/1695233510676.40/warc/CC-MAIN-20230930113949-20230930143949-00807.warc.gz | 365,001,650 | 9,898 | Question
Wee Beastie Animal Farm bonds have 9 years to maturity and pay an annual coupon at...
Wee Beastie Animal Farm bonds have 9 years to maturity and pay an annual coupon at the rate of 6.2 % . The face value of the bonds is \$ 1,000 . The price of the bonds is \$ 1,091.31 to yield 4.92 % . What is the capital gain yield on the? bonds?
The capital gain yield? is:???(Select the best choice? below.)
A. negative 0.73
B. plus 0.62 %
C. negative 0.77 %
D. negative 0.76 %
E. plus 0.68 %
Face Value = \$1,000
Annual Coupon Rate = 6.2%
Annual Coupon = 6.2% * \$1,000
Annual Coupon = \$62
Time to Maturity = 9 years
Annual YTM = 4.92%
Current Price, P0 = \$1,091.31
Next Year Price, P1 = \$62 * PVIFA(4.92%, 8) + \$1,000 * PVIF(4.92%, 8)
Next Year Price, P1 = \$62 * (1 - (1/1.0492)^8) / 0.0492 + \$1,000 / 1.0492^8
Next Year Price, P1 = \$1,083.00
Capital Gain Yield = (P1 - P0) / P0
Capital Gain Yield = (\$1,083.00 - \$1,091.31) / \$1,091.31
Capital Gain Yield = -0.0076
Capital Gain Yield = -0.76%
Earn Coins
Coins can be redeemed for fabulous gifts. | 385 | 1,070 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.578125 | 3 | CC-MAIN-2023-40 | latest | en | 0.749702 |
http://www.ukclimbing.com/forums/t.php?t=376804 | 1,484,696,044,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz | 733,060,464 | 12,611 | ## / NEWS: Steve Crowe Climbs Full Height of Malham Cove
This topic has been archived, and won't accept reply postings.
Steve Crowe, 51, from Sunderland, has become the second person to climb the full height of Malham Cove.
Steve first climbed The Groove back in 2003 and from then on had an ambition to climb from the bottom to the top of the Cove and started to attempt the complete link of the pitches to the top, but two knee operations and last year a dislocated shoulder held him back.
News item updated with a topo.
Aye went up with steve on Tuesday and saw him complete this ascent from left hand side of the catwalk. could take in the full length of the route from there and fully appreciate the position and the awesomeness of the achievement. Must be good to be able to say to the tourists when they ask that stupid question "have you climbed to the top". Aye i have.
Cheers
Tim
Bloody hell thats some effort! Well done that man.
You'd think 7b into 8a+ into 7c into 7c would be more than 8b. But what do I know....only that he's a beast
In reply to UKC News: Brilliant effort Steve, I guess all that waits now is the first Para-Alpine Ascent of the Cove, (via either yours or Johns route). There's a good challenge ! Any takers? ;-)
Tim
Fantastic! An inspiration to us all.
Mick
In reply to 220bpm: it's not 7b into 8a+ the 8a+ (the groove) starts and includes the 7b (something stupid). So its 8a+ into 7c (freeandeasy) into breach (which is apparently 8a when it includes rennaisance) otherwise ive heard 7b/+. But not been on it so cannot say for sure. Still sounds hard!
Cheers
Tim
That's very impressive.
Nice one Steve!
> You'd think 7b into 8a+ into 7c into 7c would be more than 8b.
The 7b is actually part of the 8a+ (The Groove) so the whole equation would be:
(((7b + 7b+)=7c+) + good rest + 7c+ = 8a+) + ok rest + 7c + good rest + 7c =Glory
Well done Steve
In reply to Serpico & tbertenshaw:
Cheers for the explanation guys.
In reply to UKC News: Wow.
In reply to UKC News: Great effort Steve. Very Impressive!
Top job, Steve.
I remember as a young(er) lad reading some correspondance in the Climber and Rambler, contesting the "record" for climbing the "three Yorkshire roofs" (back in the day when they were all aid climbs). Can't remember what it was (nor how they got from one to the other: running?)
I assume Gordale & Kilnsey are have their "full height" climbed: is the record the "next big challenge"
Y.
Great effort.
Why has it taken 21 years for someone to repeat this feat?
Chris
Probably no other bugger has a long enough rope !
In reply to Chris Craggs: It's a new feat, not the old one! Steve finished up Breach, John finished independantly further left. There was a very heated argument in the press you may remember, accusations of glued on holds. John claimed it wasn't him, others seemed to think it was possibly him, the people who truely know never let on. The original finish hasn't been repeated (to the best of my knowledge) and John was quoted in the mags of the time that the roof was the crux of the route, giving it a hefty (for the time) 8c grade.
Andy F
> (In reply to UKC News)
>
> Great effort.
>
> Why has it taken 21 years for someone to repeat this feat?
Because you never got on it Chris.
In reply to Mick Ryan - UKClimbing.com:
any chance of someone drawing a line on the picture to give us a topo? Of john's route and steve's?
lost
> (In reply to Chris Craggs) It's a new feat, not the old one!
Depends whether the feat is repeating John Dunne's route (in which case you're right) or climbing the full height of Malham Cove (which I think Chris might have meant)?
Yes I remember all the fuss and was wondering why in the intervening 21 years no-one has bothered to get on and repeat it in either form.
Chris
I appreciate there are rests of sorts, and Steve is in the position to judge, but doesn't tacking a 7c+ with extra roof onto the top of a tough 8a+ warrant/merit more than an 8b in total?
great effort, and about time someone took on and did the challenge in whatever form, well done.
In reply to richardh: It's 8a+ to a brilliant rest, then 7c to a brilliant rest, then 7b+ for the final roof. Which does, admittedly sound more than 8b, given the enormous rope drag at the top. A stunning effort whatever the grade. I was talking to Steve about this a while ago and he did say he planned to walk off, which IMHO is a fabulous way to finish. A truely stupendous ascent and one which should be roundly applauded.
Andy f
Fantastic achievement Steve.
Always fascinated me this.
Would it be cheating to have a friend waiting at the bottom of Free and Easy to put you on belay as you climbed past?
Enty
Isn't that how John Dunne did it - or something similar?
Chris
News item now updated with a topo.
http://www.ukclimbing.com/news/item.php?id=49821
Did Dunny solo Rodney's Route first? Sorry Steve your ascent doesn't count ;-)
Enty
In reply to andy farnell: I thought Arron Tonks did the top roof a while back, though i maybe wrong. I abbed it a few years ago and the glue marks are still there, in just the place where you might want a hold.
In reply to gaz parry: I've been climbing with Aaron for the last few weeks and he hasn't mentioned it, I'll ask him next time I see him.
Andy
In reply to UKC News: Fantastic effort Steve. Inspirational and well deserved as you looked close back in June when I was on The Groove.
> (In reply to UKC News)
>
> Great effort.
>
> Why has it taken 21 years for someone to repeat this feat?
>
>
> Chris
Sorry it took me so long Chris! | 1,448 | 5,581 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.59375 | 3 | CC-MAIN-2017-04 | latest | en | 0.948095 |
https://www.wyzant.com/resources/answers/222205/could_some_one_help_me_out | 1,620,460,384,000,000,000 | text/html | crawl-data/CC-MAIN-2021-21/segments/1620243988850.21/warc/CC-MAIN-20210508061546-20210508091546-00479.warc.gz | 1,111,795,378 | 13,682 | Kuna P.
# could some one help me out?
how can I proof that( sine alpha + cos alpha) <= 1/2, using the fact that sine (- beta) and cos (-beta) = cos beta?
Arturo O.
The statement
sinα + cosα ≤ 1/2
is not correct in general. For example, if α = π/2, then
sinα + cosα = sin(π/2) + cos(π/2) = 1 + 0 > 1/2
sin(-β) = -sinβ
cos(-β) = cosβ
Is there some other information in the statement of the problem?
Report
06/04/16 | 156 | 423 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.21875 | 3 | CC-MAIN-2021-21 | latest | en | 0.672201 |
http://textlab.io/doc/23222141/mathematics-and-statistics-91267--2.12--exam | 1,534,291,223,000,000,000 | text/html | crawl-data/CC-MAIN-2018-34/segments/1534221209650.4/warc/CC-MAIN-20180814225028-20180815005028-00459.warc.gz | 397,059,429 | 14,360 | # Mathematics and Statistics 91267 (2.12) Exam
#### Document technical information
Format docx
Size 128.0 kB
First found May 22, 2018
#### Document content analysis
Category Also themed
Language
English
Type
not defined
Concepts
no text concepts found
#### Transcript
```Enter School name here
NAME: _______________________________________
2
Teacher: __________
Level 2 Mathematics and Statistics
91267 (2.12): Apply probability methods in solving
problems
Credits: Four
Achievement
Apply probability methods in solving
problems
Achievement with Merit
Apply probability methods, using
relational thinking, in solving
problems.
Achievement with Excellence
Apply probability methods, using
extended abstract thinking, in solving
problems
You should answer ALL parts of ALL questions in this booklet.
You should show ALL your working.
If you need more space for any answer, use your own paper and clearly number the question.
Check that this booklet has pages 2–9 in the correct order and that none of these pages is blank.
YOU MUST HAND THIS BOOKLET TO YOUR TEACHER AT THE END OF THE ALLOTTED TIME.
TOTAL
AS91267 Kohia 2014
You are advised to spend 60 minutes answering the questions in this booklet.
QUESTION ONE
(a)
The Eastpaddock Mall has two types of food outlets: Cafés and Fast food outlets.
70% of visitors to the Mall stay at least 60 minutes.
Of the visitors to the Mall that stay at least 60 minutes, 45% of them use a food outlet.
Only a quarter of visitors to the Mall that stay less than 60 minutes use a food outlet.
Some of the information is shown on the probability tree below.
0.7
Stay at least
60 minutes
Stay less than
60 minutes
Use a food
provider
Don’t use a
food provider
Use a food
provider
Don’t use a
food provider
(i)
Calculate the probability that a visitor to the Mall will stay at least 60 minutes
and will use a food provider.
(ii)
Calculate the percentage of visitors to the Mall that don’t use a food provider.
(iii)
60% of food providers are Fast food outlets.
If a visitor to the Mall stays at least 60 minutes, calculate the probability that
they will use a café.
AS 91267 2.12 PROBABILITY
KOHIA Page 2
Assessor’s
e only
(b)
At the Eastpaddock Mall there is a large store called The HaveOne.
65% of visitors to the Mall are female.
81% of females visit The HaveOne store.
Assessor’s
e only
Some of the information is shown on the probability tree below.
Female
Visited The
HaveOne
Don’t visit The
HaveOne
0.65
Male
(i)
What proportion of visitors to the Mall were female that visited The HaveOne?
(ii)
There were 4500 visitors to the Mall last week. How many would you have expected
to be female that didn’t visit the The HaveOne store?
(iii)
Two thirds of the visitors to the Mall visit The HaveOne store.
If a male is selected, calculate the probability they visited The HaveOne store.
(iv)
If a visitor to the Mall visited The HaveOne store, what is the probability that they
were female?
AS 91267 2.12 PROBABILITY
KOHIA Page 3
QUESTION TWO
(a)
Assessor’s
e only
in a survey at the Mall last week.
The table below gives the number of ‘groups’ in each category.
Spent at least
\$100
Spent less than
\$100
Totals
Individuals
Couples
Families
Totals
1560
1250
310
3120
480
220
680
1380
2040
1470
990
4500
(i)
What proportion of ‘groups’ in the survey were individuals at the Mall?
(ii)
Of those groups that spent less than \$100 at the Mall, what proportion were
individual ‘groups’?
(iii) Last year 60 000 ‘groups’ visited the Mall.
Using the survey results, how many couple ‘groups’ would you expect to spend at least
\$100 at the Mall?
(iv) Show that for the ‘groups’ surveyed, the risk of spending at least \$100 is about 7/10.
AS 91267 2.12 PROBABILITY
KOHIA Page 4
(v)
A newspaper headline on a report summarising the results of the survey stated
“Survey shows individuals are two and half times as likely to spend at least
\$100 on their visit to the Mall than a family ‘group’.”
Show whether or not you agree with this headline, stating full reasons and calculations.
(b)
The survey also recorded the age group of the individual ‘groups’ that visited the Mall.
This showed:
There were 580 individual ‘groups’ aged at least 50 years old that spent at
least \$100 at the Mall
There were 1170 individual ‘groups’ aged less than 50 years old.
There were 480 individual ‘groups’ who spent less than \$100.
Individuals at
least 50
Individuals less
than 50
Totals
Spent at least
\$100
Spent less than
\$100
2040
Totals
AS 91267 2.12 PROBABILITY
KOHIA Page 5
Assessor’s
e only
(i)
What proportion of individual ‘groups’ visiting the Mall who spent at least \$100
were aged less than 50?
(ii)
A researcher claimed that individual ‘groups’ aged less than 50 are 25% more likely to
spend at least \$100 at the Mall than those aged at least 50.
Comment on this claim using the survey data. Use suitable reasons and calculations to
AS 91267 2.12 PROBABILITY
KOHIA Page 6
Assessor’s
e only
QUESTION THREE
(a)
Assessor’s
e only
Visitors to the Eastpaddock Mall walk a variety of distances inside the Mall.
The distance walked by visitors in the Mall is normally distributed, with a mean of
1250 metres and standard deviation of 310 metres.
(i)
What is the probability that a visitor to the Mall will walk between 1000 and
1500 metres?
(ii)
What percentage of visitors to the Mall, walk less than 500 metres?
(iii) 5% of visitors to the Mall are classified as ‘brief visitors’.
Calculate the maximum distance (to the nearest metre) walked in the Mall by
a ‘brief visitor’.
(b)
Mall officials claim that older visitors to the Mall generally don’t walk as far as younger
visitors.
It was found that only 10% of older visitors to the Mall walk more than 1000 metres.
Calculate the mean distance walked by older visitors to the Mall.
Assume that a normal distribution can be used to model the distances walked by older
visitors in the Mall and that it has the same standard deviation (310 metres).
AS 91267 2.12 PROBABILITY
KOHIA Page 7
(c)
A survey of the distances walked by male and female visitors to the Mall last week
produced the following results.
Distance walked by Males
Distance walked by Females
AS 91267 2.12 PROBABILITY
KOHIA Page 8
Assessor’s
e only
(i)
Give a possible reason why the distances walked by males is not independent
of the distances walked by females.
(ii)
Compare and contrast the distributions and males and for females.
You should discuss shape, centre and spread in relation to the context.
AS 91267 2.12 PROBABILITY
KOHIA Page 9
Assessor’s
e only
AS 91267 2.12 PROBABILITY
KOHIA Page 10
``` | 1,686 | 6,560 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.375 | 3 | CC-MAIN-2018-34 | longest | en | 0.851928 |
https://www.nagwa.com/en/videos/870106039072/ | 1,718,586,269,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861674.39/warc/CC-MAIN-20240616233956-20240617023956-00482.warc.gz | 794,734,877 | 35,707 | Question Video: Finding the Measure of an Angle given a Relation with Its Supplementary Angle’s Measure | Nagwa Question Video: Finding the Measure of an Angle given a Relation with Its Supplementary Angle’s Measure | Nagwa
# Question Video: Finding the Measure of an Angle given a Relation with Its Supplementary Angleโs Measure Mathematics • First Year of Preparatory School
## Join Nagwa Classes
A pair of supplementary angles are in the ratio of one to nine. What is the measure of the smaller angle?
02:22
### Video Transcript
A pair of supplementary angles are in the ratio of one to nine. What is the measure of the smaller angle?
In this question, we are told that a pair of supplementary angles are in the ratio of one to nine. This means that one angle is nine times the size of the other. We need to use this information to determine the measure of the smaller angle.
To answer this question, we can first recall that we call two angles ๐ด and ๐ต supplementary if their measures sum to 180 degrees. We can note that the angles are in a ratio of one to nine. If we say that angle ๐ด is the angle with smaller measure, then this means that nine times the measure of angle ๐ด is equal to the measure of angle ๐ต.
We want to find the measure of angle ๐ด since it is the smaller angle. We can do this by substituting this expression of the measure of angle ๐ด into the equation of the measure of the supplementary angles. This gives us that the measure of angle ๐ด plus nine times the measure of angle ๐ด is equal to 180 degrees. We can then simplify the left-hand side of the equation to get 10 times the measure of angle ๐ด. We can then solve for the measure of the smaller angle by dividing the equation through by 10. We get that the smaller angle has measure 18 degrees.
We can check this answer by multiplying the measure by nine to find that the measure of the larger angle is 162 degrees. We can check that these angles are indeed supplementary by checking that the sum of their measures is 180 degrees. We can calculate that 18 degrees plus 162 degrees is 180 degrees, confirming that the angles are supplementary.
Hence, if two supplementary angles are in the ratio of one to nine, we have shown that the smaller angle must have measure 18 degrees.
## Join Nagwa Classes
Attend live sessions on Nagwa Classes to boost your learning with guidance and advice from an expert teacher!
• Interactive Sessions
• Chat & Messaging
• Realistic Exam Questions | 561 | 2,501 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.78125 | 5 | CC-MAIN-2024-26 | latest | en | 0.936461 |
https://www.polytechforum.com/control/water-head-pressure-pipe-diameter-1972-.htm | 1,702,229,848,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679102612.80/warc/CC-MAIN-20231210155147-20231210185147-00733.warc.gz | 1,017,468,251 | 15,559 | # Water head, pressure, pipe diameter
• posted
Please pardon this multiple newsgroup article. I do not know which newsgroup would be the most-correct. I hang out in talk.origins mostly, so I do not know which hard-science venue would be appropreate for my query. Hydrodynamics does not seem to be represented in the newsgroup list as far as I can tell.
I live and work on a cattle ranch. (Moooo!) We have a fresh-water spring on the side of a hill that produces about ten gallons (38 liters) of water per minute. We want to go up the hill and dig a hole and bury a 55-gallon (208 liter) drum as a collection box and pipe the spring water into the top of the drum; we then want to run a pipe from the bottom of the drum and down the hill into a cabin. (There will also be an over-flow fitting at the top of the drum, but that is not part of my query.)
At the cabin we hope to get around 43 PSI, or about 100 head feet, of water pressure. We plan on using pipe with an inner diameter of 1.5 inches or perhaps 1.0 inches. We do not want to use a water meter / pressure regulator.
The hill's decline is about 20 degrees, but I do not know if that is important to know. As far as I know, what is important is the height of the water source above the water demand (the "head").
My query is:
1) how high up the hill should the collecting drum be?
2) is there a danger of too much pressure if the collecting drum is too high up the hill?
3) is a pressure regulator at the cabin necessary?
I shall appreciate any thoughts and opinions on the subject.
DMR
• posted
100 feet of elevation will give you 43.31 pounds of pressure. 1 foot = .43 psi just multiply feet of elevation cahng time .43 to get pressure you want
• posted
1. 100 feet of elevation difference.
1. No
2. No
I presume you will use the barrel as a reservoir with overflow of the excess water. Pipe size depend on the horizontal distance and the desired rate of flow. This data is missing.
SJF
• posted
Thank you. Is it really that simple? Seems to me even one of our cows could have figured that out.
• posted
No need for a regulator, the head is unlikely to vary by more than the height of the barrel anyway.
Yup, the angle makes little difference (it does dictate the length of the pipe which will have some influence on flow rate, but not static pressure.
In theory, but even double your proposed head would be unlikely to cause serious problems for most pipes / fittings / brassware etc.
Nope - not unless you need to restrict pressure to some appliance not designed to handle it.
• posted
I did this kinda thing for my mom, back in 1976... geez how time flies:(
Anyhow with a similiar drop and using garden hose we were able to run a sprinkler the kind that goes left and right, for a long time.
in my moms case she had a cistern on the hill, for their home.
I tapped the overflow to a old hot water tank so she could water her garden withourt concern about depleting the water for her house. It worked great till my moved back here and got diovorced.
odd how something that long ago applies here today
• posted
You didn't say how far up the hill the spring is... If you plan to bury the barrel higher up the hill than the spring, you will most likely have to install a pump to fill the barrel.
If you do need to install a pump, it is probably cheaper and a lot less trouble to install the barrel, pump and a pressure regulator at the cabin itself to save installing several hundred feet of pipe and pump motor cables - and maybe use the excess head pressure to run a turbine supplying power to both the pump and the cabin.
HTH, Cameron:-)
• posted
50 PSI is not too high for domestic plumbing. The pump switch at my country cabin keeps the tank pressure between 30 and 50 psi. (The tank level is approximately floor level.) There is a pressure-relief valve rated at 150 psi to ensure that the tank doesn't burst from overpressure, and the pipes can withstand more than that. PVC schedule 40 pipe is rated at 280 PSI cold, derated to 210 at 90F. (Derate to 72% to allow for water hammer.) Where freezing is possible, you may prefer polyethylene, which withstands somewhat lower pressure but tolerates freezing and better withstands water hammer.
In any case, 1-1/4" pipe will generously supply your cabin from any reasonable distance. My cabin is supplied by a 1" pipe through a 100' run from the tank I mentioned. My suburban house is supplied from the main 125' distant through a 1" pipe, and inside plumbing is 1/2" copper, though 3/4 would be better. "Just do it" would seem to be appropriate.
Jerry
• posted
Thank you. Your answer matched the other reply. Hummm. Why did I not know the answer? 100 feet of head is 100 feet of head, after all. It does not seem it could be that simple.
Yes.
The greatest demand at the cabin will probably be a shower: about 5 gallons a minute at most. Since the hill's incline is about 20 degrees, I can probably use a sine table to find distance. Angle "A" is 20 degrees and side "a" is 100 feet. Makes me wish I finished high school. :-) Horizontal distance at the moment is unknown because I do not know how far away, climbing the hill, will be 100 feet high.
Thank you for your answers. Since the answer to query #2 appears to be "No," then we can err on the side of too high.
• posted
You already have your answer -- 100 vertical feet from the cabin will give you 43 PSI static pressure. The pipe size is determined first by your maximum expected draw rate and then by the length. The more gallons per minutes you want, the larger the pipe to avoid too much pressure drop. (sorry, I don't recall the flow rates for different size pipes) Don't forget that there's a 1.25" pipe size.
Bob
• posted
Humm. Thank you for your reply. We plan on burrying the pipe about 24 inches because freezing is a problem. However we will also plumb a fitting to drain the system. The owner of the ranch, bless her heart, wants water in the cabin even in the winter, so we plan on burrying the supply system and then adding drain taps to the shower and sink.
The owner of the ranch suggested 1.5 inch pipe but I said, guessing, that would be "over-kill." However, it also occured to me that bigger is always better. :-) If they can afford the 1.5 inch pipe, I'll install it. I think polyethylene will be used since that is what is used on other parts of the ranch (there is water already going to The Big House and also water going down here in the bunk house where I live).
Your system uses a pump; the cabin where the ranch owners want water does not have any electricity (nearest power line is 22 miles away) so it must all be gravity fed. Elizabeth wants hot water, however, so an on-demand propane heater will probably be used. Since there will be no tanks at the cabin, perhaps we will skip the pressure relief valve.
=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF= =AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF=AF
• posted
That is good to know. We want the system burried around 24 to 28 inches below ground to prevent freezing, so a pressure regulator would have to be located inside and a drain tap added to it. Far better to skip the regulator if not needed.
Great. See, my 2 years of high school did finally pay off! :-)
Cool. Perhaps I'll put it 125 feet just to be sure.
That is also good to know. I would hate to put the sourse too high up the hill and have the sink blow up. :-)
The plan is for only a shower and sink. There was going to be a toilet but the ranch owners purchased a composting "dry" toilet (for \$1,100! Eeeak! I could eat for six months on that amount of money).
• posted
Yeah, damn few of us are getting younger as the days fly by. :-)
Hummm. I was thinking of a second tank at the cabin but the demand for water probably will not be that great.
Thank you for the reply--- I have added it to my notes to tell the ranch owners.
• posted
Hi. Thank you for replying to my query.
The spring is quite high up the hill (at the moment I do not know the height), however there is already a pipe from it going down to the ranch. The goal here is to tap into that pipe to fill a buried 55 drum and have the over-flow continue down to the ranch. The bottom of the barrel would then be plumbed to the cabin.
There is no power at the cabin. As for the excess pressure, the amount of work (w=fd) the water could perform at the cabin would be zero: the water flow would be at where the 55 gallon drum is. But I like the idea of getting power out of the water: at the moment the water flows into a fish pond and then is piped down to the river---- all gravity fed.
• posted
Thank you. Unless I can think of any reason other than cost to not suggest the ranch owners buy 1.5 polyethylene hose, that is what I'll suggest: it is the same size and material currently used elsewhere on the ranch.
As for pressure drops, I'll ask the ranch owners what shower head flow rate they plan on installing. Seems to me they could run five gallons a minute and still suffer no pressure drop.
• posted
1.5 DEFINETELY BETTER! Thats what I helped install for my moms main water line. a gazillion years ago, its pretty cheap to.
you might add a solar panel & battery for minimal lighting too.
• posted
But you could use the overflow if it is run through a pipe down to the cabin. That will give you 43 psi and 10gpm to work with. Would supply quite a goodly steady amount of electricity - expensive electricity until the equipment amortizes but...
The only time you would get less than 10gpm flow would be while water is being drawn at the cabin.
Re: pressure regulator. It is absolutely unneccessary unless you go
-way- up the hill to install the collector barrel.
Re: pipe size. You might as well go with the 1.5 in as the difference in cost between that and a smaller size over 100 ft is minimal.
Harry K
• posted
You could use the water power to run a dynamo to power a windfan. That you could use to blow the wind more to counteract all those windmills that are slowing the wind down.
Remove NOPSAM to email me. Please let me know if you have posted also.
• posted
Yeah you are right about almost no pressure drop because it will be very small (about 1psi) for 5 gal/min over 1000' or pipe. You could drop the size to 1 1/4 and not notice the difference. 1 1/2" pipe would give you more capacity (in case someone want to have a shower, run a washing machine and flush the toilet simultaneously).
• posted
...
The distance that matters is the length of the pipe. One elbow or globe valve has about as much pressure drop as maybe eight feet of straight pipe. Use 3/4, or, to be generous, 1". (Generous means you won't mind someone flushing the toilet while you're taking a shower.)
Jerry
PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners. | 2,656 | 10,985 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.03125 | 3 | CC-MAIN-2023-50 | latest | en | 0.943035 |
http://www.johndcook.com/blog/2012/09/15/the-paper-is-too-big/ | 1,685,351,100,000,000,000 | text/html | crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00771.warc.gz | 73,178,200 | 14,135 | # The paper is too big
In response to the question “Why are default LaTeX margins so big?” Paul Stanley answers
It’s not that the margins are too wide. It’s that the paper is too big!
This sounds flippant, but he gives a compelling argument that paper really is too big for how it is now used.
As is surely by now well-known, the real question is the size of the text block. That is a really important factor in legibility. As others have noted, the optimum line length is broadly somewhere between 60 characters and 75 characters.
Given reasonable sizes of font which are comfortable for reading at the distance we want to read at (roughly 9 to 12 point), there are only so many line lengths that make sense. If you take a book off your shelf, especially a book that you would actually read for a prolonged period of time, and compare it to a LaTeX document in one of the standard classes, you’ll probably notice that the line length is pretty similar.
The real problem is with paper size. As it happens, we have ended up with paper sizes that were never designed or adapted for printing with 10-12 point proportionally spaced type. They were designed for handwriting (which is usually much bigger) or for typewriters. Typewriters produced 10 or 12 characters per inch: so on (say) 8.5 inch wide paper, with 1 inch margins, you had 6.5 inches of type, giving … around 65 to 78 characters: in other words something pretty close to ideal. But if you type in a standard proportionally spaced font (worse, in Times—which is rather condensed because it was designed to be used in narrow columns) at 12 point, you will get about 90 to 100 characters in the line.
He then gives six suggestions for what to do about this. You can see his answer for a full explanation. Here I’ll just summarize his points.
1. Use smaller paper.
2. Use long lines of text but extra space between lines.
3. Use wide margins.
4. Use margins for notes and illustrations.
5. Use a two column format.
6. Use large type.
Given these options, wide margins (as in #3 and #4) sound reasonable.
## 12 thoughts on “The paper is too big”
1. If printers had used wider margins in the seventeenth century, we might know what proof Fermat had in mind for his last theorem.
2. Kirk Lowery
Edward Tufte has a number of design principles that use extra-wide margins very effectively.
3. Back in the 1980s I heard leslie Lamport himself say that the default LaTeX layout was chosen with the help of a professional book designer, and that the motivation for the extra-wide margins was indeed the one pointed out by Paul Stanley: because long lines with small font make the text hard to read.
4. A lot of conference proceedings use a double column format to keep line length down while maximizing the amount of text per page. However I think the default font size is about 9 points which I think is a bit too small. 10 point, 2 columns seems pretty good to me.
5. Oscar Cassetti
I wish all printed document followed the rules you mentioned. I find myself reading notes or lectures written in Word or similar and looking focus pretty soon. In these cases if I often collapse two virtual pages into a landscape page and in this way I found much easier to read. I like arxXiv.org style suggestion – http://arxiv.org/help/submit_tex
6. Shrutarshi Basu: The default font size in LaTeX is 10 point. I guess the conference proceedings could screw around with that though.
It also depends on the font you use. Times (New Roman) looks pretty good at 8 point, while I find Bookman Old Style, which I normally prefer, looks horrible and hard to read at that size.
7. Dave Tate
I’m curious, John — why do you recommend wide margins over larger type? I can’t think of any disadvantages to larger fonts, and (especially as my eyes get ready to exceed their warranty) at least one clear advantage.
(If you want optimal aesthetics and legibility, be sure to increase the line spacing more than proportionately. The most beautiful Western calligraphic hand is the half-uncial of Tours — but only if you write it with the original triple-spacing between lines.)
8. Dave: I believe I’ve read that people read most efficiently when text is set in 9-12 pt type. If so, enlarging the text to fill the page would decrease reading speed.
9. The line:
“This sounds flippant, but he gives a compelling argument that paper really is too big for how”
has 101 characters.
10. @John: I wonder if that varies depending on language? For example, French has a lot of accents which are small, so I wonder if there is a different ideal size for it? What about Russian or other Cyrillic languages?
11. As a teacher, wiling to use as less XeroX as possible I now generally use article, a4, landscape, twocolumn. That’s 10pt, with small margins. | 1,088 | 4,789 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2023-23 | latest | en | 0.967761 |
https://web2.0calc.com/questions/in-the-staircase-shaped-region-below-all-angles-that | 1,550,593,120,000,000,000 | text/html | crawl-data/CC-MAIN-2019-09/segments/1550247490225.49/warc/CC-MAIN-20190219142524-20190219164524-00094.warc.gz | 719,845,031 | 6,228 | +0
In the staircase-shaped region below, all angles that look like right angles are right angles, and each of the eight congruent sides marked
+2
520
1
+113
In the staircase-shaped region below, all angles that look like right angles are right angles, and each of the eight congruent sides marked with a tick mark have length 1 foot. If the region has area 53 square feet, what is the number of feet in the perimeter of the region?
Dec 5, 2017
#1
+7348
+1
Let's call the second longest side x .
Now we can see that the figure is a rectangle with 10 square feet removed from it. So...
area = (9 ft)(x ft) - 10 ft2
area = 9x ft2 - 10 ft2
53 ft2 = 9x ft2 - 10 ft2
53 = 9x - 10
63 = 9x
x = 7
And the perimeter, in feet = 9 + 7 + 9 + 7 = 32
Dec 5, 2017
#1
+7348
+1
Let's call the second longest side x .
Now we can see that the figure is a rectangle with 10 square feet removed from it. So...
area = (9 ft)(x ft) - 10 ft2
area = 9x ft2 - 10 ft2
53 ft2 = 9x ft2 - 10 ft2
53 = 9x - 10
63 = 9x
x = 7
And the perimeter, in feet = 9 + 7 + 9 + 7 = 32
hectictar Dec 5, 2017 | 432 | 1,139 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.4375 | 4 | CC-MAIN-2019-09 | longest | en | 0.698824 |
http://www.dadsworksheets.com/v1/Worksheets/Missing%20Operations/Missing-Operation-Addition-Subtraction-Teens_V1_Answer_Key.html?ModPagespeed=noscript | 1,371,682,288,000,000,000 | text/html | crawl-data/CC-MAIN-2013-20/segments/1368709379061/warc/CC-MAIN-20130516130259-00049-ip-10-60-113-184.ec2.internal.warc.gz | 395,089,935 | 4,870 | ## Answer Key to Math Practice Worksheet for Missing Add Subtract Teens and Twenties
Missing Add SubtractTeens and TwentiesVersion 1 Name:________________________
26 + 5 = 31
8 + 16 = 24
20 + 7 = 27
24 - 6 = 18
16 + 6 = 22
25 + 6 = 31
11 - 1 = 10
24 + 1 = 25
24 + 9 = 33
20 + 2 = 22
28 + 3 = 31
24 + 7 = 31
28 - 9 = 19
24 + 8 = 32
16 - 6 = 10
1 + 10 = 11
22 + 3 = 25
26 + 4 = 30
9 + 15 = 24
10 + 1 = 11
3 + 13 = 16
4 + 10 = 14
25 - 8 = 17
29 + 6 = 35
21 - 9 = 12
22 + 4 = 26
14 + 2 = 16
5 + 18 = 23
10 - 8 = 2
27 + 3 = 30
Total: 30 Goal: _____ Complete: _____ Correct: _____ | 316 | 628 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3 | 3 | CC-MAIN-2013-20 | latest | en | 0.509936 |
https://www.shaalaa.com/question-bank-solutions/find-the-number-of-sides-in-a-polygon-if-the-sum-of-its-interior-angle-is-32-right-angles-sum-of-angles-of-a-polynomial_110459 | 1,656,816,170,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00656.warc.gz | 1,016,103,195 | 9,395 | # Find the Number of Sides in a Polygon If the Sum of Its Interior Angle Is: 32 Right-angles. - Mathematics
Sum
Find the number of sides in a polygon if the sum of its interior angle is: 32 right-angles.
#### Solution
Let no. of sides = n
Sum of angles of polygon = 32 right angles = 32 x 90 = 2880°
(n – 2) x 180° = 2880
n – 2 = 2880/180
n – 2 = 16
n = 16 + 2
n = 18
Concept: Sum of Angles of a Polynomial
Is there an error in this question or solution?
#### APPEARS IN
Selina Concise Mathematics Class 8 ICSE
Chapter 16 Understanding Shapes
Exercise 16 (A) | Q 3.4 | Page 180 | 192 | 590 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.375 | 4 | CC-MAIN-2022-27 | longest | en | 0.748071 |
https://studydaddy.com/question/to-satisfy-concerns-of-potential-customers-the-management-of-ourcampus-has-under | 1,566,570,947,000,000,000 | text/html | crawl-data/CC-MAIN-2019-35/segments/1566027318421.65/warc/CC-MAIN-20190823130046-20190823152046-00204.warc.gz | 651,673,609 | 9,720 | QUESTION
# To satisfy concerns of potential customers, the management of OurCampus! has undertaken a research project to learn the amount of time it takes...
1. To satisfy concerns of potential customers, the management of OurCampus! has undertaken a research project to learn the amount of time it takes users to load a complex video features page. The research team has collected data and has made some claims based on the assertion that the data follow a normal distribution. Open, which documents the work of a quality response team at OurCampus! Read the internal report that documents the work of the team and their conclusions. Then answer the following:
a. Can the collected data be approximated by the normal distribution?
b. Review and evaluate the conclusions made by the OurCampus! research team. Which conclusions are correct? Which ones are incorrect?
c. If OurCampus! could improve the mean time by five seconds, how would the probabilities change?
2. Toss a coin 10 times and record the number of heads. If each student performs this experiment five times, a frequency distribution of the number of heads can be developed from the results of the entire class. Does this distribution seem to approximate the normal distribution?
3. The advocacy group Consumers Concerned About Cereal Cheaters (CCACC) suspects that cereal companies, including Oxford Cereals, are cheating consumers by packaging cereals at less than labeled weights. Recently, the group investigated the package weights of two popular Oxford brand cereals. Open CCACC.pdf to examine the group's claims and supporting data, and then answer the following questions:
a. Are the data collection procedures that the CCACC uses to form its conclusions flawed? What procedures could the group follow to make its analysis more rigorous?
b. Assume that the two samples of five cereal boxes (one sample for each of two cereal varieties) listed on the CCACC website were collected randomly by organization members. For each sample, do the following:
i. Calculate the sample mean.
ii. Assume that the standard deviation of the process is 15 grams and the population mean is 368 grams. Calculate the percentage of all samples for each process that have a sample mean less than the value you calculated in (i).
iii. Again, assuming that the standard deviation is 15 grams, calculate the percentage of individual boxes of cereal that have a weight less than the value you calculated in (i).
c. What, if any, conclusions can you form by using your calculations about the filling processes for the two different cereals?
d. A representative from Oxford Cereals has asked that the CCACC take down its page discussing shortages in Oxford Cereals boxes. Is that request reasonable? Why or why not?
e. Can the techniques discussed in this chapter be used to prove cheating in the manner alleged by the CCACC? Why or why not?
4. Using Random Number Table E.1 from Page 544 of the textbook, simulate the selection of different-colored balls from a bowl, as follows:
a. Start in the row corresponding to the day of the month in which you were born.
b. Select one-digit numbers.
c. If a random digit between 0 and 6 is selected, consider the ball white; if a random digit is a 7, 8, or 9, consider the ball red.
Select samples of digits. In each sample, count the number of white balls and compute the proportion of white balls in the sample. If each student in the class selects five different samples for each sample size, a frequency distribution of the proportion of white balls (for each sample size) can be developed from the results of the entire class. What conclusions can you reach about the sampling distribution of the proportion as the sample size is increased?
Suppose that step c of this problem uses the following rule: "If a random digit between 0 and 8 is selected, consider the ball to be white; if a random digit of 9 is selected, consider the ball to be red." Compare and contrast the results in this problem and those in step c.
5. The fill amount of bottles of a soft drink is normally distributed, with a mean of 2.0 liters and a standard deviation of 0.05 liter. If you select a random sample of 25 bottles, what is the probability that the sample mean will be
a. between 1.99 and 2.0 liters?
b. below 1.98 liters?
c. greater than 2.01 liters?
d. The probability is 99% that the sample mean amount of soft drink will be at least how much?
e. The probability is 99% that the sample mean amount of soft drink will be between which two values (symmetrically distributed around the mean)? | 972 | 4,592 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2019-35 | longest | en | 0.942709 |
https://physics.stackexchange.com/questions/202415/why-does-the-superposition-principle-work-in-method-of-images | 1,717,029,420,000,000,000 | text/html | crawl-data/CC-MAIN-2024-22/segments/1715971059412.27/warc/CC-MAIN-20240529230852-20240530020852-00238.warc.gz | 384,566,126 | 42,795 | # Why does the superposition principle work in method of images?
Okay, let there be a conducting sphere having radius $a$ initially charged with $Q$ & insulated. Now, $q$ is brought in front of the conductor at $y$ from the center.
Now, Jackson in his book uses the superposition principle to calculate the field & potential :
If we wish to consider the problem of an insulated conducting sphere with total charge $Q$ in the presence of a point charge $q$, we can build up the solution for the potential by linear superposition. In an operational sense, we can imagine with a grounded conducting sphere(with its charge $q'$ distributed over its surface). We then disconnect the ground wire & add to the sphere an amount of charge $(Q - q')$. ... .To find the potential, we merely note that the added charge $(Q - q')$ will distribute itself uniformly over the surface since the electrostatic forces due to point charge $q$ are already balanced by the charge $q'$.[...]
Okay, this seems to me correct since the force imparted by $q'$ on $\left(Q- q'\right)$ is cancelled by the field of $q$ since it is a zero equipotential surface of $q-q'$ system.
But why doesn't $\left(Q- q'\right)$ exert force on $q'$ on the surface?? On the other hand, superposition principle tells that you can't alter the configuration of either system when superimposing the two systems which means the surface charge density of $q'$ remains the same even when there are $\left(Q- q'\right)$ other charges on the surface?? So, why doesn't $\left(Q- q'\right)$ exert force on $q'$?? If it exerts force on $q'$, then it would change the distribution of $q'$ which can't happen as said by superposition principle. So, can anyone explain why $\left(Q- q'\right)$ wouldn't exert force on $q'$??
Okay, this seems to me correct since the force imparted by $q'$ on $\left(Q- q'\right)$ is cancelled by the field of $q$ since it is a zero equipotential surface of $q-q'$ system.
I have no idea why you think a zero potential surface has anything to so with anything. While the field of $q$ and the field of $q'$ together make a zero potential surface on the surface of sphere they most definitely exert net force on the charge $Q-q'.$
But why doesn't $\left(Q- q'\right)$ exert force on $q'$ on the surface??
It does. The charges that make up the density of charge $Q-q'$ exerts forces on each other so they exert force on the charges that make up the density of charge $q'$ in exactly the same way.
On the other hand, superposition principle tells that you can't alter the configuration of either system when superimposing the two systems which means the surface charge density of $q'$ remains the same even when there are $\left(Q- q'\right)$ other charges on the surface??
Let's be straight, when you grounded the sphere the external charge $q$ felt a force as if there were a charge $q'$ at the image charge location. And there is a net charge of $q'$ on the surface and the charges that make up the net charge of $q'$ exert forces on each other but the forces on each other add up to zero. But they still feel a force due to the charge $q$ and in fact the total force on all the charge on the surface is equal and opposite to the force the charge $q$ feels due to an image charge at the image location. And the actual force at each point in the sphere points orthogonal to the surface.
That orthogonality is the sole fact we get from the fact that the surface is an equipotential surface. And the force that is orthogonal is the total force i.e. the force due to the other charges on the surface plus the force due to the charge $q$ ... only their sum is orthogonal.
As for superposition, the force due to the uniform spread of charge $Q-q'$ over the surface is a force that points radially outwards as well.
The densities add, the fields add and the forces add. The only reason things don't move is that forces orthogonal the surface don't allow charges to move.
So, why doesn't $\left(Q- q'\right)$ exert force on $q'$??
It does.
If it exerts force on $q'$, then it would change the distribution of $q'$
No it doesn't, because the net force is orthogonal to the surface. And charges are free to move inside a conductor but they are not free to move outside a conductor. So a force orthogonal to the surface simply doesn't move the charges because the conductor itself is capable of keeping charges from leaving the conductor. It is called a work function.
which can't happen as said by superposition principle.
And now I think you misunderstand the superposition principle. It just says that if you add fields and add charges that it is also a solutions. And it does mean that if you add static solutions, you get static solution. If 100% everything were electrostatics then yes you could add like that. But consider the simple case of adding charge $q''$ uniformly spread to the outside of a conducting sphere, no grounds, no external charge. If you kept adding solutions like that over and over again eventually the work function might not be enough to keep such a huge charge density on the surface. Or maybe the field density outside the sphere gets large enough to break down the dielectric that is the air itself outside the sphere.
Just because you add charge and current and fields together doesn't mean that things don't move just because they didn't move on their own. Because the sphere itself or the air itself has to deal with all those things added together and it could have limits to what it can handle.
Superposition is just having a linear equation, add the sources and add the individual solutions and you get a solution. Anything non-linear such as a work function or a dielectric breakdown means the effects might not be static even if the individual things were static. Superposition doesn't mean more than it means.
Why did then Jackson write that the force on $Q - q'$ from $q$ is balanced by $q'$?
The forces due to $q$ and $q'$ are already balanced in the sense that the total field due to both is orthogonal to the surface so the new charge also arranges itself so that its field is also orthogonal to the surface as well which means the new charge is arranged uniformly.
• Firstly, thanks for the answer..... Just read the first para; why wouldn't zero potential mean no net force?? Let two equal but opposite charges be taken; suppose a test charge is just in front of the negative charge & hence it must be in negative potential. As it goes towards the positive charge, attractive force from '-' charge decreases & repulsive force from $+$ charge increases; eventually at the mid-point of the configuration, both attractive & repulsive force would become equal & opposite to each other. The potential there would be zero. Why? Because the potential is changing from ...
– user36790
Aug 25, 2015 at 15:15
• being negative at the negative charge region to being positive at the positive charge site. So, the region where the force from the opposite charges cancel each other is the zero equipotential region. Hence zero equi-potential region of any charge-configuration is the region where there is no net force. Am I wrong, sir? Isn't my argument in the example above, right??
– user36790
Aug 25, 2015 at 15:17
• @user36790 Don't generalize from a single example. If you have equal and opposite charges at a fixed distance away from each other then the plane that bisects the line segment between them is the zero of potential and you identified the one point on that entire surface where the force is zero (and actually it's not zero there either you made an error). At every single other point the force isn't zero. And potential is related to energy I don't know why you would think energy is related to force in any way except that the gradient of the potential (energy) is proportional to the field (force). Aug 25, 2015 at 15:23
• I want to know the error, please. And energy is not related to force?? As you wrote later, it is the negative of the gradient of the potantial.
– user36790
Aug 25, 2015 at 15:28
• @user36790 Yes. Though I thought we covered that earlier in a different question. Like physics.stackexchange.com/q/201718 Aug 26, 2015 at 16:13
The problems solved by the method of images are, at their heart, boundary value problem involving the divergence of a scalar field, and those problem have uniqueness theorems that says there is only one configuration of fields in the volume that generates a particular set of boundary conditions.
So if you find any method of generating a field that meets the boundary value conditions, you can use the fields from that method as the field in the volume.
The method of images say "imagine that the actual physical situation was replaced with this charge distribution" that has the desired boundary values. And then uses the imaginary charge distributions to compute the field, and every computation of field from a charge distribution relies on the the superposition of field contributions.
• The image problem doesn't have a discontinuous electric field at the conductor surface so I don't think it directly tells you the force the surface charges feel from each other. It gives you the field outside the conductor but the field inside matters too. You could try to argue that the average of the two is related to the force but this doesn't follow from a simple uniqueness argument for a boundary value problem. Aug 26, 2015 at 5:07
• Hmmm ... it seems I misunderstood the question. Aug 26, 2015 at 5:33
• You answered the title well, the body of the question basically asks how superposition can be consistent with the forces involved. Aug 26, 2015 at 12:07 | 2,155 | 9,652 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.390625 | 3 | CC-MAIN-2024-22 | latest | en | 0.944592 |
https://cstheory.stackexchange.com/questions/54378/simple-coq-simplification-question | 1,718,817,696,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861828.24/warc/CC-MAIN-20240619154358-20240619184358-00387.warc.gz | 166,876,832 | 38,644 | # Simple Coq simplification question
I am working through Software Foundations, I had a question about the leb_refl theorem from the induction chapter. Here is my solution:
Theorem leb_refl : forall n:nat,
(n <=? n) = true.
Proof.
intros n. induction n as [| n' IHn'].
+ simpl. reflexivity.
(* Question: how does it know how to simplify? here *)
+ simpl. rewrite -> IHn'. reflexivity.
In the inductive step, the simpl command appears to rewrite the goal (S n' <=? S n') = true to (n' <=? n') = true using simplification. Intuitively, this is obvious (simply subtract one from both sides), but I was wondering what is happening in internally to make that jump? In general, how can I demystify what is happening when I use simpl?
What simpl does is compute the result of expressions that can be computed with the available information. For example, Eval simpl in (1 + 1). returns 2, but Eval simpl in (n + m). doesn't change anything because there is not enough information about n and m.
In order to answer your question about _ <=? _, check its definition. One way to find it is [*]:
Locate "_ <=? _".
(* Notation "x <=? y" := (Nat.leb x y) : nat_scope (default interpretation) *)
Print Nat.leb.
(*
Nat.leb =
fix leb (n m : nat) {struct n} : bool :=
match n with
| 0 => true
| S n' => match m with
| 0 => false
| S m' => leb n' m'
end
end
: nat -> nat -> bool
*)
Here we see that Nat.leb is defined recursively on the first argument and then on the second in such a way that the computation of S n <=? S m reduces to the computation of n <=? m. Not knowing anything else about n and m, simpl stops here.
[*] With Require Import PeanoNat. beforehand. | 440 | 1,659 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.65625 | 4 | CC-MAIN-2024-26 | latest | en | 0.886574 |
https://it.mathworks.com/matlabcentral/cody/problems/10-determine-whether-a-vector-is-monotonically-increasing/solutions/215470 | 1,558,392,510,000,000,000 | text/html | crawl-data/CC-MAIN-2019-22/segments/1558232256163.40/warc/CC-MAIN-20190520222102-20190521004102-00197.warc.gz | 534,140,912 | 15,489 | Cody
# Problem 10. Determine whether a vector is monotonically increasing
Solution 215470
Submitted on 11 Mar 2013 by Steven
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
%% x = [0 1 2 3 4]; assert(isequal(mono_increase(x),true));
ans = 1
2 Pass
%% x = [0]; assert(isequal(mono_increase(x),true));
ans = 1
3 Pass
%% x = [0 0 0 0 0]; assert(isequal(mono_increase(x),false));
ans = 0
4 Pass
%% x = [0 1 2 3 -4]; assert(isequal(mono_increase(x),false));
ans = 0
5 Pass
%% x = [-3 -4 2 3 4]; assert(isequal(mono_increase(x),false));
ans = 0
6 Pass
%% x = 1:.1:10; assert(isequal(mono_increase(x),true));
ans = 1
7 Pass
%% x = cumsum(rand(1,100)); x(5) = -1; assert(isequal(mono_increase(x),false));
ans = 0
8 Pass
%% x = cumsum(rand(1,50)); assert(isequal(mono_increase(x),true));
ans = 1 | 330 | 938 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.1875 | 3 | CC-MAIN-2019-22 | longest | en | 0.481566 |
http://nrich.maths.org/public/leg.php?code=-1&cl=4&cldcmpid=2686 | 1,438,562,476,000,000,000 | text/html | crawl-data/CC-MAIN-2015-32/segments/1438042989331.34/warc/CC-MAIN-20150728002309-00269-ip-10-236-191-2.ec2.internal.warc.gz | 169,542,448 | 6,972 | # Search by Topic
#### Resources tagged with Patterned numbers similar to Golden Eggs:
Filter by: Content type:
Stage:
Challenge level:
### There are 20 results
Broad Topics > Numbers and the Number System > Patterned numbers
### Whole Number Dynamics I
##### Stage: 4 and 5
The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
### Whole Number Dynamics III
##### Stage: 4 and 5
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
### Whole Number Dynamics IV
##### Stage: 4 and 5
Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens?
### Whole Number Dynamics V
##### Stage: 4 and 5
The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values.
### Magic Squares II
##### Stage: 4 and 5
An article which gives an account of some properties of magic squares.
### Whole Number Dynamics II
##### Stage: 4 and 5
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
### Try to Win
##### Stage: 5
Solve this famous unsolved problem and win a prize. Take a positive integer N. If even, divide by 2; if odd, multiply by 3 and add 1. Iterate. Prove that the sequence always goes to 4,2,1,4,2,1...
### On the Importance of Pedantry
##### Stage: 3, 4 and 5
A introduction to how patterns can be deceiving, and what is and is not a proof.
### Magic Squares
##### Stage: 4 and 5
An account of some magic squares and their properties and and how to construct them for yourself.
### How Old Am I?
##### Stage: 4 Challenge Level:
In 15 years' time my age will be the square of my age 15 years ago. Can you work out my age, and when I had other special birthdays?
### Rolling Coins
##### Stage: 4 Challenge Level:
A blue coin rolls round two yellow coins which touch. The coins are the same size. How many revolutions does the blue coin make when it rolls all the way round the yellow coins? Investigate for a. . . .
### Tower of Hanoi
##### Stage: 4 Challenge Level:
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
### Back to Basics
##### Stage: 4 Challenge Level:
Find b where 3723(base 10) = 123(base b).
### Sixty-seven Squared
##### Stage: 5 Challenge Level:
Evaluate these powers of 67. What do you notice? Can you convince someone what the answer would be to (a million sixes followed by a 7) squared?
### Odd Differences
##### Stage: 4 Challenge Level:
The diagram illustrates the formula: 1 + 3 + 5 + ... + (2n - 1) = n² Use the diagram to show that any odd number is the difference of two squares.
##### Stage: 4 Challenge Level:
A walk is made up of diagonal steps from left to right, starting at the origin and ending on the x-axis. How many paths are there for 4 steps, for 6 steps, for 8 steps?
### Counting Binary Ops
##### Stage: 4 Challenge Level:
How many ways can the terms in an ordered list be combined by repeating a single binary operation. Show that for 4 terms there are 5 cases and find the number of cases for 5 terms and 6 terms.
### Generating Number Patterns: an Email Conversation
##### Stage: 2, 3 and 4
This article for teachers describes the exchanges on an email talk list about ideas for an investigation which has the sum of the squares as its solution.
### One Basket or Group Photo
##### Stage: 2, 3, 4 and 5 Challenge Level:
Libby Jared helped to set up NRICH and this is one of her favourite problems. It's a problem suitable for a wide age range and best tackled practically.
### Magical Maze - 35 Activities
##### Stage: 4 and 5
Investigations and activities for you to enjoy on pattern in nature. | 991 | 4,156 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.3125 | 4 | CC-MAIN-2015-32 | longest | en | 0.865551 |
https://mathoverflow.net/questions/422749/bounds-on-largest-possible-square-in-sum-of-two-squares | 1,680,020,259,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00751.warc.gz | 454,423,183 | 25,463 | # Bounds on largest possible square in sum of two squares
Suppose we are given integers $$k,c$$ such that $$k=1+c^2$$.
Let $$n$$ be an odd integer and suppose that $$k^n=a_i^2+b_i^2$$ for distinct positive integers $$a_i and $$i\le d$$. That is, there are $$d$$ different ways to express $$k^n$$ as a sum of two squares.
For instance, $$(a_1,b_1)=(k^{(n-1)/2},ck^{(n-1)/2})$$ is a valid pair.
What can be said about $$\max b_i$$? Are there good bounds (as a function of $$c,n$$) on the magnitude of largest possible square when writing an integer as a sum of two squares?
Addendum: As mentioned below, we can rephrase this problem in terms of the irrationality measure of $$\arctan(1/c)/\pi$$. I'm having a lot of trouble finding results on irrationality measures of values of inverse trigonometric functions in general, but I could be missing something.
• if $\theta=\arctan 1/c$, we need an odd multiple of $\theta$ which is close to 0 modulo $2\pi$, this may be tricky May 17, 2022 at 15:13
Rather than discuss $$\max b_{i}$$, I'll discuss the equivalent question of bounding $$\min a_{i}$$. The ABC conjecture implies that for all $$\epsilon > 0$$, $$\min a_{i} \gg (c^{2}+1)^{n/2 - 1 - \epsilon}$$. This is because if we have a solution to $$a^{2} + b^{2} = (c^{2}+1)^{n}$$ with $$a \ll (c^{2} + 1)^{n/2 - 1 - \epsilon}$$, set $$A = a^{2}$$, $$B = b^{2}$$ and $$C = (c^{2}+1)^{n}$$. Assume for simplicity that $$\gcd(a,b) = 1$$. (It doesn't change much if $$\gcd(a,b) > 1$$.) Then $$C \ll {\rm rad}(ABC)^{1+\delta}$$ for all $$\delta > 0$$. This gives $$(c^{2}+1)^{n} < (ab(c^{2}+1))^{1+\delta} \ll ((c^{2}+1)^{n/2 - 1 - \epsilon} (c^{2}+1)^{n/2} (c^{2}+1))^{1+\delta} = ((c^{2}+1)^{n-\epsilon})^{1+\delta}$$ which is a contradiction if $$\delta < \frac{\epsilon}{n-\epsilon}$$.
For $$n = 3$$, it is possible to construct a sequence of integers $$c$$ which getse close to this bound. In particular, let $$c_{k} = \frac{(2 + \sqrt{3})^{k} - (2 - \sqrt{3})^{k}}{\sqrt{3}} \quad a_{k} = \frac{(2 + \sqrt{3})^{k} + (2 - \sqrt{3})^{k}}{2}.$$ It is easy to see that $$a_{k}, c_{k} \in \mathbb{Z}$$, $$c_{k}$$ is even, and a somewhat tedious calculation shows that $$(c_{k}^{2} + 1)^{3} = a_{k}^{2} + \left(c_{k}^{3} + \frac{3}{2} c_{k}\right)^{2}.$$ In particular $$\min a_{i} \leq a_{k} \approx \frac{\sqrt{3}}{2} c_{k}$$.
One could hope to generalize this construction by finding $$c_{k}^{2} + 1 = d_{k}^{2} + e_{k}^{2}$$, and $$\frac{e_{k}}{c_{k}} \approx \sin\left(\frac{\pi}{2n}\right)$$. Letting $$\frac{e_{k}}{\sqrt{d_{k}^{2} + e_{k}^{2}}} = \sin(\theta_{k})$$, this makes $$(c_{k} + i)^{n} = (d_{k} + ie_{k})^{n} \approx (c_{k}^{2} + 1)^{n/2} \left(\cos(\theta_{k}) + i \sin(\theta_{k})\right)^{n} = (c^{2} + 1)^{n/2} \left(\cos(n \theta_{k}) + i \sin(n \theta_{k}\right))$$ and $$\sin(n \theta_{k}) \approx \sin(\pi/2) = 1$$. This boils down to finding points on the hyperboloid $$x^{2}+y^{2} = z^{2} + 1$$ that lie close to the line $$y = \sin\left(\frac{\pi}{2n}\right) z$$. | 1,159 | 2,991 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.984375 | 4 | CC-MAIN-2023-14 | longest | en | 0.780243 |
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/400/6/c/h/ | 1,600,463,785,000,000,000 | text/html | crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00572.warc.gz | 910,347,839 | 47,098 | # Properties
Label 400.6.c.h Level 400 Weight 6 Character orbit 400.c Analytic conductor 64.154 Analytic rank 0 Dimension 2 CM no Inner twists 2
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$400 = 2^{4} \cdot 5^{2}$$ Weight: $$k$$ $$=$$ $$6$$ Character orbit: $$[\chi]$$ $$=$$ 400.c (of order $$2$$, degree $$1$$, not minimal)
## Newform invariants
Self dual: no Analytic conductor: $$64.1535279252$$ Analytic rank: $$0$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{-1})$$ Defining polynomial: $$x^{2} + 1$$ Coefficient ring: $$\Z[a_1, \ldots, a_{13}]$$ Coefficient ring index: $$2$$ Twist minimal: no (minimal twist has level 40) Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$i = \sqrt{-1}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + 8 i q^{3} -108 i q^{7} + 179 q^{9} +O(q^{10})$$ $$q + 8 i q^{3} -108 i q^{7} + 179 q^{9} + 604 q^{11} -306 i q^{13} -930 i q^{17} -1324 q^{19} + 864 q^{21} + 852 i q^{23} + 3376 i q^{27} -5902 q^{29} + 3320 q^{31} + 4832 i q^{33} -10774 i q^{37} + 2448 q^{39} -17958 q^{41} -9264 i q^{43} -9796 i q^{47} + 5143 q^{49} + 7440 q^{51} -31434 i q^{53} -10592 i q^{57} + 33228 q^{59} -40210 q^{61} -19332 i q^{63} + 58864 i q^{67} -6816 q^{69} + 55312 q^{71} + 27258 i q^{73} -65232 i q^{77} + 31456 q^{79} + 16489 q^{81} -24552 i q^{83} -47216 i q^{87} + 90854 q^{89} -33048 q^{91} + 26560 i q^{93} -154706 i q^{97} + 108116 q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q + 358q^{9} + O(q^{10})$$ $$2q + 358q^{9} + 1208q^{11} - 2648q^{19} + 1728q^{21} - 11804q^{29} + 6640q^{31} + 4896q^{39} - 35916q^{41} + 10286q^{49} + 14880q^{51} + 66456q^{59} - 80420q^{61} - 13632q^{69} + 110624q^{71} + 62912q^{79} + 32978q^{81} + 181708q^{89} - 66096q^{91} + 216232q^{99} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/400\mathbb{Z}\right)^\times$$.
$$n$$ $$101$$ $$177$$ $$351$$ $$\chi(n)$$ $$1$$ $$-1$$ $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
49.1
− 1.00000i 1.00000i
0 8.00000i 0 0 0 108.000i 0 179.000 0
49.2 0 8.00000i 0 0 0 108.000i 0 179.000 0
$$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
5.b even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 400.6.c.h 2
4.b odd 2 1 200.6.c.c 2
5.b even 2 1 inner 400.6.c.h 2
5.c odd 4 1 80.6.a.f 1
5.c odd 4 1 400.6.a.f 1
15.e even 4 1 720.6.a.h 1
20.d odd 2 1 200.6.c.c 2
20.e even 4 1 40.6.a.b 1
20.e even 4 1 200.6.a.c 1
40.i odd 4 1 320.6.a.e 1
40.k even 4 1 320.6.a.l 1
60.l odd 4 1 360.6.a.b 1
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
40.6.a.b 1 20.e even 4 1
80.6.a.f 1 5.c odd 4 1
200.6.a.c 1 20.e even 4 1
200.6.c.c 2 4.b odd 2 1
200.6.c.c 2 20.d odd 2 1
320.6.a.e 1 40.i odd 4 1
320.6.a.l 1 40.k even 4 1
360.6.a.b 1 60.l odd 4 1
400.6.a.f 1 5.c odd 4 1
400.6.c.h 2 1.a even 1 1 trivial
400.6.c.h 2 5.b even 2 1 inner
720.6.a.h 1 15.e even 4 1
## Hecke kernels
This newform subspace can be constructed as the kernel of the linear operator $$T_{3}^{2} + 64$$ acting on $$S_{6}^{\mathrm{new}}(400, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ 1
$3$ $$1 - 422 T^{2} + 59049 T^{4}$$
$5$ 1
$7$ $$1 - 21950 T^{2} + 282475249 T^{4}$$
$11$ $$( 1 - 604 T + 161051 T^{2} )^{2}$$
$13$ $$1 - 648950 T^{2} + 137858491849 T^{4}$$
$17$ $$1 - 1974814 T^{2} + 2015993900449 T^{4}$$
$19$ $$( 1 + 1324 T + 2476099 T^{2} )^{2}$$
$23$ $$1 - 12146782 T^{2} + 41426511213649 T^{4}$$
$29$ $$( 1 + 5902 T + 20511149 T^{2} )^{2}$$
$31$ $$( 1 - 3320 T + 28629151 T^{2} )^{2}$$
$37$ $$1 - 22608838 T^{2} + 4808584372417849 T^{4}$$
$41$ $$( 1 + 17958 T + 115856201 T^{2} )^{2}$$
$43$ $$1 - 208195190 T^{2} + 21611482313284249 T^{4}$$
$47$ $$1 - 362728398 T^{2} + 52599132235830049 T^{4}$$
$53$ $$1 + 151705370 T^{2} + 174887470365513049 T^{4}$$
$59$ $$( 1 - 33228 T + 714924299 T^{2} )^{2}$$
$61$ $$( 1 + 40210 T + 844596301 T^{2} )^{2}$$
$67$ $$1 + 764720282 T^{2} + 1822837804551761449 T^{4}$$
$71$ $$( 1 - 55312 T + 1804229351 T^{2} )^{2}$$
$73$ $$1 - 3403144622 T^{2} + 4297625829703557649 T^{4}$$
$79$ $$( 1 - 31456 T + 3077056399 T^{2} )^{2}$$
$83$ $$1 - 7275280582 T^{2} + 15516041187205853449 T^{4}$$
$89$ $$( 1 - 90854 T + 5584059449 T^{2} )^{2}$$
$97$ $$1 + 6759265922 T^{2} + 73742412689492826049 T^{4}$$ | 2,281 | 4,825 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.6875 | 3 | CC-MAIN-2020-40 | latest | en | 0.29697 |
https://stats.stackexchange.com/questions/86830/transformation-to-normality-of-the-dependent-variable-in-multiple-regression | 1,713,482,877,000,000,000 | text/html | crawl-data/CC-MAIN-2024-18/segments/1712296817249.26/warc/CC-MAIN-20240418222029-20240419012029-00029.warc.gz | 478,882,349 | 43,835 | # transformation to normality of the dependent variable in multiple regression
Is it really important to normalize dependent variables in multiple regression or are there any exceptions?
My model is providing better results with more significant hypothesis when the DVs are not normalized (transformed). Will be grateful for the comments.
• Be careful of the word 'normalize'. It generally doesn't mean 'transform to approximate normality' which is what I think you mean here. Why would you attempt to do this to a dependent variable in regression? Feb 17, 2014 at 6:59
• @Glen_b. Well i read in most texts that we normalize a variable through transformation (that can be done through different ways). Further, as i read, it is basic assumption of regression to normalize DVs. Feb 17, 2014 at 7:03
• "to normalize DVs" is an action, not an assumption. What regression assumption do you think this action corresponds to? Feb 17, 2014 at 7:12
• Something is assumed to be normal in regression, but it's not the DV itself (and it's certainly not the IV's). Feb 17, 2014 at 7:17
• I believe few texts, if any, would use a term like "normalize" in this context. Many might refer to the utility of standardizing the DVs for (a) numerical stability of calculations, (b) interpreting standardized coefficients, and (c) streamlining certain conceptual explanations. But because standardization is tantamount to a mere change in units of measurement, it will not affect the results. If your results are indeed changing, then you are not "normalizing" in any commonly understood sense of the word: please tell us more precisely what it is you actually are doing to your data.
– whuber
Feb 17, 2014 at 19:00
I'd planned to link to an answer with a good list (with discussion) of the regression assumptions an answer with the multiple regression assumptions, but I can't find a completely suitable one for what I had in mind. There are plenty of discussions of the issues (especially in comments), but not quite everything I think is needed in one place.
The regression model looks like this:
Most of the regression assumptions relate to the error component(s) of the model.
So, time for the multiple regression assumptions. [Formal hypothesis testing of the assumptions is generally not recommended - it mostly answers the wrong question, for starters. Diagnostic displays (residual plots for example) are commonly used.]
This is a typical way to organize the list, but depending on how you frame things, people may add more or put them together a bit differently. Approximately in order of importance:
0. To fit a regression doesn't require these assumptions, except perhaps (arguably) the first. The assumptions potentially matter when doing hypothesis tests and producing confidence intervals and - most importantly - prediction intervals (for which several of them matter a fair bit).
1. The model for the mean is correct ("Linearity"). The model is assumed to be linear in the (supplied) predictors and linear in the parameters*. (NB A quadratic model, or even a sinusoidal model, for example, can still be linear in the predictors, if you supply the right ones.)
*(and in most situations, that all the important terms are included)
This might be checked by examining residuals against fitted values, or against any independent variables that might have non-linear relationships; added-variable plots could be used to see whether any variables not in the model are important.
2. The $x$'s are observed without error
This generally isn't something you can assess by looking at the data set itself; it will usually proceed from knowing something about the variables and how they're collected. A person's height might be treated as fixed (even though the measurement of it is subject to both variation over time and measurement error) - the variation is very small, but for example a person's blood pressure is typically much more variable - if you measured a second time a little later, it might be quite different.
3. Constant error variance ("homoskedasticity").
This would normally be assessed either: (i) by looking at residuals against fitted (to check for variance related to the mean), or against variates that the error variance is particularly expected to be related to; or (ii) looking at some function of squared residuals (as the best available measure of observation variance) against the same things.
For example, one of the default diagnostic displays for R's linear regression is a plot of $\sqrt{|r_i|}$ vs fitted values, where $r_i$ is the standardized residual, which would be the fourth root of the squared standardized residuals. This transformation is mostly used to make the distribution less skew, facilitating comparisons without being dominated by the largest values but it also serves a purpose in not making relatively moderate changes in spread look very dramatic as they might with sat squared residuals.
4. Independence. The errors are assumed to be independent of each other (and of the $x$'s).
There are many ways that errors can exhibit dependence; you generally need some prior expectation of the form of dependence to assess it. If the data are observed over time (or along some spatial dimension), serial dependence would be an obvious thing to check for (perhaps via a sample autocorrelation function plot).
5. The errors are assumed to be normal (with zero mean).
The assumption about zero mean overall is uncheckable, since any non-zero mean is absorbed into the intercept (constant) term. Locally nonzero-mean would show up in the plot of residuals vs fitted plot as a lack of fit. The assumption of normality might be assessed (for example) via a Q-Q plot.
In larger samples, the last assumption becomes much less important, except for prediction intervals (where it always matters for the usual normal-theory inference).
Note that the collection of dependent variables ($Y$'s) is not assumed to be normal. At any given combination of $x$-values (IVs) they are normal, but the whole sample of $Y$'s will then be a mixture of normals with different means ... and - depending on the particular collection of combination of independent variable values, that might be very non-normal.
Which is to say, there's no point looking at the distribution of the IV to assess the normality assumption, because that's not what is assumed normal. The error term is assumed normal for the most usual forms of inference, which you estimate by the residuals.
Note that it's not required to assume normality even to perform inference; there are numerous alternatives that allow inference either via hypothesis tests (e.g. a permutation test) or confidence intervals (e.g. bootstrap intervals or intervals based on nonparametric correlation between residuals and predictor) and the relationship between the two forms of inference; there's also different parametric assumptions that can be accommodated with linear regression (e.g. fitting a Poisson or gamma GLM with identity link.
Examples of non-normal theory fits:
(a) One is illustrated here -- the red line in the plot there is the linear regression fitted using a Gamma GLM (a parametric assumption); tests of coefficients are easy to obtain from GLM output; this approach also generalizes to "multiple regression" easily.
(b) This answer shows estimated lines based on nonparametric correlations; tests and intervals can be generated for those.
A big problem with transforming to achieve normality
Let's say all the other regression assumptions are reasonable, apart from the normality assumption.
Then you apply some nonlinear transformation in the hopes of making the residuals look more normal.
Suddenly, your previously linear relationships are no longer linear. | 1,584 | 7,768 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.796875 | 3 | CC-MAIN-2024-18 | latest | en | 0.946505 |
http://gmatclub.com/forum/if-the-number-58-is-1-std-dev-below-the-mean-the-28508.html?fl=similar | 1,485,016,886,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00462-ip-10-171-10-70.ec2.internal.warc.gz | 120,485,663 | 40,203 | If the number 58 is 1 std dev below the mean & the : Quant Question Archive [LOCKED]
Check GMAT Club Decision Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack
It is currently 21 Jan 2017, 08:41
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If the number 58 is 1 std dev below the mean & the
Author Message
Intern
Joined: 26 Mar 2006
Posts: 27
Followers: 0
Kudos [?]: 0 [0], given: 0
If the number 58 is 1 std dev below the mean & the [#permalink]
### Show Tags
17 Apr 2006, 22:39
This topic is locked. If you want to discuss this question please re-post it in the respective forum.
If the number 58 is 1 std dev below the mean & the number 98 is 2 std dev above the mean, then what is the mean value?
a. 70
b 72
c 74
d 78
VP
Joined: 29 Apr 2003
Posts: 1403
Followers: 2
Kudos [?]: 28 [0], given: 0
### Show Tags
17 Apr 2006, 23:53
m - 2d = 58 [m= mean, d=standard dev]
m + 3d = 98
Solving for m, we get 74
Hence C
Its qte easy if the theory behind SD is known!
Director
Joined: 09 Oct 2005
Posts: 720
Followers: 3
Kudos [?]: 23 [0], given: 0
### Show Tags
18 Apr 2006, 00:19
discussed recently ))
_________________
IE IMBA 2010
VP
Joined: 29 Apr 2003
Posts: 1403
Followers: 2
Kudos [?]: 28 [0], given: 0
### Show Tags
18 Apr 2006, 00:50
Yep. I posted there also!! :p
Manager
Joined: 28 Dec 2005
Posts: 117
Followers: 1
Kudos [?]: 2 [0], given: 0
### Show Tags
18 Apr 2006, 01:39
how were you able to set up the equations -2dand +3d
VP
Joined: 29 Apr 2003
Posts: 1403
Followers: 2
Kudos [?]: 28 [0], given: 0
### Show Tags
18 Apr 2006, 01:47
This is the original problem that I answered!
http://www.gmatclub.com/phpbb/viewtopic ... highlight=
U are rite... it shud be
m -1d =58
m+2d = 98
but then u do not get the value of m as one of the given nos. So I think the above link has the correct version of the question!
18 Apr 2006, 01:47
Display posts from previous: Sort by | 792 | 2,454 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.6875 | 4 | CC-MAIN-2017-04 | latest | en | 0.833271 |
https://www.physicsforums.com/threads/sound-and-doppler-effect-on-window-pressure.376406/ | 1,611,491,252,000,000,000 | text/html | crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00166.warc.gz | 926,675,382 | 13,968 | Sound and doppler effect on window pressure
Homework Statement
*See attached diagram*
On straight, level, parallel tracks separated by a distance d,
two trains are testing their horns (in still air of density ρair).
The horns (located at the train fronts) emit equal frequencies.
Horn 1 is a pipe, open at one end, emitting a total power P
and resonating at its 9th harmonic. Horn 2 is a loudspeaker
of circular diameter equal to the length of horn 1.
In one test, the trains (1 and 2) and three researchers (A, B, C) are all stationary and are positioned as shown.
Sound from horn 1 takes time t to reach C, midway between the tracks.
A hears only horn 1 (loudness = β1). B (right next to A) hears both. But when both horns are sounding from
the positions shown, C hears both horns at maximum combined loudness. And if train 2 were repositioned
farther and farther forward along its track until it was exactly side-by-side with train 1, C would also hear
maximum combined loudness at 18 other positions of train 2 (including the fi nal position when the trains were
exactly side-by-side).
In a second test, C stands alone, still midway between the tracks. The trains (from much farther away) move
toward her at constant speeds (v2 > v1, but only v1 is known), both sounding their horns. When the two trains
are side-by-side, C notes a beat frequency of f beat .
Find the net air force (magnitude & direction) on a window pane (area = A2) on the right side of train 2.
Train 2ʼs windows were closed just before it started moving.
The list of known values: d, ρ(air) (density of air) , P (power), t, β1 , v1 , f beat , A2
Homework Equations
sin(theta)=1.22(wavelength)/Diameter
F=AxP
fc=f2[1/(1-v2/v)]
The Attempt at a Solution
By working backward I know I have to find the velocity of train 2 in order to find the F exerted by the wind on the window. In order to find the velocity I need to use the doppler shift equation, however in order to find the frequency of train 2 I need the wavelength of its train horn.
Any help would be appreciated. Thanks.
Attachments
• 8.9 KB Views: 332 | 543 | 2,095 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.578125 | 4 | CC-MAIN-2021-04 | latest | en | 0.959324 |
https://www.arxiv-vanity.com/papers/1408.6980/ | 1,601,450,517,000,000,000 | text/html | crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00616.warc.gz | 704,205,367 | 63,897 | arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
Augmentation Schemes for Particle MCMC
Paul Fearnhead, Loukia Meligkotsidou.
1. Department of Mathematics and Statistics, Lancaster University.
2. Department of Mathematics, University of Athens.
* Correspondence should be addressed to Paul Fearnhead. (e-mail: ).
Abstract
Particle MCMC involves using a particle filter within an MCMC algorithm. For inference of a model which involves an unobserved stochastic process, the standard implementation uses the particle filter to propose new values for the stochastic process, and MCMC moves to propose new values for the parameters. We show how particle MCMC can be generalised beyond this. Our key idea is to introduce new latent variables. We then use the MCMC moves to update the latent variables, and the particle filter to propose new values for the parameters and stochastic process given the latent variables. A generic way of defining these latent variables is to model them as pseudo-observations of the parameters or of the stochastic process. By choosing the amount of information these latent variables have about the parameters and the stochastic process we can often improve the mixing of the particle MCMC algorithm by trading off the Monte Carlo error of the particle filter and the mixing of the MCMC moves. We show that using pseudo-observations within particle MCMC can improve its efficiency in certain scenarios: dealing with initialisation problems of the particle filter; speeding up the mixing of particle Gibbs when there is strong dependence between the parameters and the stochastic process; and enabling further MCMC steps to be used within the particle filter.
Keywords: Dirichlet process mixture models, Particle Gibbs, Sequential Monte Carlo, State-space models, Stochastic volatility.
1 Introduction
Particle MCMC Andrieu et al. (2010) is a recent extension of MCMC. It is most naturally applied to inference for models, such as state-space models, where there is an unobserved stochastic process. Standard MCMC algorithms, such as Gibbs samplers, can often struggle with such models due to strong dependence between the unobserved process and the parameters (see e.g. Pitt and Shephard, 1999a; Fearnhead, 2011). Alternative Monte Carlo methods, called particle filters, can be more efficient for inference about the unobserved process given known parameter values, but struggle when dealing with unknown parameters. The idea of particle MCMC is to embed a particle filter within an MCMC algorithm. The particle filter will then update the unobserved process given a specific value for the parameters, and MCMC moves will be used to update the parameter values. Particle MCMC has already been applied widely: in areas such as econometrics Pitt et al. (2012), inference for epidemics Rasmussen et al. (2011), systems biology Golightly and Wilkinson (2011), and probabilistic programming Wood et al. (2014).
The standard implementation of particle MCMC is to use an MCMC move to update the parameters and a particle filter to update the unobserved stochastic process (though see Murray et al., 2012; Wood et al., 2014, for alternatives). However, this may be inefficient, due to a large Monte Carlo error in the particle filter, or due to slow mixing of the MCMC moves. The idea of this paper is to consider generalisations of this standard implementation, which can lead to more efficient particle MCMC algorithms.
In particular, we suggest a data augmentation approach, where we introduce new latent variables into the model. We then implement particle MCMC on the joint posterior distribution of the parameters, unobserved stochastic process and latent variables. We use MCMC to update the latent variables and a particle filter to update the parameters and the stochastic process. The intuition behind this approach is that the latent variables can be viewed as containing information about the parameters and the stochastic process. The more information they contain, the lower the Monte Carlo error of the particle filter. However, the more information they contain, the stronger the dependencies in the posterior distribution, and hence the poorer the MCMC moves will mix. Thus, by carefully choosing our latent variables, we are able to appropriately trade-off the error in the particle filter against the mixing of the MCMC, so as to improve the efficiency of the particle MCMC algorithm.
In the next section we introduce particle filters and particle MCMC. Then in Section 3 we introduce our data augmentation approach. A key part of this is constructing a generic way of defining the latent variables so that the resulting particle MCMC algorithm is easy to implement. This we do by defining the latent variables to be observations of the parameters or of the stochastic process. By defining the likelihood for these observations to be conjugate to the prior for the parameters or the stochastic process we are able to analytically calculate quantities needed to implement the resultant particle MCMC algorithm. Furthermore, the accuracy of the pseudo-observations can be varied to allow them to contain more or less information. In Section 4 we investigate the efficiency of the new particle MCMC algorithms. We focus on three scenarios where we believe the data augmentation approach may be particularly useful. These are to improve the mixing of the particle Gibbs algorithm when there are strong dependencies between parameters and the unobserved stochastic process; to enable MCMC to be used within the particle filter; and to deal with diffuse initial distributions for the stochastic process. The paper ends with a discussion.
2 Particle MCMC
2.1 State-Space Models
For concreteness we consider application of particle MCMC to a state-space model, though both particle MCMC and the ideas we develop in this paper can be applied more generally. Throughout we will use and to denote general marginal and conditional probabililty density functions, with the arguments making it clear which distributions these relate to.
Our state-space model will be parameterised by , and we introduce a prior distribution for this parameter, . We then have a latent discrete-time stochastic process, . This process is often termed the state-process, and we assume it is a Markov process. Thus the probability density of a realisation of the state process can be written as
p(x1:T|θ)=p(x1|θ)T∏t=2p(xt|xt−1,θ).
We do not observed the state process directly. Instead we take partial observations at each time-point, . We assume that the observation at any just depends on the state process through its value at that time, . Thus we can write the likelihood of the observations, given the state process and parameters as,
p(y1:T|x1:T,θ)=T∏t=1p(yt|xt,θ).
Our interest is in calculating, or approximating, the posterior for the parameters and states:
p(x1:T,θ|y1:T) ∝ p(θ)p(x1:T|θ)p(y1:T|x1:T,θ) (1) = p(θ)[p(x1|θ)T∏t=2p(xt|xt−1,θ)][T∏t=1p(yt|xt,θ)].
We frequently use the notation of an extended state vector , which consists of the full path of the state process to time , and the value of the parameter. Thus consists of the full state-process and the parameter, and we are interested in calculating or approximating .
2.2 Particle Filters
Particle filters are Monte Carlo algorithms that can be used to approximate posterior distributions for state-space models, such as (1). We will describe a particle filter with a view to their use within particle MCMC algorithms, which is introduced in the next section.
Rather than use a particle filter to approximate (1), we will consider conditioning on part of the process. We will assume we condition on , which will be some function of . Thus the particle filter will target . The output of running the particle filter will just be an estimate of the marginal likelihood for the data given , and a single realisation for the parameter and state process. A simple particle filter algorithm is given in Algorithm 1.
If we stopped this particle filter algorithm at the end of iteration , we would have a set of values for the extended state, often called particles, each with an associated weight. These weighted particles give an approximation to . At iteration we propagate the particles and use importance sampling to create a set of weighted particles to approximate . This involves first generating new particles at time through (i) sampling particles from the approximation to ; and (ii) propagating these particles by simulating values for from the transition density of the state-process, . Secondly, each of these particles at is then given a weight proportional to the likelihood of the observation for that particle value. (See Doucet et al., 2000; Fearnhead, 2008, for more details). At the end of iteration we output a single value of , by sampling once from the particles at time , with the probability of choosing a particle being proportional to its weight.
A by product of the importance sampling at iteration is that we get a Monte Carlo estimate of , and the product of these for gives an unbiased estimate of the marginal likelihood (Del Moral, 2004, proposition 7.4.1). This unbiased estimate will be key to the implementation of particle MCMC, and is also output.
This is arguably the simplest particle filter implementation, and more efficient extensions do exist (see, for example Liu and Chen, 1998; Pitt and Shephard, 1999b; Carpenter et al., 1999; Gilks and Berzuini, 2001). In describing this algorithm we have implicitly assumed that has been chosen so that all the conditional distributions needed for implementing particle filter are easy to calculate or sample from, as required. In practice this will mean depending on either the parameters and/or the initial state or states of the latent process. The most common choice would be to condition on the parameter values, , or to have no conditioning, . The latter choice means that parameter values are sampled within the particle filter, and can often lead to particle degeneracy: whereby most or all particles at time have the same parameter value. This is less of an issue when we are implementing particle MCMC, as we output a single particle, than when using particle filters to approximate the posterior distribution (Andrieu et al., 2010, Section 2.2.2).
2.3 Particle MCMC
The idea of particle MCMC is to use a particle filter within an MCMC algorithm. There are two generic implementations of particle MCMC: particle marginal Metropolis-Hastings (PMMH) and particle Gibbs.
2.3.1 Particle marginal Metropolis-Hastings Algorithm
First we describe the particle marginal Metropolis-Hastings (PMMH) sampler (Andrieu et al., 2010, Section 2.4.2). This involves choosing, , an appropriate function of the extended state . Our MCMC algorithm has a state that is , a value for this function, a corresponding value for the extended state and an estimate for the marginal likelihood given the current value of . We assume that has been chosen so that we can both implement the particle filter of Algorithm 1, and also that we can calculate the marginal distribution, . A common choice is , though see below for other possibilities.
Within each iteration of PMMH we first propose a new value for , using a random walk proposal. Then we run a particle filter to both propose a new value of and to calculate an estimate for the marginal likelihood. These new values are then accepted with a probability that depends on the ratio of the new and old estimates of the marginal likelihood. Full details are given in Algorithm 2.
One intuitive interpretation of this algorithm is that we are using a particle filter to sample a new value of from an approximation to within a standard MCMC algorithm. If we ignore the approximation, and denote the current state by , then the acceptance probability of this MCMC algorithm would be
min{1,q(Z|Z′)p(y1:T|Z′)p(Z′)q(Z′|Z)p(y1:T|Z)p(Z)},
as the terms cancel as they appear in both the target and the proposal. The actual acceptance probability we use just replaces the, unknown, marginal likelihoods with our estimates. The magic of particle MCMC is that despite these two approximations, both in the proposal distribution for given and in the marginal likelihoods, the resulting MCMC algorithm has the correct stationary distribution.
2.3.2 Particle Gibbs
The alternative particle MCMC algorithm, particle Gibbs, aims to approximate a Gibbs sampler. A Gibbs sampler that targets would involve iterating between (i) sampling a new value for from its full-conditional given the other components of ; and (ii) sampling a new value for from its full conditional given , .
Implementing step (i) is normally straightforward. For example if , this involves sampling new parameter values from their full conditional given the path of the state-process. For many models, for example where there is conjugacy between the prior for the parameter and the model for the state and observation process, this distribution can be calculated analytically.
The difficulty, however, comes with implementing step (ii). The idea of particle Gibbs is to use a particle filter to approximate this step. Denote the current value for by . Then, informally, this involves implementing a particle filter but conditioned on one of the particles at time being . We then sample one of the particles at time from this conditioned particle filter and update to the value of this particle. This simulation step is called a conditional particle filter, or a conditional SMC sampler. For full details of this, and proof of the validity of the particle Gibbs sampler, see Andrieu et al. (2010).
2.3.3 Implementation
Whilst both particle MCMC algorithms have the correct stationary distribution regardless of the accuracy of the particle filter, the accuracy does affect the mixing properties. More accurate estimates of the marginal likelihood will lead to more efficient algorithms Andrieu and Roberts (2009). In implementing particle MCMC, as well as choosing details of the proposal distribution for , we need also to choose the number of particles to use in the particle filter. Theory guiding these choices for PMMH is given in Pitt et al. (2012), Doucet et al. (2012) and Sherlock et al. (2013).
The standard implementation of particle MCMC will have . However, our description is aimed to stress that particle MCMC is more general than this. It involves using MCMC proposals to update part of the extended state, and then a particle filter to update the rest. There is flexibility in choosing which part is updated by the MCMC move and which by the particle filter within the particle MCMC algorithm. For example, in order to deal with a diffuse initial distribution for the state-process, Murray et al. (2012) choose , so that MCMC is used to update both the parameters and the initial value of the state-process. Alternatively, Wood et al. (2014) choose , so that both the parameters and the path of the state are updated using the particle filter.
To demonstrate this flexibility, and discuss its impact on the performance of the particle MCMC algorithm, we will consider a simple example.
2.4 Example of Particle MCMC for linear-Gaussian Model
We consider investigating the efficiency of particle MCMC for a simple linear-Gaussian model where we can calculate the posterior exactly. The model has a one-dimensional state process, defined by
X1=σ1ϵ(X)1; Xt=γXt−1+σXϵ(X)t, for t=2,…,T,
where are independent standard normal random variables. For we have observations
Yt=θ+Xt+σYϵ(Y)t,
where are independent standard normal random variables. We assume that , , and are known, and thus the only unknown parameter is . Finally, we assume a normal prior for with mean 0 and variance .
We simulated data for 100 times steps, with , and chosen so that that process will have variance of 1 at stationarity. Our interest was in seeing how particle MCMC performs in situations where there is substantial uncertainty in and . Here we present results with as we vary . We implemented particle MCMC with , and . For the latter two implementations we used a random walk update for and with the variance set to the posterior variance; with independent random walk updates for and when .
To evalulate performance we ran each particle MCMC algorithm using 100 particles and iterations. We removed the first quarter of iterations as burn-in, and calculated autocorrelation times for estimating . These are shown in Figure 1(a).
The results show the trade-off in the choice of . Including more information in leads to poorer mixing of the underlying MCMC algorithm, but comes at the advantage of smaller Monte Carlo error in the estimate of the likelihood from the particle filter. This reduction in Monte Carlo error becomes increasingly important as the prior variance for increases. So for smaller values of the best algorithm has , whereas when we increase first the choice of then the choice of performs better.
For this simple example there are alternative ways to improve the mixing of particle MCMC. One reason that works better than for small is that the posterior distribution has strong correlation between and , and this correlation is ignored in the random walk update for . So using a better reparameterisation Papaspiliopoulos et al. (2003), or a better random walk proposal would improve the mixing for the case where . Also we have kept the number of particles fixed, whereas better results may be possible for and if we increase the number of particles as we increase . However the main purpose of this example is to give insight into when different choices of would work well: adding information to is one way to reduce the Monte Carlo error in estimates of the likelihood, but at the cost of slower mixing of the underlying MCMC algorithm particular if the posterior for has strong correlations which are not taken account of in the MCMC update.
3 Augmentation Schemes for Particle MCMC
The example at the end of the previous section shows that the choice of which part of the extended state is updated by the particle filter, and which by a standard MCMC move, can have a sizeable impact on the performance of particle MCMC. Furthermore the default option for state-space models of updating parameters by MCMC and the state-process by a particle filter, is not always optimal.
The potential within this choice can be greatly enhanced by augmenting the original model. We will introduce an extra latent variable, , drawn from some distribution conditional on . This will introduce a new posterior distribution
p(XT,z|y1:T)=p(XT|y1:T)p(z|XT), (2)
where is defined by (1) as before.
For any choice of , if we marginalise out of (2) we get (1). Our approach will be to implement a particle MCMC algorithm for sampling from (2). This will give us samples from (2), with the from (1) as required.
In implementing the particle MCMC algorithm, using either PMMH or particle Gibbs, we will choose . That is, we update the latent variable, , using the MCMC move, and we use a particle filter to update conditional on . By appropriate choice of we hope to obtain a particle MCMC algorithm that mixes better than the standard implementation.
Whilst, in theory, we have a completely free choice over the distribution of the new latent variable, , in practice we need to be able to easily implement the resulting particle MCMC algorithm. For both PMMH and particle Gibbs this will require us to be able to run a particle filter, or conditional particle filter, conditional on . In practice this will mean that we need to be able to easily simulate from , and, for , . For PMMH we will also need to be able to calculate the acceptance probability of the algorithm involves, which involves the term
p(z) = ∫p(z|XT)p(XT)dXT = ∫p(z|θ,x1:T)p(θ)p(x1)T∏t=2p(xt|xt−1)dθdx1:T.
Thus we are restricted to cases where these conditional and marginal distributions can be calculated. We investigate possible generic choices in the next section.
3.1 Generic Augmentation Schemes: Pseudo-Observations
In choosing an appropriate latent variable we need to first consider the ease with which we can implement the resulting particle MCMC algorithm. A generic approach is to model as an observation of either or or both. As is a latent variable we have added to the model, we call these pseudo-observations.
By the Markov property of the state-process, if only depends on and/or then we have . Thus to be able to implement the particle filters we only need to choose our model for the pseudo-observation so that we can simulate from and . To enable this we can let each component of be an independent pseudo-observation of a component of or , with the likelihood for the pseudo-observation chosen so that the prior for the relevant component of or is conjugate to this likelihood. Conjugacy will ensure that we can both simulate from the necessary conditional distributions and we can calculate as required to implement the particle MCMC algorithms. Constructing such models for the pseudo-observations is possible for many state-space models of interest. In some applications other choices for may be necessary or advisable: see Section 4.2 for an example.
To make these ideas concrete consider the linear Gaussian model of Section 2.4. We can choose where is a pseudo-observation of and is one of . As we have both a Gaussian prior for and a Gaussian initial distribution for , in each case a conjugate likelihood model arise from observations with additive Gaussian error. So for example we could choose
Zθ|θ∼N(θ,τ2). (3)
This would give a marginal distribution of and a conditional distribution of
θ|zθ∼N(zθσ2θτ2+σ2θ,τ2σ2θτ2+σ2θ).
Consider the case where we let depend only on . In specifying we will have a choice as to how informative is about – for example the choice of in (3) for the linear Gaussian model example. As such this gives a continuum between the implementations of particle MCMC in Section 2.3. In the limit as is increasingly informative about , we converge on an implementation of particle MCMC where we update using MCMC and using the particle filter. As becomes less informative, we would tend to an implementation of particle MCMC where both and are updated through the particle filter. Where on this continuum is optimal will depend on a trade-off between Monte Carlo error in our estimate of the marginal likelihood and the size of move in parameter space that we propose. When we simulate parameter values at the start of the particle filter we simulate from . Thus if is less informative, this will be a more diffuse distribution. Hence our sample of parameter values will be cover a larger region of the parameter space, and this will allow for potentially bigger proposed moves. However, the greater spread of parameter values we sample will mean a larger proportion of them will be in regions of low posterior probability, which will be wasteful. As a result the Monte Carlo variance of our estimate of the marginal-likelihood is likely to increase. Similar considerations will apply if we let depend on .
To gain some intuition about the trade-off in the choice of we implemented particle MCMC for the linear-Gaussian model with chosen as above. We simulated data as described in Section 2.4, but with . Our aim is to investigate how the performance of the new particle MCMC algorithm varies as we vary the variance of the noise in the definition of and . For this model we can calculate analytically the true posterior distribution for and , and we chose the variance of the pseudo observations to be proportional to the marginal posterior variances. So for a chosen we set
Var(Zx|x1)=kZVar(X1|y1:100), and Var(Zθ|θ)=kZVar(θ|y1:100).
Figure 1 (b) shows the resulting auto-correlation times for and as we vary . Choosing gives auto-correlation times similar to running particle MCMC with . As is increased the efficiency of the particle MCMC algorithm initially increases, due to the better mixing of the underlying MCMC algorithm as we run particle MCMC conditioning on less information. However for very large values the efficiency of particle MCMC becomes poor. In this case it starts behaving like particle MCMC with , for which the large Monte Carlo error in estimating the likelihood leads to poorer mixing. The best values of correspond to adding noise to the pseudo-observations which is similar in size to the marginal posterior variances of and , and we notice good performance for a relatively large range of values.
The improvement in mixing as we initially increase is due to two aspects. The MCMC moves are updating , and as we increase the noise for these pseudo-observations we reduce the posterior dependence between them. Thus we observed a reduction in the auto-correlation time for . However, over and above this, we have that and are able to vary given values of and . So for larger noise in the pseudo-observations we see substantially smaller auto-correlation times for than for .
3.2 MCMC within PMCMC
One approach to improve the performance of a particle filter is to use MCMC moves within it (see Fearnhead, 1998; Gilks and Berzuini, 2001). An example is to use a MCMC kernel to update particles prior to propagating them to the next time-step. This involves a simple adaptation of Algorithm 1. Assume that is a Markov kernel that has as its stationary distribution. Then we change step of 9 of Algorithm 1 to:
• Sample from , and from .
The use of such an MCMC can be particularly helpful for updating parameters, as they help to ensure some diversity in the set of parameter values stored by the particles is maintained. Where possible, a common choice of kernel is to update just the parameters of the particle by sampling from the full conditional . Often such updates can be implemented in a computationally efficient manner as the full conditional distribution just depends on the state-path through fixed-dimensional sufficient statistics Storvik (2002); Fearnhead (2002). For recent examples of the benefits of using such MCMC moves see, for example, Carvalho et al. (2010a), Carvalho et al. (2010b) and Gramacy and Polson (2011).
For standard implementations of particle MCMC, where , using MCMC to update the parameters within the particle filter is not possible. Whereas by introducing pseudo-observations for the parameters, , and then implementing particle MCMC with we can use such MCMC moves within the particle filter, or conditional particle filter. This can be of particular benefit if we use information from all particles, rather than just a single one. Andrieu et al. (2010) suggest an approach for doing this using Rao-Blackwellisation idea. We consider an alternative approach in Section 4.1.
4 Examples
4.1 Stochastic Volatility
A simple stochastic volatility model assumes a univariate state-process, defined as
X1=σ0ϵ(X)1;and Xt=γXt−1+σXϵ(X)t, for t=2,…,T,
where are independent standard normal random variables. For we have observations
Yt=σYexp{xt}ϵ(Y)t,
where are independent standard normal random variables. Thus the state process governs the variance of the observations, with larger values of meaning larger variability in the observation at time .
We assume are unknown. We introduce independent priors, with having a normal distribution with mean and variance , but truncated to ; while and have gamma prior distributions with shape parameters and respectively, and scale parameter and respectively. We assume that .
We introduce a four-dimensional pseudo-observation where conditional on
ZX∼N(X1,τ2X), Zγ∼N(γ,τ2γ),
ZβX∼gamma(nX,βX),and ZβY∼gamma(nY,βY).
This choice for the pseudo-observations ensures that we can calculate the required marginal and conditional distributions, see Appendix A for details. To finalise the specification of these models we need to choose the values for , , and which determine how informative the pseudo-observations are.
Particle Gibbs
We first compare different implementations of the Particle Gibbs algorithm. Our focus here is to show that using pseudo-observations can improve mixing in scenarios where the underlying Gibbs sampler, even if it could be implemented, would mix poorly. For the stochastic volatility model this corresponds to situations where there is strong dependence in the state-process.
We simulated data with observations, , and , so that the stationary variance of the state process is 1. We present results for priors with and ; and ; and and . This corresponds to the true values of and being in the tails of the prior, and a relatively uninformative prior for .
We implemented both the standard version of Particle Gibbs, with , and Particle Gibbs with conditioning on the pseudo-observations, defined above. We chose the tuning parameters of the pseudo observations so that the variance of the parameters given was slightly smaller than the posterior variance we observed from a pilot run. For further comparison we show results for Particle Gibbs with no conditioning, , again implemented with .
We ran the standard version with particles, and the other two versions with . This was based on choosing so that the estimate of the log-likelihood had a variance of around 1 Pitt et al. (2012). To compensate for the doubling of the computational cost of the conditional SMC sampler with the latter two versions, we ran the standard version of the Particle Gibbs for twice as many iterations. To ease comparison of results we then thinned the output by keeping the values of the chain on even iterations only.
Results are shown in Figure 2. The standard implementation performs badly here. This is because of strong dependencies between the parameters and the state-process that occurs for this model which means that the underlying Gibbs sampler mixes slowly. By conditioning on less information when running the conditional SMC sampler we reduce this dependence between the and which improves mixing. However, choosing results in a substantial decrease in efficiency of the conditional SMC sampler. This is particularly pronounced due to the relatively uninformative priors we chose, and the fact that one of the parameter values was in the tail of the prior. If much more informative priors were chosen, using would give similar results to the use of pseudo-observations. Also this effect could be reduced slightly by increasing further for this implementation of Particle Gibbs, but doing so will still lead to a less efficient sampler than using pseudo-observations.
Finally, we note that there are alternative approaches to reduce correlation for a Gibbs sampler, such as reparameterisation approaches Pitt and Shephard (1999b). Getting such ideas to work often requires implementations that are specific to a given application. By comparison, the use of pseudo-observations gives a general way of reducing the correlation between and that adversely affects the mixing of the underlying Gibbs sampler.
PMMH with MCMC
We now compare PMMH on the stochastic volatility model. Our focus is purely on how using MCMC within the particle filter can help improve mixing over a standard PMMH algorithm. We simulated data with parameter values as above. To help reduce the computational cost involved in analysing this data, and hence implementing the simulation study, using PMMH we use more informative priors (which meant we could use fewer particles when running the particle filters), with and ; and ; and and .
We compared two implementations of PMMH, one with and one with . For the latter we were able to use MCMC within the particle filter to update the parameter values, using standard Particle Learning algorithms Carvalho et al. (2010a). Using the criteria of Pitt et al. (2012), we chose and particles respectively for these implementations. We used random walk proposals with the variances informed by a pilot run Roberts and Rosenthal (2001). Again we compensate for the slow running of the PMMH with pseudo-observations by running the other PMMH algorithm for three times as long, and thinning: keeping only every third value.
The main improvement in effiency we observed with the second PMMH algorithm was through using the diversity in parameter values we obtain when using the Particle Learning Algorithm. Our approach for implementing this was to output a set of equally weighted particle values from the particle learning algorithm. We then make a decision as to whether to accept this set of particles, with the normal acceptance probability. Finally, we add an extra step to each iteration where we resample the state of the PMMH algorithm from the last stored set of particles. Full details are given in Algorithm 3.
Trace-plots from part of the PMMH run are shown in Figure 3. These highlight the main improvement that using particle learning within PMMH gives. Both runs of PMMH can have long periods were they reject the output of the particle filter. However, by utilising the diversity in the parameter values of the particles that are output when particle learning is used, the PMMH algorithm is still able to mix over different parameter values in that case. Calculations of effective sample sizes show that this leads to a roughly three-fold increase in effective sample sizes (for a given CPU cost) for estimating and .
Remember we cannot use particle learning to update parameters for the standard implementation of PMMH, where , as in that case the particle filter is implemented conditional on a fixed set of parameter values.
4.2 Dirichlet Process Mixture Models
We now consider inference for a mixture model used to infer population structure from population genetic data. Assume we have data from a set of diploid individuals, and this data consists of the genotype of each individual at a set of unlinked loci. Thus each locus will have a set of possible alleles (different genetic types), and the data for an individual at that locus will be which alleles are present on each of two copies of that individual’s genome. We further assume that the individuals each come from one of an unknown number of populations. The frequency of each allele at each locus will vary across these populations. We wish to infer how many populations there are, and which individuals come from the same population.
This is an important problem in population genetics. We will consider a model based on that of Pritchard et al. (2000a). Though see Pritchard et al. (2000a), Nicholson et al. (2002) and Falush et al. (2003a) for extensions of this model; Falush et al. (2003b), Pritchard et al. (2000b) and Rosenberg et al. (2002) for example applications; and Price et al. (2006) and Patterson et al. (2006) for alternative approaches to this problem.
Assume we have loci. At locus we have alleles. The allele frequencies of these alleles in population are given by . The genotype of individual at locus is . Let be a unobserved latent variable which defines the population that individual is from. Then the conditional likelihood of given is
p(yi|xi=j)=L∏l=1p(j,l)y(1)i,lp(j,l)y(2)i,l.
This model assumes the loci are unlinked and there is no admixture, hence conditional on the data at each locus are independent.
We assume conjugate Dirichlet priors for the allele frequencies in each population. These priors are independent across both loci and population. For locus the parameter vector of the Dirichlet prior is .
We use a mixture Dirichlet process (MDP) model Ferguson (1973) for the prior distribution of latent variables . We will use the following recursive representation of the MDP model Blackwell and MacQueen (1973). Let be the population of origin of the first individuals, and define to be the number of populations present in . We number these populations , and let be the number of these individuals assigned to population . Then
p(xi+1=j|x1:i)={nj(x1:i)/(i+α)if j≤m(x1:i),α/(i+α)if j=m(x1:i)+1. (4)
This model does not pre-specify the number of populations present in the data. Note that the actual labelling of populations under the MDP model is arbitrary, and the information in is essentially which subset of individuals belong to each of the populations. In our implementation the actual labels are defined by the order of the individuals in the data set. With population 1 being the population that the first individual belongs to, population 2 is the population that the first individual not in population 1 belongs to, and so on.
Inference for this model was considered in Fearnhead (2008) for the case where and were known. Here we introduce hyperpriors for both these parameters, and perform inference using particle MCMC. We use independent gamma priors, with and . If we condition on values for and , then Fearnhead (2008) presents an efficient particle algorithm for this problem. This particle filter is based on ideas in Fearnhead and Clifford (2003) and Fearnhead (2004).
The particle filter of Fearnhead (2008) can struggle in applications where is large, due to problems with initialisation. To show this we considered inference for individuals at loci, using a subset of data taken from Rosenberg et al. (2002). Figure 4 plots estimates of for increasing values of . This shows how the posterior probability of the first two individuals being from the same population changes as we analyse data from more people. Initially this is close to 1, whereas once all data has been analysed the probability is essentially 0. This substantial change in probability causes problems in a particle filter, as all particles with are likely to lost during resampling in the early iterations of the algorithm.
To overcome this problem of initialisation of the particle filter for this application we propose to introduce a pseudo observation, , that contains information about the populations of a random subset of the individuals. The distribution of given is obtained by (i) sampling the number of individuals in the subset, say; (ii) choosing individuals at random from the sample, ; and (iii) letting , the subset of individuals and their population labels.
As mentioned above, the actual values of the population labels is arbitrary, and just contains information about which of the individuals belong to the same population. In practice, at each iteration we re-order the individuals in the sample so that individuals become the first individuals, and the order of the remaining individuals is chosen uniformly at random. The labels for the new first individuals are changed to be consistent with our recursive represenation of the MDP model above.
In implementing PMCMC we use . Our proposal distribution for is just its true conditional distribution given . We can easily adapt the particle filter of Fearnhead (2008) to condition on , by fixing the labels of the first individuals in the sample to those specified by . We use a random walk proposal for updating and an independence proposal for . Further details are given in Appendix B.
We compared the reparameterised PMCMC with this choice of with a standard PMCMC algorithm where . Our aim is purely to investigate the relative efficiency of the two implementations of PMCMC on this challenging problem. We ran each PMCMC algorithm for iterations, storing only every 100th value. We implemented the new PMCMC algorithm using 20 particles for the particle filter, and with storing population information from an average of 5 individuals. We implemented the standard PMCMC algorithm with 20, 40 and 60 particles. Results, in terms of trace and acf plots for are shown in Figure 5.
We see that the reparameterised PMCMC algorithm has substantially better mixing than the standard PMCMC algorithm, even when the latter used 3 times as many particles, and hence would have three times the CPU cost per iteration. For all the standard PMCMC algorithms, the chain gets stuck for substantial periods of time. This is due to a large variance of the estimate of the likelihood. By running the particle filter conditional on we obtain a substantial reduction in the variance of our estimates of the likelihood, and hence avoid this problem.
Estimated auto-correlation times are 1.3 for the reparameterised PMCMC algorithm with 20 particles, and 105, 56 and 36 for the standard PMCMC with 20, 40 and 60 particles respectively. After taking account that the CPU cost of an iteration of PMCMC is proportional to the number of particles, this suggests the re-parameterised PMCMC is about 80 times more efficient than each of the standard PMCMC algorithms.
5 Discussion
We have introduced a way to generalise particle MCMC through data augmentation. The idea is to introduce new latent variables into the model, and then to implement particle MCMC where the MCMC moves update the latent variables, and the particle filter updates the rest of the variables in the model. By careful choice of the latent variables, we have shown this can lead to substantial gains in efficiency in situations where the standard particle MCMC algorithm performs poorly. For the Stochastic Volatility example of Section 4.1 we saw that it can help break down dependencies that make the particle Gibbs algorithm mix slowly, and can enable particle learning ideas to be used within the particle filter component of particle MCMC. It can also help for models where the particle filter struggles with initialisation, that is where at early time-steps the filter is likely to sample particles in areas that are inconsistent with the full data, as we saw in Section 4.2.
The ideas in this paper bear some similarity with the marginal augmentation approaches for improving the Gibbs sampler (e.g. van Dyk and Meng, 2001). In both cases, adding a latent variable to the model, and implementing the MCMC algorithm for this expanded model, can improve mixing. Our way of introducing the latent variables, and the way they are used are completely different though. However, both approaches can improve mixing for the same reason. Introducing the latent variables reduces the correlation between variables updated at different stages of the Gibbs, or particle Gibbs, sampler.
The new data augmentation ideas add great flexibility to the particle MCMC algorithm. One key open question is how to choose the best latent variables to introduce, or, equivalently, how to tune the variance of the pseudo-observations. Our experience to date suggests that you want to choose this so that the conditional distribution of the parameters, or of the initial state, given the pseudo-observations has a similiar variance to that of the posterior distribution.
Acknowledgements: The first author was supported by the Engineering and Physical Sciences Research Council grant EP/K014463/1.
Appendix A Calculations for the Stochastic Volatility Model
First consider . Standard calculations give
p(βX|zβX) ∝ p(βX)p(zβX|βX) ∝ βax−1xexp{−bxβx}(βnxXexp{−βxzβX})
This gives that the conditional distribution of given is gamma with parameters and . Furthermore the marginal distribution for is
p(zβX) = ∫p(βX)p(zβX|βX)dβX = baxxznx−1βxΓ(ax)Γ(nx)∫βax+nx−1xexp{−(bx+zβXβx}%dβX = (Γ(ax+nx)baxxΓ(ax)Γ(nx))⎛⎝znx−1βx(zβx+bx)nx+ax⎞⎠
The calculations for are identical.
Calculations for and are as for the linear Gaussian model (see Section 3).
Appendix B Calculations for the Dirichlet Process Mixture Model
The conditional distribution of given can be split into (i) the marginal distribution for , ; (ii) the conditional distribution of the sampled individuals, , given . Given , the clustering of these individuals is deterministic, being defined by the clustering .
The marginal distribution of thus can be written as
p(Zx)=p(v)p(i1,…,iv|v)p(xi1,…,xiv).
Where we that, due to uniform sampling of the individuals,
p(i1,…,iv|v)=(nv).
Finally is given by the Dirichlet process prior. If we relabel the populations so that , population 2 is the population of the first individual in that is not in population 1, and so on; then for ,
p(xi1,…,xiv)=v∏j=2p(xij|xi1,…,xij−1),
with defined by (4).
Within the PMMH we use a proposal for given that is its full conditional
q(Zx|x1:n)=p(Zx|x1:n)=p(v)p(i1,…,iv|v).
In practice we take the distribution of to be a Poisson distribution with mean 5, truncated to take values less than . (Similar results were observed as we varied both the distribution and the mean value.)
References
Want to hear about new tools we're making? Sign up to our mailing list for occasional updates. | 9,544 | 44,587 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2020-40 | latest | en | 0.817062 |
https://brainly.com/question/309671 | 1,484,698,056,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | 805,723,997 | 9,100 | 2015-02-17T20:24:59-05:00
Therefore 2 is a rational root of this equation and is a factor.
We can find the missing quadratic using inspection:
or by dividing the cubic by the linear factor:
Finally, factorise the new quadratic for the two remaining rational roots:
Therefore both 3 and -3 are rational roots of the equations.
Collecting all the answers gives us:
(or fancier)
as the rational roots of this equation | 101 | 423 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.40625 | 3 | CC-MAIN-2017-04 | latest | en | 0.911471 |
https://tbc-python.fossee.in/convert-notebook/Fundamentals_Of_Physical_Chemistry_by_H._D._Crockford,_Samuel_B.Knight/Chapter_8_Chemical_Equlibrium.ipynb | 1,607,116,355,000,000,000 | text/html | crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00224.warc.gz | 518,589,268 | 36,292 | # Chapter 8 Chemical Equlibrium¶
## Example 8.1 , Page no:164¶
In [1]:
import math
from __future__ import division
#initialisation of variables
x= 3.33
n= 5 #moles
#CALCULATIONS
N= x**2/(n-x)**2
#RESULTS
print"moles of water and ester formed=",round(N);
moles of water and ester formed= 4.0
## Example 8.2 , Page no:165¶
In [2]:
import math
from __future__ import division
#initialisation of variables
n= 1 #mole
x= 3
y= 4
#CALCULATIONS
r= x**2/n**2
z= n/x
n= n+z
n1= x-z
#RESULTS
print"moles of acid and alcohol=",round(n,2),"moles";
print"moles of ester and water=",round(n1,2),"moles";
moles of acid and alcohol= 1.33 moles
moles of ester and water= 2.67 moles
## Example 8.3 , Page no:165¶
In [3]:
import math
from __future__ import division
#initialisation of variables
k= 1.1*10**-5
V= 600 #ml
n= 0.4 #mole
#CALCULATIONS
m= n*1000/V
x= (-k+math.sqrt(k**2+4*4*0.67*k))/(2*4)
M= 2*x
P= x*100/m
#RESULTS
print"molar concentration of NO2=",'%.2E'%M,"mol per litre";
print"per cent dissociation=",round(P,2),"per cent";
molar concentration of NO2= 2.71E-03 mol per litre
per cent dissociation= 0.2 per cent
## Example 8.4 , Page no:167¶
In [4]:
import math
from __future__ import division
#initialisation of variables
pno2= 0.31 #atm
pn2o2= 0.69 #atm
p= 10 #atm
#CALCULATIONS
Kp= pno2**2/pn2o2
x= (-Kp+math.sqrt(Kp**2+4*4*p*Kp))/(2*4)
p1= p-x
p2= 2*x
#RESULTS
print"Kp=",round(Kp,2);
print"N2O4=",round(p1,2);
print"NO2=",round(p2,2);
Kp= 0.14
N2O4= 9.43
NO2= 1.15
## Example 8.5 , Page no:172¶
In [5]:
import math
from __future__ import division
#initialisation of variables
T= 65 #C
R= 1.98 #cal/mol K
kp= 2.8
kp1= 0.141
T1= 25 #C
#CALCULATIONS
H= math.log10(kp/kp1)*2.303*R*(273+T1)*(273+T)/(T-T1)
#RESULTS
print"average heat of reaction=",round(H+62),"cal";
average heat of reaction= 14965.0 cal | 767 | 1,834 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.515625 | 4 | CC-MAIN-2020-50 | latest | en | 0.372553 |
https://mathoverflow.net/questions/94149/p%C3%B3lyas-conjecture-on-the-spectra-of-the-laplacians | 1,628,013,442,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00125.warc.gz | 382,629,672 | 32,281 | # Pólya's conjecture on the spectra of the Laplacians
Recently I've learned something about the spectra of the Laplacians. Given a bounded domain $\Omega \subset \mathbb{R}^n$ with $\partial \Omega$ smooth, we can consider eigenfunctions of Dirichlet type, i.e. $u \in C^2(\Omega)\cap C(\partial \Omega)$ s.t. $-\triangle u=\lambda u$ and $u|_{\partial \Omega}=0$. By standard results in functional analysis, $-\triangle$ has a discrete spectrum $0<\lambda_1 \leq \lambda_2 \leq \cdots$ with $\lambda_k \to +\infty$.
A well-known asymptotic formula by Weyl says $\displaystyle \lambda_k \sim W_n (\frac{k}{V(\Omega)})^{2/n}$. We refer $W_n$ as the Weyl constant. And Pólya conjectured that $\displaystyle \lambda_k \geq W_n (\frac{k}{V(\Omega)})^{2/n}$ holds.
As far as I know, the best known result is due to Li and Yau. They proved the conjecture in the sense of "average": $\displaystyle \sum_{j=1}^k \lambda_j \geq \frac{nW_n}{n+2}{k}^{(n+2)/n}{V(\Omega)}^{-n/2}$.
I find their argument is elementary, only employing some standard Fourier tricks. And the big picture is quite clear if put into the quantum framework. My question is in some sense "soft", but it does make me feel absurd: what makes it so difficult to estimate the eigenvalues one by one while their average is so well understood? Does anyone work on this problem by carrying further Li & Yau's analysis? I do know one instance: Kröger has transplanted their proof to Neumann settings, but how about the original Dirichlet problem?
• Fourier analysis comes with an uncertainity principle, which allows at most counting eigenvalues in a certain range, but disallows us to isolate them. At least, that is what one encounters in the spectral analysis of hyperbolic manifolds. – Marc Palm Apr 15 '12 at 21:22
• Some reference would be extremely appreciated, but thanks any way Mrc Plm! – Zhang Xiao Apr 16 '12 at 2:47
Since your question is "soft", I think it is okay if I give a soft answer in the case of a hyperbolic compact Riemann surface.
The Selberg trace formula describes the spectrum of the Beltrami-Laplace operator pretty well. You get an identity $$\sum\limits_{\lambda} f(\lambda) = \sum\limits_{\gamma} \widehat{f( \log \gamma)} + \dots ,$$ where $\lambda$ is asymptotic to the square of an eigenvalue and $\gamma$ to the lengths of closed geodesics.
The best known error term for the Weyl law her is $O(\sqrt{T} / \log T)$, so approxiamtely the square root of the main term.
Laplace-Beltrami Operator on Surfaces
Eigenvalues of Laplacian-Beltrami operator
Now why can we not do better? One of the main problems, is that you are only allowed to plugin holomorphic $f$, so it will be hard to estimate single objects. On the other hand, if you would be allowed to pluggin something compactly supported, then $\widehat{f}$ becomes entire, and is not compactly supported.
The fact that the support of $f$ and $\widehat{f}$ can not be simultaneously small, is a first instance of the uncertainity principle of Fourier analysis. The name is certainly derived from the Heisenberg uncertainity prinicple, eigenvalues are here "waves" and length of closed geodesics here "particles".
Similar things are happening for prime numbers and zeros of the Riemann zeta function (Weil's excplicit formula) or in quantum chaos (Gutzwiller trace formula). Already in finite group theory, if you try to compare traces of irreducible representations with conjugacy classes.
Perhaps Paley-Wiener theorem's give you a good flavor for first instances of Fourier uncertainity. Stein-Shakarchi "Complex Analysis" has a good treatment, chapter 4, I guess.
Best, Marc.
• Perhaps you will like "Spectra of Hyperbolic Surfaces" from Sarnak. It's freely available. – Marc Palm Apr 16 '12 at 17:55
• I am reading the fascinating Sarnak now. Best regards Plm:) – Zhang Xiao Apr 17 '12 at 17:15
You might want to check out Laptev's notes. -- he gives some background and explanations of methods involved.
• To be frank I don't think the above notes make an attempt to attack the problem by detailed analysis on the "quantum spectrum". But still many thanks! – Zhang Xiao Apr 16 '12 at 2:49 | 1,099 | 4,152 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.875 | 3 | CC-MAIN-2021-31 | latest | en | 0.886184 |
https://www.justintools.com/unit-conversion/frequency.php?k1=decihertz&k2=zettahertz | 1,627,237,427,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046151760.94/warc/CC-MAIN-20210725174608-20210725204608-00580.warc.gz | 883,398,143 | 26,817 | Please support this site by disabling or whitelisting the Adblock for "justintools.com". I've spent over 10 trillion microseconds (and counting), on this project. This site is my passion, and I regularly adding new tools/apps. Users experience is very important, that's why I use non-intrusive ads. Any feedback is appreciated. Thank you. Justin XoXo :)
# FREQUENCY Units Conversiondecihertz to zettahertz
1 Decihertz
= 1.0E-22 Zettahertz
Category: frequency
Conversion: Decihertz to Zettahertz
The base unit for frequency is hertz (Non-SI/Derived Unit)
[Decihertz] symbol/abbrevation: (dHz)
[Zettahertz] symbol/abbrevation: (ZHz)
How to convert Decihertz to Zettahertz (dHz to ZHz)?
1 dHz = 1.0E-22 ZHz.
1 x 1.0E-22 ZHz = 1.0E-22 Zettahertz.
Always check the results; rounding errors may occur.
Definition:
In relation to the base unit of [frequency] => (hertz), 1 Decihertz (dHz) is equal to 0.1 hertz, while 1 Zettahertz (ZHz) = 1.0E+21 hertz.
1 Decihertz to common frequency units
1 dHz = 0.1 hertz (Hz)
1 dHz = 0.0001 kilohertz (kHz)
1 dHz = 1.0E-7 megahertz (MHz)
1 dHz = 1.0E-10 gigahertz (GHz)
1 dHz = 0.1 1 per second (1/s)
1 dHz = 6.0000000024 revolutions per minute (rpm)
1 dHz = 0.1 frames per second (FPS)
1 dHz = 2160.0138240885 degree per minute (°/min)
1 dHz = 1.0E-13 fresnels (fresnel)
Decihertzto Zettahertz (table conversion)
1 dHz = 1.0E-22 ZHz
2 dHz = 2.0E-22 ZHz
3 dHz = 3.0E-22 ZHz
4 dHz = 4.0E-22 ZHz
5 dHz = 5.0E-22 ZHz
6 dHz = 6.0E-22 ZHz
7 dHz = 7.0E-22 ZHz
8 dHz = 8.0E-22 ZHz
9 dHz = 9.0E-22 ZHz
10 dHz = 1.0E-21 ZHz
20 dHz = 2.0E-21 ZHz
30 dHz = 3.0E-21 ZHz
40 dHz = 4.0E-21 ZHz
50 dHz = 5.0E-21 ZHz
60 dHz = 6.0E-21 ZHz
70 dHz = 7.0E-21 ZHz
80 dHz = 8.0E-21 ZHz
90 dHz = 9.0E-21 ZHz
100 dHz = 1.0E-20 ZHz
200 dHz = 2.0E-20 ZHz
300 dHz = 3.0E-20 ZHz
400 dHz = 4.0E-20 ZHz
500 dHz = 5.0E-20 ZHz
600 dHz = 6.0E-20 ZHz
700 dHz = 7.0E-20 ZHz
800 dHz = 8.0E-20 ZHz
900 dHz = 9.0E-20 ZHz
1000 dHz = 1.0E-19 ZHz
2000 dHz = 2.0E-19 ZHz
4000 dHz = 4.0E-19 ZHz
5000 dHz = 5.0E-19 ZHz
7500 dHz = 7.5E-19 ZHz
10000 dHz = 1.0E-18 ZHz
25000 dHz = 2.5E-18 ZHz
50000 dHz = 5.0E-18 ZHz
100000 dHz = 1.0E-17 ZHz
1000000 dHz = 1.0E-16 ZHz
1000000000 dHz = 1.0E-13 ZHz
(Decihertz) to (Zettahertz) conversions | 1,026 | 2,224 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.90625 | 3 | CC-MAIN-2021-31 | longest | en | 0.542892 |
https://chandoo.org/forum/threads/something-like-sumifs-function-please-help.42096/ | 1,571,016,016,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00277.warc.gz | 510,458,578 | 10,808 | #### Ivan1984
##### New Member
Dear all,
i have three different rows with filters, for example, Ronaldo played for his country and club on five different positions.
I want to calculate how much goals and assists score like striker, left wing, right wing etc... but somewhere i wanna know how much he score only for club and two different position... Off course, i have 500 different football players and i wanna know for any of them how many he scores or assist for his country/club, and how many from combination of position...
sorry, i don t know how better to explain what i try to get...
I try with sumifs function, but i cannot get values by two different condition in one column.
Example attached, i would apreciate help.
Thank you very much
#### Attachments
• 10.6 KB Views: 8
#### XOR LX
##### Active Member
Hi,
In J4:
=SUM(SUMIFS(D:D,\$A:\$A,\$G4,\$B:\$B,IF(\$H4="International/Club","*",\$H4),\$C:\$C,MID(\$I4,3*{0;1;2;3;4}+1,2)))
and copied down and right.
This assumes that:
1) The only 2 possibilities for the CAPS column are "International" and "Club".
2) There are only 5 positions: "LW", "RW", "ST", "AM" and "MC" (though the above can easily be modified to accommodate more).
Regards
#### Ivan1984
##### New Member
Hello Xor LX,
thank you very much for your effort and quickness.
Something don t work. I have excel 2016 and i must separate conditions with ;. i change that and at the end of formula excel said i have too few arguments: +1,2 this part of formula show me...
On your first comment you sad that i have only 2 possibilities, but in reality is three (club, international, club and international).
Sorry, but i don t get it that...
Thank you very much again.
#### AliGW
##### Active Member
Change that final comma to a semi-colon - it is NOT a decimal point.
This has nothing to do with the version of Excel - it is your locale that requires you to use semi-colons instead of commas, and commas instead of decimal points (which this isn't).
#### Ivan1984
##### New Member
AliGW, thank you very much. It works.
Thank you!
#### AliGW
##### Active Member
No problem - glad to help. | 546 | 2,133 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.8125 | 3 | CC-MAIN-2019-43 | latest | en | 0.92868 |
https://www.jagranjosh.com/articles/upsee-important-questions-and-preparation-tips-application-of-derivatives-1512727172-1 | 1,537,278,577,000,000,000 | text/html | crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00128.warc.gz | 603,113,472 | 36,397 | UPSEE: Important Questions and Preparation Tips – Application of Derivatives
Dec 8, 2017 15:26 IST
WBJEE 2018: Application of Derivatives
In this article, we bring to you chapter notes of chapter Application of Derivatives including important concepts, formulae and some previous year solved questions for UPSEE/UPTU 2018. About 2-3 questions are always asked from this topic in the examination.
These notes contain all important topics related to chapter Application of Derivatives like rate of change, increasing function, decreasing function, equation of tangent, equation of normal, critical points, local maxima, local minima, first derivative test, second derivative test etc.
These chapter notes will help students in retaining maximum concepts related to chapter Application of Derivatives. These notes are based on the latest syllabus of UPSEE/UPTU examination 2018.
These notes can be used for quick revision when only few days are left before the examination.
With the help of solved previous year questions students can predict the difficulty level of the questions which can be asked in coming of UPSEE/UPTU examination 2018.
Important Concepts:
UPSEE: Important Questions and Preparation Tips – Limits
UPSEE: Important Questions and Preparation Tips – Inverse Trigonometric Functions
UPSEE: Important Questions and Preparation Tips – Trigonometric Functions
Some previous year solved questions are given below:
Question 1:
Solution 1:
Hence, the correct option is (b).
UPSEE: Important Questions and Preparation Tips – Hyperbola
Question 2:
Solution 2:
Hence, the correct option is (a).
UPSEE: Important Questions and Preparation Tips – Ellipse
Question 3:
Solution 3:
Hence, the correct option is (c).
UPSEE: Important Questions and Preparation Tips – Probability
Question 4:
Solution 4:
Hence, the correct option is (d).
UPSEE: Important Questions and Preparation Tips – Binomial Theorem
Question 5:
Solution 5:
Hence, the correct option is (d).
UPTU/UPSEE 2018 Examination: Everything you should know
Is it true that JEE Main and JEE advanced 2018 will be subjective?
Latest Videos
All Fields Mandatory
• (Ex:9123456789) | 484 | 2,171 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.15625 | 4 | CC-MAIN-2018-39 | longest | en | 0.860368 |
https://edurev.in/test/30515/Test-Patterns-1 | 1,708,664,181,000,000,000 | text/html | crawl-data/CC-MAIN-2024-10/segments/1707947474360.86/warc/CC-MAIN-20240223021632-20240223051632-00663.warc.gz | 250,696,200 | 44,709 | Test: Patterns - 1 - EmSAT Achieve MCQ
# Test: Patterns - 1 - EmSAT Achieve MCQ
Test Description
## 15 Questions MCQ Test - Test: Patterns - 1
Test: Patterns - 1 for EmSAT Achieve 2024 is part of EmSAT Achieve preparation. The Test: Patterns - 1 questions and answers have been prepared according to the EmSAT Achieve exam syllabus.The Test: Patterns - 1 MCQs are made for EmSAT Achieve 2024 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests for Test: Patterns - 1 below.
Solutions of Test: Patterns - 1 questions in English are available as part of our course for EmSAT Achieve & Test: Patterns - 1 solutions in Hindi for EmSAT Achieve course. Download more important topics, notes, lectures and mock test series for EmSAT Achieve Exam by signing up for free. Attempt Test: Patterns - 1 | 15 questions in 30 minutes | Mock test for EmSAT Achieve preparation | Free important questions MCQ to study for EmSAT Achieve Exam | Download free PDF with solutions
1 Crore+ students have signed up on EduRev. Have you?
Test: Patterns - 1 - Question 1
### Which of the following statements is true about patterns in Python?
Detailed Solution for Test: Patterns - 1 - Question 1
Patterns in Python refer to a sequence of characters or numbers arranged in a specific order, often used for decorative or logical purposes.
Test: Patterns - 1 - Question 2
### In Python, how can you create a star pattern using a loop?
Detailed Solution for Test: Patterns - 1 - Question 2
A star pattern can be created by using the range() function in a loop and printing the required number of stars in each iteration.
Test: Patterns - 1 - Question 3
### What is the purpose of using nested loops in pattern creation?
Detailed Solution for Test: Patterns - 1 - Question 3
Nested loops are used to create complex patterns that involve multiple levels of repetition. Each nested loop controls a different level of repetition.
Test: Patterns - 1 - Question 4
Which of the following statements is true about number patterns in Python?
Detailed Solution for Test: Patterns - 1 - Question 4
Number patterns can be created by using loops and conditional statements to control the order and repetition of numbers in the pattern.
Test: Patterns - 1 - Question 5
How can you create a character pattern in Python?
Detailed Solution for Test: Patterns - 1 - Question 5
Character patterns can be created by using the ord() and chr() functions to convert characters to their ASCII values and vice versa.
Test: Patterns - 1 - Question 6
What is the output of the following code?
for i in range(5):
print('*' * i)
Detailed Solution for Test: Patterns - 1 - Question 6
The code prints a star pattern where each row contains an increasing number of asterisks.
Test: Patterns - 1 - Question 7
What is the output of the following code?
for i in range(5, 0, -1):
print(str(i) * i)
Detailed Solution for Test: Patterns - 1 - Question 7
The code prints a number pattern where each row contains a repeating number based on the row index.
Test: Patterns - 1 - Question 8
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + str(i) * i)
Detailed Solution for Test: Patterns - 1 - Question 8
The code prints a combination of spaces and numbers to create a pyramid-like pattern.
Test: Patterns - 1 - Question 9
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + '* ' * i)
Detailed Solution for Test: Patterns - 1 - Question 9
The code prints a star pattern where each row contains an increasing number of asterisks and spaces.
Test: Patterns - 1 - Question 10
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + chr(64 + i) * i)
Detailed Solution for Test: Patterns - 1 - Question 10
The code prints a character pattern where each row contains a repeating character based on the row index, starting from 'E' and descending to 'A'.
Test: Patterns - 1 - Question 11
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + ' '.join(str(j) for j in range(1, i + 1)))
Detailed Solution for Test: Patterns - 1 - Question 11
The code prints a number pattern where each row contains an increasing sequence of numbers.
Test: Patterns - 1 - Question 12
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + ' '.join(str(j) for j in range(i, 0, -1)))
Detailed Solution for Test: Patterns - 1 - Question 12
The code prints a number pattern where each row contains a decreasing sequence of numbers.
Test: Patterns - 1 - Question 13
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + ' '.join(str(j) for j in range(i, 0, -1)) + ' '.join(str(j) for j in range(2, i + 1)))
Detailed Solution for Test: Patterns - 1 - Question 13
The code prints a combination of increasing and decreasing sequences of numbers to create a pattern.
Test: Patterns - 1 - Question 14
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + ' '.join(str(j) for j in range(i, 0, -1)) + ' '.join(str(j) for j in range(i - 1, 0, -1)))
Detailed Solution for Test: Patterns - 1 - Question 14
The code prints a combination of increasing and decreasing sequences of numbers to create a pattern.
Test: Patterns - 1 - Question 15
What is the output of the following code?
for i in range(1, 6):
print(' ' * (5 - i) + ' '.join(str(j) for j in range(i, 0, -1)) + ' '.join(str(j) for j in range(i - 1, 0, -1)) + ' '.join(str(j) for j in range(2, i + 1)))
Detailed Solution for Test: Patterns - 1 - Question 15
The code prints a combination of increasing and decreasing sequences of numbers to create a pattern.
Information about Test: Patterns - 1 Page
In this test you can find the Exam questions for Test: Patterns - 1 solved & explained in the simplest way possible. Besides giving Questions and answers for Test: Patterns - 1, EduRev gives you an ample number of Online tests for practice | 1,553 | 6,032 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.765625 | 3 | CC-MAIN-2024-10 | latest | en | 0.877917 |
https://www.effortlessmath.com/tag/prisms/ | 1,726,632,430,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00824.warc.gz | 676,595,115 | 6,297 | # Prisms
Search in Prisms articles.
## Calculating the Surface Area of Prisms and Cylinders
From the skyscrapers that pierce the skyline to the soda cans in our fridges, prisms and cylinders are foundational shapes in our daily lives. But while their outward appearance may seem straightforward, there’s an underlying mathematical beauty in determining just how much space they cover on the outside. In this guide, we’ll dive deep into […]
## Unfolding Shapes: How to Identify the Nets of Prisms and Pyramids
Hello, math enthusiasts! Imagine you could unfold a three-dimensional shape and lay it flat. The result is what we call a “net”. Today, we’re going to dive into the world of nets, focusing on prisms and pyramids. | 160 | 726 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.828125 | 3 | CC-MAIN-2024-38 | latest | en | 0.914327 |
https://www.econgraphs.org/graphs/micro/consumer_theory/indifference_curves_math | 1,513,086,825,000,000,000 | text/html | crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00716.warc.gz | 744,417,538 | 4,089 | # Indifference Curves{{ params.hasOwnProperty('selectedUtility') ? ': ' + model.utility.title : '' }}
utility parameters
a = {{ params.a }}
b = {{ params.b }}
r = {{model.utility.r | number: 2}}:
u(x,y) = x^ay^b || =x^{ {{ params.a }} }y^{ {{ params.b }} }
u(x,y) = \min \left\{ \frac{x}{a}, \frac{y}{b} \right\} || = \min \left\{ \frac{x}{ {{ params.a }} }, \frac{y}{ {{ params.b }} } \right\}
u(x,y) = ax + by || = {{ params.a }}x + {{ params.b }}y
u(x,y) = (ax^r + by^r)^{1/r} || = ({{ params.a }}x^{ {{ model.utility.r | number:2 }} } + {{ params.b }}y^{ {{ model.utility.r | number:2 }} })^{ {{ 1/model.utility.r | number: 2 }} }
u(x,y) = \frac{a}{a+b}\ln x + \frac{b}{a+b}y || = {{ params.a/(params.a + params.b) | number:2 }}\ln x + {{ params.b/(params.a + params.b) | number:2 }}y
u({{ params.x }},{{ params.y }}) = {{ model.utility.utility({x: params.x, y: params.y}) | number:2 }}
MRS line indifference curve map preferred set dispreferred set
Copyright (c) Christopher Makler / econgraphs.org | 345 | 1,006 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.875 | 3 | CC-MAIN-2017-51 | longest | en | 0.135509 |
http://mathhelpforum.com/advanced-statistics/277970-proving-independence.html | 1,571,190,849,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986661296.12/warc/CC-MAIN-20191016014439-20191016041939-00018.warc.gz | 128,276,966 | 11,570 | 1. ## Proving independence
Hi
I was wondering, how do check whether the following is independent or not $\displaystyle (A \cup B) \cap (A' \cap B')$?
Thanks
2. ## Re: Proving independence
I don't know what you mean by sets being independent. However
$(A \cup B) \cap (A^\prime \cap B^\prime) = (A \cup B) \cap (A \cup B) = A \cup B$
3. ## Re: Proving independence
Originally Posted by cooltowns
I was wondering, how do check whether the following is independent or not $\displaystyle (A \cup B) \cap (A' \cap B')$?
As stated, this is a meaningless question.
It is true that $(A \cup B)~ \&~ (A^\prime \cap B^\prime)$ are mutually exclusive.
Originally Posted by romsek
I don't know what you mean by sets being independent. However
$(A \cup B) \cap (A^\prime \cap B^\prime) = (A \cup B) \cap (A \cup B) = A \cup B$
Assuming that the $^\prime$ is a notation for complement, the above is misstaken.
\begin{align*}(A \cup B) \cap (A^\prime \cap B^\prime)&=(A \cup B) \cap (A\cup B)^\prime \\&=\emptyset \end{align*}
4. ## Re: Proving independence
Thank you for your replies guys
Basically, I wanted to investigate whether $\displaystyle (A' \cap B') \& (A \cup B)$ are independent and or mutually exclusive.
I understand that for them to be mutually exclusive, the following condition must be satifised: $\displaystyle (A' \cap B') \cap (A \cup B) = \emptyset$
however for them to be independent, it must satisfy $\displaystyle P(A' \cap B') \cup P(A \cup B) =P(A' \cap B')*P(A \cup B)$
You guys mention, it's mutually exclusive, but how can I justify that.
5. ## Re: Proving independence
Originally Posted by cooltowns
Thank you for your replies guys
Basically, I wanted to investigate whether $\displaystyle (A' \cap B') \& (A \cup B)$ are independent and or mutually exclusive.
I understand that for them to be mutually exclusive, the following condition must be satifised: $\displaystyle (A' \cap B') \cap (A \cup B) = \emptyset$
however for them to be independent, it must satisfy $\displaystyle P(A' \cap B') \cup P(A \cup B) =P(A' \cap B')*P(A \cup B)$
You guys mention, it's mutually exclusive, but how can I justify that.
Plato showed you how to see that they are mutually exclusive (my post was in error).
Just apply DeMorgan's Law to $(A^\prime \cap B^\prime)$
6. ## Re: Proving independence
thanks guys, got it, appreciate the help
7. ## Re: Proving independence
Originally Posted by cooltowns
Thank you for your replies guys
Basically, I wanted to investigate whether $\displaystyle (A' \cap B') \& (A \cup B)$ are independent and or mutually exclusive.
I understand that for them to be mutually exclusive, the following condition must be satifised: $\displaystyle (A' \cap B') \cap (A \cup B) = \emptyset$
however for them to be independent, it must satisfy $\displaystyle P(A' \cap B') \cup P(A \cup B) =P(A' \cap B')*P(A \cup B)$
You guys mention, it's mutually exclusive, but how can I justify that.
That is what I posted. Two sets are mutually exclusive if their intersection is null.
You seem to be asking or thinking of events in a probability space? However, you gave no information whatever about the probability sets and/or probability measure.
The point is that ordinary sets can be said to be mutually exclusive but the concept of independence is not applied in general. In vector space certain vectors can be independent of one another.
Now suppose we have a probability space. The subsets of the space are the events.
Events $A~\&~B$ are mutually exclusive provided: $A\cap B=\emptyset,~\mathcal{P}(A\cup B)=\mathcal{P}(A)+\mathcal{P}(B)~\&~\mathcal{P}(A \cap B)=0$
Events $A~\&~B$ are independent provided: $\mathcal{P}(A\cap B)=\mathcal{P}(A)\cdot\mathcal{P}( B)$ | 1,055 | 3,717 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4 | 4 | CC-MAIN-2019-43 | latest | en | 0.842468 |
https://domyhomework123.com/blog/how-to-write-hypothesis/ | 1,669,539,118,000,000,000 | text/html | crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00524.warc.gz | 241,142,745 | 48,627 | # Learn How To Write Hypothesis In Minutes
If you are reading this, it means you are looking to learn how to write a hypothesis the right way. As you probably already know, hypotheses are a very important part of research papers. Without them, you would not be able to write these academic papers. This is why it is so important to learn how to state hypothesis.
In this blog post, we will show you what a hypothesis is and why it is important. In addition, we will talk about dependent and independent variables, as well as the various types of hypotheses. Finally, you will learn about falsifiability and get some tips and tricks from our expert academic writers. Finally, we will show you plenty of good examples of hypotheses, as well as a few examples of bad hypotheses. Want to write hypothesis sentences in just 10 minutes? Let’s get started!
So, how do you write a hypothesis? The good news is that it is not as difficult as you think. However, before we get to the technical part and the examples, let’s make sure you understand exactly what a hypothesis is and why it is important.
When writing a research hypothesis, you need to keep in mind that it is a proposed explanation that you make based on very limited evidence. You are basically making an educated guess regarding the results of your research or observations. Let’s consider that you are looking to write a research paper about ways to improve a student’s grades. Here is one valid hypothesis you can make right from the start: A student’s study efficiency and grades improve when the student gets 8 hours of sleep every night and takes a 10 minute break every hour of study.
Keep in mind that a hypothesis can be true or false. If your observations prove your hypothesis to be false, it does not mean that the hypothesis is bad. So don’t try to write your hypotheses in a way that always makes them true.
Bottom line, the hypothesis in a research paper should be a specific and testable prediction. Yes, because it is a prediction, you need to include at least two variables: the dependent and the independent variable.
### Understanding Independent And Dependent Variables
If you want to learn how to start a hypothesis sentence with ease, you need to know a few things about the structure of the hypothesis. And this takes us to the dependent and independent variables. You won’t be able to learn how to write a scientific hypothesis without learning what each of these variables is.
The independent variable is basically the cause. This value is independent of all other variables you are studying or observing. Here is an example of a few independent variables: students get 8 hours a sleep per night, students eat 3 meals a day, students take a 10 minute break every hour of studying.
The dependent variable is the effect. The effect is clearly dependent on the cause. For example: Students’ grades increase when [insert any of the independent variables above].
Of course, a hypothesis can have more than one independent variable and more than one dependent variable. Here is an example that should clear things out for you: Students improve their grades and study faster when they sleep 8 hours a day and take a 10 minute break every hour of studying.
To write a hypothesis correctly, you need to learn about falsifiability. This is an extremely important part of a valid hypothesis.
Basically, falsifiability is the ability to test the thesis scientifically by proving it to be false. In other words, the claim should be possible to be proven false. This does not mean that the claim is false though.
Let’s take an example: All yellow watermelons are sweet. This is a falsifiable claim because we can falsify the hypothesis simply by tasting a yellow watermelon that is sour. We hope everything is clear now.
### Types Of Hypotheses
Now, to write a hypothesis the right way, you also need to know about the various types of hypotheses. In fact, knowing the various types of hypotheses can be very important for students who write complex research papers. So, to help you out, we have organized all the types in an easy to understand list:
• Simple hypothesis. But what is simple hypothesis, you ask. Well, it’s the easiest to understand hypothesis. It predicts a relationship between a single independent variable and a single dependent variable. Let’s take an example: Eating sweets every day will cause your blood sugar to spike.
• Null hypothesis. To test this hypothesis, you need an alternative hypothesis. For example, a null hypothesis can be something like “Studying efficiency suffers no change whether the student sleeps 6 hours or 9 hours.”
• Alternative hypothesis. This is the claim that contradicts the null hypothesis. For instance, you can write an alternative hypothesis like “Studying efficiency improves when students get 8 hours of sleep, as opposed to 6 hours of sleep.”
• Complex hypothesis. This predicts the relationship between two or more independent variables and two or more dependent variables. For example: “Studying efficiency and grades improve when students sleep 8 hours every night and eat 3 meals a day.”
• Logical hypothesis. This statement proposes the most plausible explanation with minimal evidence. Let’s take an example: Studying efficiency improves when students on Europa’s Moon sleep 8 hours per night instead of just 6 hours per night.
• Statistical hypothesis. This statement uses statistical information to examine something. For example: 50% of Europe’s population has received the second dose of the Covid-19 vaccine.
• Empirical hypothesis. This is a hypothesis you can use when you are experimenting or are observing something (and the hypothesis can change). Here is an example: Men taking vitamin C supplements are 20% less likely to get infected by Covid-19.
### Hypothesis Sentence Format
Don’t know how to write a testable hypothesis? Don’t worry about it too much; our experts will explain everything to you right now. We will even show you some good and some bad examples of hypotheses a bit later. For now, let’s talk about the most common hypothesis sentence format. It’s very important if you want to learn how to write a hypothesis statement quickly. Here is everything you need to know about it:
1. Most hypotheses follow the default format: If [this happens] then [this will happen]
2. If you want to take independent and dependent variables into consideration, then the default format should be: If [this changes in an independent variable] then [this will change in one or more dependent variables].
3. Take a look at a simplistic example to understand the format better: Drivers who answer their texts while driving have a braking distance 2 meters longer than drivers who do not answer their texts while driving.
Now that you know how to format a hypothesis, it’s time to take a look at some good examples. These should help you write an excellent one in no time.
### Good Examples Of Hypothesis Sentences (And Some Bad Ones)
Still don’t know how to form a hypothesis? If you aren’t sure whether or not your hypothesis is OK, you should take a look at some good examples of hypothesis.
To make sure there is no confusion with how you formulate a hypothesis, here are some bad examples to learn from:
• “Redesigning the Checkout page using Design A will increase sales.” This is bad because it lacks specificity. A better choice would be: “Redesigning the Checkout page using Design A will increase sales by X% for users aged Y to Z.”
• “If our universe starts to shrinks, then the multiverse will start shrinking as well.” This is a bad hypothesis because of several issues, the most important being the inability to test it and prove it right or wrong. You should change your topic if you ever find yourself trying to write such a hypothesis.
• “Smoking cigarettes daily leads to cancer.” Again, this hypothesis lacks specificity. How many cigarettes? What kind of cancer? A better choice would be: “Smoking more than 5 cigarettes a day leads to lung cancer.”
• “The average time spent by people on Facebook is 2 hours per day.” This is a null hypothesis. It’s something that is already established and should not be tested again.
• “If the church walls were higher, the angels would have larger wings.” This is known as a non-basic statement. The problem with this hypothesis is that it is not falsifiable. It cannot be proven wrong. The size of the wings is not the problem. The problem lies in our lack of means to identify angels.
### Tips On How To Make A Hypothesis Great
Do you want to know how to make a hypothesis great? After all, this is extremely important when writing any kind of academic paper. Your hypotheses are the building blocks of your essay. They are basically what you are trying to explain with your research, experimentation and analysis. To show you how to create a hypothesis the right way, we have compiled a list of the best tips, tricks and advice you can use to your advantage:
• You should always define the variables in your hypothesis
• You need to clearly state the problem you aim to solve in your paper. This is basically the focus of your research.
• Whatever you do, refrain from writing the hypothesis as a question. You want to demonstrate something, not ask your readers for their opinions.
• It’s a great idea to try to write the hypothesis as an if-then statement.
• Keep the hypothesis clear and to the point. Remember, one sentence should usually be enough.
• The hypothesis should be empirically testable.
• The composition of your hypothesis should be based on some existing knowledge. You will have to make an educated guess.
• Always ask yourself: What problem do you want to solve?
• Make sure your hypothesis is measurable (it can be proven right or wrong by research or an experiment)
### Get Some Homework Help Today
We know, even if you conduct ample research or set up a good experiment, you may sometimes get a grade lower than what you had expected. This is because students are not scientists (yet). They are not academic writers either. To make sure you get a top grade you need some homework help from our reliable professionals.
We can help any high school, college or university student with his or her homework for any class with excellent hypotheses. Just say “help me with my science homework” and we’ll get on it right away. You can get cheap academic assignment writing help from our experts in minutes. Get all the assistance you need from our writers and we’ll make sure you get a top notch research paper in no time. Your professor will love it!
All our writers are not only native English speakers, but also highly experienced academic writers. And yes, they all have at least one Master’s or PhD degree. They know all the methods, as well as the best format, for your hypotheses. Get in touch with our experts and we’ll make sure your teacher gives you at least an A on your next research paper!
Let's stand with the heroes
As Putin continues killing civilians, bombing kindergartens, and threatening WWIII, Ukraine fights for the world's peaceful future.
Donate Directly to Ukraine | 2,274 | 11,142 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.625 | 3 | CC-MAIN-2022-49 | longest | en | 0.963743 |
http://encyclopedia2.thefreedictionary.com/macrophage+colony+stimulating+factor | 1,505,906,862,000,000,000 | text/html | crawl-data/CC-MAIN-2017-39/segments/1505818687255.13/warc/CC-MAIN-20170920104615-20170920124615-00302.warc.gz | 110,080,092 | 11,900 | # factor
(redirected from macrophage colony stimulating factor)
Also found in: Dictionary, Thesaurus, Medical, Legal, Financial.
Related to macrophage colony stimulating factor: M-CSF
## factor,
in arithmetic, any number that divides a given number evenly, i.e., without any remainder. The factors of 12 are 1, 2, 3, 4, 6, and 12. Similarly in algebra, any one of the algebraic expressions multiplied by another to form a product is a factor of that product, e.g., a+b and ab are factors of a2b 2, since (a+b)(ab)=a2b2. In general, if r is a rootroot,
in mathematics, number or quantity r for which an equation f(r)=0 holds true, where f is some function. If f is a polynomial, r is called a root of f; for example, r=3 and r
of a polynomialpolynomial,
mathematical expression which is a finite sum, each term being a constant times a product of one or more variables raised to powers. With only one variable the general form of a polynomial is a0xn+a1x
equation f(x)=0, then (xr) is a factor of the polynomial f(x).
## factor
[′fak·tər]
(mathematics)
For an integer n, any integer which gives n when multiplied by another integer.
For a polynomial p, any polynomial which gives p when multiplied by another polynomial.
For a graph G, a spanning subgraph of G with at least one edge.
(statistics)
A quantity or a variable being studied in an experiment as a possible cause of variation.
## factor
1. Maths
a. one of two or more integers or polynomials whose product is a given integer or polynomial
b. an integer or polynomial that can be exactly divided into another integer or polynomial
2. Med any of several substances that participate in the clotting of blood
3. Law, Commerce a person who acts on another's behalf, esp one who transacts business for another
4. former name for a gene
5. Commercial law a person to whom goods are consigned for sale and who is paid a factorage
6. (in Scotland) the manager of an estate
## factor
A quantity which is multiplied by another quantity. | 500 | 1,994 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.1875 | 3 | CC-MAIN-2017-39 | latest | en | 0.89581 |
https://math.stackexchange.com/questions/3431542/finding-the-image-in-a-computable-way | 1,653,269,469,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00328.warc.gz | 440,650,090 | 66,298 | # Finding the image in a computable way
Suppose we have a partial computable injective function $$f: \mathbb N\to \mathbb N$$ whose image is computable. I'm trying to show that then the image of every computable set is computable, and that this is false for non-injective functions (by a counterexample).
Suppose $$X$$ is computable and one wants to check if $$y$$ lies in $$f(X)$$. The problem asks to construct an algorithm of how to do this. What I can think of is this: the algorithm accepts $$y$$. Since $$im(f)$$ is computable, we one can tell whether $$y\in im(f)$$. If this is false, then $$y\notin f(X)$$ either. Suppose $$y\in im(f)$$. Then for some $$x\in dom(f)$$, $$f(x)=y$$. Now it would be natural to find such an $$x$$ (which would be unique due to injectivity). (And then one would be able to tell whether $$x\in X$$ or not.) But how to find an algorithm/program doing this? The domain of $$f$$ may be infinite, so it may be not possible (in finite time) to check for every $$x\in dom(f)$$ if $$f(x)=y$$. Also I don't see how the computability of $$f$$ can be used here.
As for the counterexample, once I understand how to find $$x$$ with $$f(x)=y$$, the most trivial counterexample should work, like $$\{1,2\}\to \mathbb N, 1,2\mapsto 1$$. Is that right?
• Do you want the image or the preimage of a computable set $X$? The question says preimage, but judging by your work it looks like you want the image. Nov 12, 2019 at 3:01
• @HallaSurvivor There was a typo in the title. Nov 17, 2019 at 17:32
For the first problem, you have the right idea. In fact in your question you have already given an informal description of an algorithm to compute $$f(X)$$.
Here's the part you seem confused by: you have some $$y$$ which you know to be in the range of $$f$$ and you want to find $$x$$ such that $$f(x) = y$$. Yes, there are infinitely many possible $$x$$'s, but the point is that if you start searching for one, eventually you will be successful because you know that such an $$x$$ exists. The one tricky part is that since $$f$$ is a partial function, you may run into some candidate $$x$$'s where $$f(x)$$ does not converge. The answer is just to try them out in parallel.
Here's what I mean: first check if $$f(0)$$ converges in one step of computation. Then check if either $$f(0)$$ or $$f(1)$$ converge in two steps of computation. Then check if any of $$f(0)$$, $$f(1)$$, and $$f(2)$$ in three steps. Etc. Eventually you will see some $$f(x)$$ converge and equal $$y$$ then you can stop the search. Once again, the search is guaranteed to stop after a finite number of steps because $$y$$ is in the range of $$f$$.
The ability to perform this kind of unbounded search is what makes computation stronger than e.g. primitive recursion.
For your second problem, nothing with finite range will work because the image of any set will be finite, and therefore computable.
One way to find a counterexample is to recall that every r.e. set is the image of a partial computable function. Start with a partial computable function whose range is a noncomputable r.e. set. Then modify this function in some way so that its range becomes computable but knowing its image on some particular set would tell you the original r.e. set. Or you can just do it by some kind of direct diagonalization.
• What do you mean by "in one/two step(s) of computation"? The machinery I know for checking if $f(0)$ converges is Kleene's T predicate. This is done by checking whether there is an $x$ such that $T(f,0,x)$ holds. If $f(0)$ diverges, than the process of finding this $x$ runs forever. So how can we check if $f(0)$ converges in finite time if in the event of divergence of $f(0)$ the program will run forever? Nov 17, 2019 at 17:36
• In terms of the Kleene T predicate, "checking if $f(0)$ converges in one step of computation" means "checking if $T(f, 0, 1)$ holds." The point is that you don't first try to look for $x$ such that $T(f, 0, x)$ holds before doing anything else: you try the first few possible $x$'s and then move on to $f(1)$ then come back to $f(0)$ and try a few more possible $x$'s and so on. Nov 18, 2019 at 16:40 | 1,143 | 4,149 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.546875 | 4 | CC-MAIN-2022-21 | latest | en | 0.926337 |
http://quizlet.com/4411245/geometry-vocabulary-list-2-flash-cards/ | 1,386,442,766,000,000,000 | text/html | crawl-data/CC-MAIN-2013-48/segments/1386163055701/warc/CC-MAIN-20131204131735-00040-ip-10-33-133-15.ec2.internal.warc.gz | 211,484,507 | 17,008 | # Geometry Vocabulary List 2
## 14 terms
### justify
show or prove to be right
### conjecture
an opinion or conclusion formed on the basis of incomplete information
### triangle
a plane figure with three sides and three angles
### obtuse
4.greater than 90° and less than 180
### acute
greater than 0° and less than 90°
### right angle
an angle that measure 90°
### isosceles
a triangle with two congruent sides
### equilateral
a triangle with all sides congruent
### scalene
a triangle with no congruent sides
### protractor
an instrument for measuring angles, typically in the form of a flat semicircle marked with degrees along the curved edge
### supplementary
either of two angles whose sum is 180°
### complementary
either of two angles whose sum is 90° | 190 | 783 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2013-48 | longest | en | 0.853537 |
https://ccssmathanswers.com/into-math-grade-5-module-2-lesson-2-answer-key/ | 1,725,928,180,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00670.warc.gz | 133,407,319 | 60,088 | # Into Math Grade 5 Module 2 Lesson 2 Answer Key Represent Division with 2-Digit Divisors
We included HMH Into Math Grade 5 Answer Key PDF Module 2 Lesson 2 Represent Division with 2-Digit Divisors to make students experts in learning maths.
## HMH Into Math Grade 5 Module 2 Lesson 2 Answer Key Represent Division with 2-Digit Divisors
I Can find the quotient of numbers up to four digits divided by 2-digit divisors using visual models.
You are a game designer designing a treasure hunt game similar to the one shown.
The game board is a grid with a treasure chest located behind one of the squares.
The rectangular grid will have 96 squares. If the length of the grid is greater than 10 squares, how wide can the grid be?
The game board grid can be ___ squares wide.
Given,
The rectangular grid will have 96 squares.
If the length of the grid is greater than 10 squares.
Area of rectangle = length x width
96 = 10 x width
width = 96/10 = 9.6
Therefore, The game board grid can be 9.6 squares wide.
Turn and Talk Compare your game board grid to the game board grids of other classmates. How do they compare?
Build Understanding
1. You are designing another game in which the player needs to arrange flower pots in 12 equal-sized groups. There are 156 flowerpots. How many flowerpots are in each group?
A. What multiplication and division equations can you write to model the number of flowerpots, p, in each group?
______________________
B. How can you break apart 156 into a sum of multiples of 12? Use your multiples to make an area model.
______________________
By area model,
area = length x width
156 = 12 x width
width = 156 ÷ 12
Therefore, width = 13
C. What do the number of rows and the number of columns in the area model represent?
______________________
______________________
______________________
Rows represent length
D. How is the dividend represented in the area model?
______________________
______________________
The dividend is the total area.
E. How is the quotient represented in the area model? What is the quotient?
______________________
______________________
quotient represents width
F. How many flowerpots are in each group?
______________________
13 flowerpots are in each group.
Turn and Talk How can you use your equations to explain how to find the number of flowerpots in each group?
2. There are 675 gold bars hidden throughout the game. The same number of bars are placed in each of 25 treasure chests. How many bars are in each treasure chest?
A. What division equation models this situation?
________________
675 ÷ 25
B. Break apart the dividend into multiples of the divisor and use the area model to find the quotient.
___________________
We know.
area = length x width
675 = 25 x width
width = 675 ÷ 25 = 25
C. Explain what the smaller rectangles represent in terms of the division equation.
___________________
___________________
___________________
Answer: smaller rectangles represent length and width.
D. How many gold bars are in each treasure chest?
___________________
Answer: The number of gold bars in each treasure chest are 625.
Turn and Talk Compare your area model to the area models of other classmates. What can you conclude?
Build Understanding
3. Archaeologists find a sunken chest of 3,420 bronze coins. If the coins are to be shared equally among 20 museums, how many coins does each museum get?
A. How can you model this situation using a division equation?
_________________________
B. Draw an area model to find the quotient. How can using a greater multiple of the divisor help you draw the area model?
_________________________
C. How many coins does each museum get? How does your area model show this?
_________________________
Area = length x width
3420 = 20 x width
width = 3420 ÷ 20 = 171
Therefore, each museum gets 171 coins.
Check Understanding Math Board
Question 1.
There are 76 bottles of paint in the art studio. The art teacher divides them equally among all 19 students in the class. How many bottles of paint does each student get?
Given,
There are 76 bottles of paint in the art studio.
The art teacher divides them equally among all 19 students in the class.
So, 76 ÷ 19 = 4
Therefore, Each student get 4 bottles of paint.
Question 2.
Model with Mathematics Lucinda has 81 tulip bulbs. She plants 3 bulbs in each row. How many rows of tulip bulbs does she plant? Write a division equation to model this situation.
Given,
Lucinda has 81 tulip bulbs.
She plants 3 bulbs in each row.
So, 81 ÷ 3 = 27
Therefore, she plants 3 rows of tulip bulbs.
Question 3.
Model with Mathematics Write a division equation that is related to the application of the Distributive Property shown.
12 × (100 + 20 + 4) = (12 × 100) + (12 × 20) + (12 × 4)
= 1,200 + 240 + 48
= 1,488
Question 4.
Miguel volunteers at the library. He needs to arrange 576 books on shelves for a book sale. There are 16 empty shelves. Miguel puts an equal number of books on each shelf.
How many books does he put on each shelf? _______
Given,
Miguel volunteers at the library.
He needs to arrange 576 books on shelves for a book sale.
There are 16 empty shelves.
Miguel puts an equal number of books on each shelf.
So, 576 ÷ 16 = 36
Therefore, she puts 36 books on each shelf.
Question 5.
Model with Mathematics Multiplication of two numbers results in the partial products 200, 10, and 3. One of the factors is 30. Write a multiplication equation and a division equation to model this situation. Show your thinking.
Given,
Multiplication of two numbers results in the partial products 200, 10, and 3.
One of the factors is 30.
From the given data,
let the other number be a
So, 30 x a = 200 x 10 x 3
30 x a = 6000
a = 6000 ÷ 30 = 200
Therefore, other factor is 200.
Question 6.
STEM There are 455 grams of dissolved salt in a 13-kilogram sample of seawater. How many grams of dissolved salt are in 1 kilogram of seawater? _______
Given,
There are 455 grams of dissolved salt in a 13-kilogram sample of seawater.
So, 455 ÷ 13 = 35
Therefore, dissolved salt in 1 kilogram of seawater is 35 grams.
Use Tools Use an area model to represent the division equation and find the quotient.
Question 7.
925 ÷ 25 = c
c = 925 ÷ 25
Area = 925
length = 25
we know area = length x width
width = area / length
= 925 / 25 = 37
Therefore, 925 ÷ 25 = c = 37.
Question 8.
q = 2,750 ÷ 10
q = 2750 ÷ 10
Area = 2750
length = 10
we know area = length x width
width = area / length
= 2750 / 10 = 275
Therefore, 2750 ÷ 10 = q = 275.
Question 9.
t = 1,134 ÷ 54
t = 1134 ÷ 54
Area = 1134
length = 54
we know area = length x width
width = area / length
= 1134 / 54 = 21
Therefore, 1134 ÷ 54 = t = 21.
Question 10.
672 ÷ 24 = g
g = 672 ÷ 24
Area = 672
length = 24
we know area = length x width
width = area / length
= 672 / 24 = 28
Therefore, 672 ÷ 24 = g = 28.
Question 11.
Model with Mathematics The area of the bottom of a swimming pool is 1,250 square meters. The length of the pool is 50 meters. What is the width of the swimming pool? Write a division equation to model this situation.
Given,
The area of the bottom of a swimming pool is 1,250 square meters.
The length of the pool is 50 meters.
We know, area = length x width
1250 = 50 x width
width = 1250 ÷ 50 = 25
Therefore, width of the swimming pool is 25 square meters.
I’m in a Learning Mindset!
What part of representing division of 2-digit divisors am I comfortable solving on my own?
_____________________________
Scroll to Top | 1,948 | 7,464 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.84375 | 5 | CC-MAIN-2024-38 | latest | en | 0.865376 |
http://www.autoitscript.com/forum/topic/86040-standard-deviation-calculator/?p=651197 | 1,429,604,246,000,000,000 | text/html | crawl-data/CC-MAIN-2015-18/segments/1429246641054.14/warc/CC-MAIN-20150417045721-00177-ip-10-235-10-82.ec2.internal.warc.gz | 344,318,754 | 28,413 | # Standard Deviation Calculator
27 replies to this topic
### #1 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 15 December 2008 - 03:49 AM
I'm pretty big into statistical analysis and have been working on my own statistical site for college football:
NCAA Stat Pages
So, I like working with Standard Deviation and created a quick calculator. It will give you the number of arguments (numbers) you are comparing, the mean, and the standard deviation up to 5 decimal places.
AutoIt
```#cs
===============================================================================
|| Program Name: Standard Deviation Calculator
|| Description: Program for sports predictions
|| Version: 2.0
|| Author(s): Ealric/Drabin
===============================================================================
#ce
#include <Array.au3>
#include <Constants.au3>
#include <EditConstants.au3>
#include <File.au3>
#include <GUIConstants.au3>
#include <GUIConstantsEx.au3>
#include <GuiListView.au3>
#include <Misc.au3>
#include <WinAPI.au3>
; Create our main gui interface
; Define all of our Globals and Arrays
Global \$author = "Ealric/Drabin", \$version = "1.0.0"
Global \$backgrndcolor = 0x0FFCC66, \$commandcolor = 0x066ffff, \$optioncolor = 0x0ffdd88, \$scriptcolor = 0x0ffddff, \$buttoncolor = 0x0660000, \$hctrlcolor = 0x000000, \$_font = 0x0FFFFFF
Global \$parent = GUICreate("Standard Deviation Calculator, Version: " & \$version, 383, 680)
GUISetBkColor(\$backgrndcolor)
; FILE MENU START
; POSITION -1,0
GUICtrlSetState(\$filenewitem, \$GUI_DEFBUTTON)
GUICtrlSetState(\$fileopenitem, \$GUI_DISABLE)
\$fileitem2 = GUICtrlCreateMenuItem("Test File Item 2", \$filemenu)
; FILE MENU END
; HELP MENU START
; POSITION -1,3
; HELP MENU END
; CREATE GUI BUTTONS FOR MACROS
; EACH BUTTON CAN THEN BE REPLACED BY A NEW ONE DEFINED FROM USER INPUT
Global \$boxonehctrl = _GUICtrlCreateGroupBox("Input", 10, 17, 3, 361, 182)
GUICtrlSetColor(\$boxonehctrl, \$hctrlcolor)
Global \$labelinput = GUICtrlCreateLabel("Enter numbers separated by commas" & @CRLF & "E.g: 10,20,30,40,50",30,35,300,40)
Global \$inputlabel = GUICtrlCreateInput("", 40, 85, 300, 60)
GUICtrlSetBkColor(\$inputlabel, \$_font)
Global \$calculatebtn = GUICtrlCreateButton("Calculate",140,165,100,20)
Global \$boxtwohctrl = _GUICtrlCreateGroupBox("Output", 10, 232, 3, 361, 182)
GUICtrlSetColor(\$boxtwohctrl, \$hctrlcolor)
GUICtrlCreateLabel("Total Numbers:",30,250,100,20)
GUICtrlCreateLabel("Mean (Average):",30,270,100,20)
GUICtrlCreateLabel("Standard Deviation:",30,290,100,20)
Global \$numlabel = GUICtrlCreateLabel("",170,250,100,20)
Global \$meanlabel = GUICtrlCreateLabel("",170,270,100,20)
Global \$deviatelabel = GUICtrlCreateLabel("",170,290,100,20)
GUISetState()
#cs
Main GUI Script Begins below.
#ce
While 1
\$msg = GUIGetMsg()
Select
Case \$msg = \$GUI_EVENT_CLOSE Or \$msg = \$exititem
Exit
Case \$msg = \$calculatebtn
_arrayfunc()
EndSelect
WEnd
GUIDelete()
#cs
Main GUI Script Ends.
#ce
Func _arrayfunc()
GUICtrlSetData(\$numlabel,'')
GUICtrlSetData(\$meanlabel,'')
GUICtrlSetData(\$deviatelabel,'')
Local \$i,\$x,\$number,\$mean,\$deviation
If UBound(\$numarray) <= 1 Then ; Provide an error on the number of arguments(numbers separated by commas) present.
MsgBox(48,"Error: Number of Arguments","The number of arguments must be greater than one and none of them can be empty.")
Else
\$mean = _Mean(\$numarray)
\$number = UBound(\$numarray)
\$deviate = _StdDev(\$numarray, 5)
GuiCtrlSetdata(\$numlabel, \$number)
GUICtrlSetData(\$meanlabel, \$mean)
GUICtrlSetData(\$deviatelabel, \$deviate)
EndIf
EndFunc
; || _GUICtrlCreateEdge() function (by GaryFrost)
Func _GUICtrlCreateEdge(\$i_x, \$i_y, \$i_width, \$i_height, \$v_color)
GUICtrlCreateGraphic(\$i_x, \$i_y, \$i_width, \$i_height, 0x1000)
GUICtrlSetBkColor(-1, \$v_color)
EndFunc ;==>_GUICtrlCreateEdge
; || _GUICtrlCreateGroupBox() function (by GaryFrost)
; || Usage _GUICtrlCreateGroupBox(Left, Top, LineWeight, Width, Height, Color)
Func _GUICtrlCreateGroupBox(\$sText, \$i_x, \$i_y, \$i_weight, \$i_width, \$i_height, \$v_color = -1)
Local \$hdc = _WinAPI_GetDC(0)
Local \$tSize = _WinAPI_GetTextExtentPoint32(\$hdc, \$sText)
If (\$v_color == -1) Then \$v_color = 0x000000
; left vertical line
_GUICtrlCreateEdge(\$i_x, \$i_y, \$i_weight, \$i_height, \$v_color)
Local \$h_ctlid = GUICtrlCreateLabel(\$sText, \$i_x + 4, \$i_y - (DllStructGetData(\$tSize, "Y") / 2))
GUICtrlSetBkColor(-1, \$GUI_BKCOLOR_TRANSPARENT)
; top horizontal line
_GUICtrlCreateEdge(\$i_x + DllStructGetData(\$tSize, "X") - 4, \$i_y, \$i_width - DllStructGetData(\$tSize, "X") + 4, \$i_weight, \$v_color)
; right vertical line
_GUICtrlCreateEdge(\$i_width + \$i_x - 1, \$i_y, \$i_weight, \$i_height, \$v_color)
; bottom horizontal line
_GUICtrlCreateEdge(\$i_x, \$i_height + \$i_y - 1, \$i_width + \$i_weight - 1, \$i_weight, \$v_color)
Return \$h_ctlid
EndFunc ;==>_GUICtrlCreateGroupBox
#cs
INCLUDE IS BELOW
#ce
#include-Once
; #FUNCTION# ;===============================================================================
;
; Name...........: _StdDev
; Description ...: Returns the standard deviation between all numbers stored in an array
; Syntax.........: _StdDev(\$anArray, \$iStdFloat)
; Parameters ....: \$anArray - An array containing 2 or more numbers
; \$iStdFloat - (Optional) The number of decimal places to round for STD
; \$iType - (Optional) Decides the type of Standard Deviation to use:
; |1 - Method One (Standard method using Mean)
; |2 - Method Two (Non-Standard method using Squares)
; Return values .: Success - Standard Deviation between multiple numbers
; Failure - Returns empty and Sets @Error:
; |0 - No error.
; |1 - Invalid \$anArray (not an array)
; |2 - Invalid \$anArray (contains less than 2 numbers)
; |3 - Invalid \$iStdFloat (cannot be negative)
; |4 - Invalid \$iStdFloat (not an integer)
; Author ........: Ealric
; Modified.......:
; Remarks .......:
; Related .......: _StdDev
; Example .......; Yes;
;
;==========================================================================================
Func _StdDev(ByRef \$anArray, \$iStdFloat = 0, \$iType = 1)
If Not IsArray(\$anArray) Then Return SetError(1, 0, "") ; Set Error if not an array
If UBound(\$anArray) <= 1 Then Return SetError(2, 0, "") ; Set Error if array contains less than 2 numbers
If \$iStdFloat <= -1 Then Return SetError(3, 0, "") ; Set Error if argument is negative
If Not IsInt(\$iStdFloat) Then Return SetError(4, 0, "") ; Set Error if argument is not an integer
Local \$n = 0, \$nSum = 0
Local \$iMean = _Mean(\$anArray)
Local \$iCount = _StatsCount(\$anArray)
Switch \$iType
Case 1
For \$i = 0 To \$iCount - 1
\$n += (\$anArray[\$i] - \$iMean)^2
Next
If (\$iStdFloat = 0) Then
Local \$nStdDev = Sqrt(\$n / (\$iCount-1))
Else
Local \$nStdDev = Round(Sqrt(\$n / (\$iCount-1)), \$iStdFloat)
EndIf
Return \$nStdDev
Case 2
For \$i = 0 To \$iCount - 1
\$n = \$n + \$anArray[\$i]
\$nSum = \$nSum + (\$anArray[\$i] * \$anArray[\$i])
Next
If (\$iStdFloat = 0) Then
Local \$nStdDev = Sqrt((\$nSum - (\$n * \$n) / \$iCount) / (\$iCount - 1))
Else
Local \$nStdDev = Round(Sqrt((\$nSum - (\$n * \$n) / \$iCount) / (\$iCount - 1)), \$iStdFloat)
EndIf
Return \$nStdDev
EndSwitch
EndFunc ;==>_StdDev
; #FUNCTION#;===============================================================================
;
; Name...........: _Mean
; Description ...: Returns the mean of a data set, choice of Pythagorean means
; Syntax.........: _Mean(Const ByRef \$anArray[, \$iStart = 0[, \$iEnd = 0[, \$iType = 1]]])
; Parameters ....: \$anArray - 1D Array containing data set
; \$iStart - Starting index for calculation inclusion
; \$iEnd - Last index for calculation inclusion
; \$iType - One of the following:
; |1 - Arithmetic mean (default)
; |2 - Geometric mean
; |3 - Harmonic mean
; Return values .: Success - Mean of data set
; Failure - Returns "" and Sets @Error:
; |0 - No error.
; |1 - \$anArray is not an array or is multidimensional
; |2 - Invalid mean type
; |3 - Invalid boundaries
; Author ........: Andybiochem
; Modified.......:
; Remarks .......:
; Related .......:
; Example .......;
;
;;==========================================================================================
Func _Mean(Const ByRef \$anArray, \$iStart = 0, \$iEnd = 0, \$iType = 1)
If Not IsArray(\$anArray) Or UBound(\$anArray, 0) <> 1 Then Return SetError(1, 0, "")
If Not IsInt(\$iType) Or \$iType < 1 Or \$iType > 3 Then Return SetError(2, 0, "")
Local \$iUBound = UBound(\$anArray) - 1
If Not IsInt(\$iStart) Or Not IsInt(\$iEnd) Then Return SetError(3, 0, "")
If \$iEnd < 1 Or \$iEnd > \$iUBound Then \$iEnd = \$iUBound
If \$iStart < 0 Then \$iStart = 0
If \$iStart > \$iEnd Then Return SetError(3, 0, "")
Local \$nSum = 0, \$iN = (\$iEnd - (\$iStart - 1))
Switch \$iType
Case 1;Arithmetic mean
For \$i = \$iStart To \$iEnd
\$nSum += \$anArray[\$i]
Next
Return \$nSum / \$iN
Case 2;Geometric mean
For \$i = \$iStart To \$iEnd
\$nSum *= \$anArray[\$i]
If \$i = \$iStart Then \$nSum += \$anArray[\$i]
Next
Return \$nSum ^ (1 / \$iN)
Case 3;Harmonic mean
For \$i = \$iStart To \$iEnd
\$nSum += 1 / \$anArray[\$i]
Next
Return \$iN / \$nSum
EndSwitch
EndFunc ;==>_Mean
Func _StatsSum(ByRef \$a_Numbers)
If Not IsArray(\$a_Numbers) Then SetError(1, 0, "") ;If not an array of value(s) then error and return a blank string
Local \$i_Count = _StatsCount(\$a_Numbers)
Local \$n_SumX = 0
For \$i = 0 To \$i_Count - 1 Step 1
\$n_Sum += \$a_Numbers[\$i]
Next
Return \$n_Sum
EndFunc ;==>_StatsSum
Func _StatsCount(ByRef \$a_Numbers)
Return UBound(\$a_Numbers)
EndFunc ;==>_StatsCount
Func _StatsCp(\$n_USL, \$n_LSL, \$n_StdDev)
If Not IsNumber(\$n_USL) Then SetError(1, 0, "")
If Not IsNumber(\$n_LSL) Then SetError(2, 0, "")
If Not IsNumber(\$n_StdDev) Then SetError(3, 0, "")
Return (\$n_USL - \$n_LSL) / (6 * \$n_StdDev)
EndFunc ;==>_StatsCp
Func _StatsCpk(\$n_USL, \$n_LSL, \$n_StdDev, \$n_Mean)
If Not IsNumber(\$n_USL) Then SetError(1, 0, "")
If Not IsNumber(\$n_LSL) Then SetError(2, 0, "")
If Not IsNumber(\$n_StdDev) Then SetError(3, 0, "")
If Not IsNumber(\$n_Mean) Then SetError(4, 0, "")
Local \$n_AboveMean = (\$n_USL - \$n_Mean) / (3 * \$n_StdDev)
Local \$n_BelowMean = (\$n_Mean - \$n_LSL) / (3 * \$n_StdDev)
If \$n_AboveMean < \$n_BelowMean Then
Return \$n_AboveMean
Else
Return \$n_BelowMean
EndIf
EndFunc ;==>_StatsCpk```
Edited by Ealric, 02 March 2009 - 07:55 PM.
### #2 JSThePatriot
JSThePatriot
carpe diem. vita brevis.
• MVPs
• 3,692 posts
Posted 27 February 2009 - 03:12 PM
Ealric,
I have worked at a previous company whereby we had to calculate Standard Deviation, as well as Estimated Deviation. I like your work. We should make some statistical functions for AutoIt as a library. Now I am working at a company that may interest you and your stats program as we work with several stats programs for sports. Company is Sound & Video Creations, Inc. branding ClickEffects.
Let me know what you think.
Regards,
Jarvis
[color=#3333FF;]File-String Hash Plugin[/color][color=#FF0000;] Updated! 04-02-2008 Plugins have been discontinued. I just found out.[/color]
[color=#3333FF;]ComputerGetInfo UDF's[/color][color=#FF0000;] Updated! 11-23-2006[/color]
[color=#3333FF;]Vortex Revolutions[/color] Engineer / Inventor [color=#999999;](Web, Desktop, and Mobile Applications, Hardware Gizmos, Consulting, and more)[/color]
### #3 Andreik
Andreik
Bishop
• Active Members
• 2,604 posts
Posted 27 February 2009 - 03:27 PM
Nice but could be easy to calculate with STDEV or STDEVP with Excel.
When the words fail... music speaks
### #4 JSThePatriot
JSThePatriot
carpe diem. vita brevis.
• MVPs
• 3,692 posts
Posted 27 February 2009 - 04:22 PM
Nice but could be easy to calculate with STDEV or STDEVP with Excel.
While that is very true, that would also require whoever used the library to have Excel, and I don't like limiting peoples ability to use a script/library I write by having to have some proprietary program that cost a good sum of change. If it were a free program, then I probably wouldn't mind, but I also like keeping my scripts independent of most anything if possible.
Thanks,
Jarvis
[color=#3333FF;]File-String Hash Plugin[/color][color=#FF0000;] Updated! 04-02-2008 Plugins have been discontinued. I just found out.[/color]
[color=#3333FF;]ComputerGetInfo UDF's[/color][color=#FF0000;] Updated! 11-23-2006[/color]
[color=#3333FF;]Vortex Revolutions[/color] Engineer / Inventor [color=#999999;](Web, Desktop, and Mobile Applications, Hardware Gizmos, Consulting, and more)[/color]
### #5 Andreik
Andreik
Bishop
• Active Members
• 2,604 posts
Posted 27 February 2009 - 04:31 PM
While that is very true, that would also require whoever used the library to have Excel, and I don't like limiting peoples ability to use a script/library I write by having to have some proprietary program that cost a good sum of change. If it were a free program, then I probably wouldn't mind, but I also like keeping my scripts independent of most anything if possible.
Thanks,
Jarvis
I said that because I hope that he take a look in help for STDEV in Excel Help File (if he didn't do that - there is describe the algorithm) and maybe then can write a code more good ( or more bad ).
When the words fail... music speaks
### #6 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 27 February 2009 - 04:42 PM
Ealric,
I have worked at a previous company whereby we had to calculate Standard Deviation, as well as Estimated Deviation. I like your work. We should make some statistical functions for AutoIt as a library. Now I am working at a company that may interest you and your stats program as we work with several stats programs for sports. Company is Sound & Video Creations, Inc. branding ClickEffects.
Let me know what you think.
Regards,
Jarvis
Hi Jarvis,
Thanks for the interest in this. I have been working on an extremely diverse statistics program which I call GSA. I've compiled a Flash projector executable which you can download here:
http://ncaastatpages.com/GSA.exe
I tested the program in the ESPN bowl challenge and went 22 - 12 for the challenge with GSA. I saved all of the bowl comparisons for analysis. What I try to do is work towards a 5% P-value in all my statistical formulas. I'm right at around 28% P-value so I have a lot of work still to do. But, it did predict the national champion and you can play with it and let me know what you think. I would be happy to work on a statistics library if you want some help with it.
Thanks.
### #7 FireFox
FireFox
It slips through our fingers, like a fist full of sand.
• MVPs
• 5,394 posts
Posted 27 February 2009 - 04:57 PM
@Ealric
this is only some statistics calculations for me
your 'mean average' is a 'class center' for example :
```\$A = 20
\$B = 30
If \$A > \$B Then
\$C = \$B - \$A
\$D = \$C / 2
\$E = \$A - \$D
MsgBox(64, 'CC', \$E)
Else
\$C = \$A - \$B
\$D = \$C / 2
\$E = \$B - \$D
MsgBox(64, 'CC', \$E)
EndIf```
Cheers, FireFox.
Edited by FireFox, 27 February 2009 - 04:58 PM.
OS : Win XP SP2 (32 bits) / Win 7 SP1 (64 bits) / Win 8 (64 bits) | Autoit version: latest stable / beta.
Hardware : Intel® Core™ i5-2400 CPU @ 3.10Ghz / 8 GiB RAM DDR3.
My UDFs : Skype UDF | TrayIconEx UDF | GUI Panel UDF | Excel XML UDF | Is_Pressed_UDF
My Projects : YouTube Multi-downloader | FTP Easy-UP | Lock'n | WinKill | AVICapture | Skype TM | Tap Maker | ShellNew | Scriptner | Const Replacer | FT_Pocket | Chrome theme maker
My Examples : Capture toolIP Camera | Crosshair | Draw Captured Region | Picture Screensaver | Jscreenfix | Drivetemp | Picture viewer
My Snippets : Basic TCP | Systray_GetIconIndex | Intercept End task | Winpcap various | Advanced HotKeySet | Transparent Edit control
Updated 07 November, 2013 - If you find dead links please send me a PM, do not post in the topics !
### #8 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 28 February 2009 - 02:43 PM
Use Top Post for UDF.
Edited by Ealric, 02 March 2009 - 07:48 PM.
### #9 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 28 February 2009 - 03:02 PM
JS,
I propose that if we do work on multiple UDFs that it be implemented in an include called "Statistics". The Standard Deviation UDF is just one - but I can think of another 2 or 3 that I use often in calculations for variance. Let me know which UDFs you want to work on with me, or give me an idea of what types of UDF you are interested in.
Thanks.
### #10 andybiochem
andybiochem
Universalist
• Active Members
• 308 posts
Posted 01 March 2009 - 08:34 AM
Hi!
1) Not sure why you'd want to include a Mean function in an SD function? what's wrong with _Mean() on its own??
2) You need to set the float for the mean calc too ... i.e. what's the mean of 0.002,0.003,0.001 ????
3) why not make the rounding optional?
4) I think you're asking for problems including the zero index element in calculations
For comparison, here are the SD, Mean, Sum, CV (Coefficient of variation), and Var (Variance) I wrote for my stats packages:
AutoIt
```Func _SD(ByRef \$aArray)
\$iN = UBound(\$aArray) - 1
\$iMean = _Mean(\$aArray)
\$iSD = 0
For \$i = 1 To \$iN
\$iSD += (\$iMean - \$aArray[\$i])^2
Next
Return Sqrt(\$iSD/\$iN)
EndFunc
Func _CV(ByRef \$aArray)
Return (_SD(\$aArray) / _Mean(\$aArray)) * 100
EndFunc
Func _Var(ByRef \$aArray)
Return _SD(\$aArray)^2
EndFunc
Func _Mean(ByRef \$aArray)
Return _Sum(\$aArray) / (UBound(\$aArray) - 1)
EndFunc
Func _Sum(ByRef \$aArray)
\$iSum = 0
For \$i = 1 To (UBound(\$aArray) - 1)
\$iSum += \$aArray[\$i]
Next
Return \$iSum
EndFunc```
- Table UDF - create simple data tables - Line Graph UDF GDI+ - quickly create simple line graphs with x and y axes (uses GDI+ with double buffer) - Line Graph UDF - quickly create simple line graphs with x and y axes (uses AI native graphic control) - Barcode Generator Code 128 B C - Create the 1/0 code for barcodes. - WebCam as BarCode Reader - use your webcam to read barcodes - Stereograms!!! - make your own stereograms in AutoIT - Ziggurat Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Box-Muller Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Elastic Radio Buttons - faux-gravity effects in AutoIT (from javascript)- Morse Code Generator - Generate morse code by tapping your spacebar!
### #11 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 01 March 2009 - 03:40 PM
Other posts updated.
Edited by Ealric, 02 March 2009 - 03:43 PM.
### #12 andybiochem
andybiochem
Universalist
• Active Members
• 308 posts
Posted 01 March 2009 - 08:32 PM
...Mean is a part of standard deviation - always will be. There's no need to apply a secondary function to calculate the mean. One function returns either result, when specified.
Then why not call your function "_Pearson_Product_Moment_Correlation_Coefficient()" ...that calculation includes SD and Mean too!!!
I don't mean to be negative - I'd like to see some good stats functions in AI too - but function names should indicate what the function does (especially if you want the UDF adding to the AI install). "_StandardDeviation()" suggests the function returns SD nothing else. Once you start to write complicated equations using mean and SD the result will be very confusing when using just one function with different flags.
The AI help file would also be unusable if functions were hidden inside other ones.
Clarity is key here.
- Table UDF - create simple data tables - Line Graph UDF GDI+ - quickly create simple line graphs with x and y axes (uses GDI+ with double buffer) - Line Graph UDF - quickly create simple line graphs with x and y axes (uses AI native graphic control) - Barcode Generator Code 128 B C - Create the 1/0 code for barcodes. - WebCam as BarCode Reader - use your webcam to read barcodes - Stereograms!!! - make your own stereograms in AutoIT - Ziggurat Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Box-Muller Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Elastic Radio Buttons - faux-gravity effects in AutoIT (from javascript)- Morse Code Generator - Generate morse code by tapping your spacebar!
### #13 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 01 March 2009 - 09:09 PM
Then why not call your function "_Pearson_Product_Moment_Correlation_Coefficient()" ...that calculation includes SD and Mean too!!!
I don't mean to be negative - I'd like to see some good stats functions in AI too - but function names should indicate what the function does (especially if you want the UDF adding to the AI install). "_StandardDeviation()" suggests the function returns SD nothing else. Once you start to write complicated equations using mean and SD the result will be very confusing when using just one function with different flags.
The AI help file would also be unusable if functions were hidden inside other ones.
Clarity is key here.
Your point is noted and yes you are sounding negative and drawing too much into your observations. Nothing else is being changed here.
Thanks.
Edit: Just wanted to offer another point of clarification. The return of just the mean is "optional". The function without options returns the standard deviation as noted. So, again, I don't understand why you are up in arms over having an option in a function to return just the mean. I do understand your reasoning but I don't necessarily agree with it. My UDF follows "every guideline" posted by GaryFrost. I'll wait for further feedback from some more experienced coders before deciding any further changes.
Edited by Ealric, 01 March 2009 - 09:21 PM.
### #14 andybiochem
andybiochem
Universalist
• Active Members
• 308 posts
Posted 01 March 2009 - 09:17 PM
Well, good luck with it.
http://www.autoitscript.com/autoit3/udfs/UDF_Standards.htm
- Table UDF - create simple data tables - Line Graph UDF GDI+ - quickly create simple line graphs with x and y axes (uses GDI+ with double buffer) - Line Graph UDF - quickly create simple line graphs with x and y axes (uses AI native graphic control) - Barcode Generator Code 128 B C - Create the 1/0 code for barcodes. - WebCam as BarCode Reader - use your webcam to read barcodes - Stereograms!!! - make your own stereograms in AutoIT - Ziggurat Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Box-Muller Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Elastic Radio Buttons - faux-gravity effects in AutoIT (from javascript)- Morse Code Generator - Generate morse code by tapping your spacebar!
### #15 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 01 March 2009 - 09:25 PM
Well, good luck with it.
http://www.autoitscript.com/autoit3/udfs/UDF_Standards.htm
Read my edited post earlier. I've followed all of those guidelines. Is there something within that link you want to point out specifically that you feel "I'm not" following?
### #16 andybiochem
andybiochem
Universalist
• Active Members
• 308 posts
Posted 01 March 2009 - 09:47 PM
Whatever.
Don't make the mistake of thinking that my low post count means I'm not experienced.
- Table UDF - create simple data tables - Line Graph UDF GDI+ - quickly create simple line graphs with x and y axes (uses GDI+ with double buffer) - Line Graph UDF - quickly create simple line graphs with x and y axes (uses AI native graphic control) - Barcode Generator Code 128 B C - Create the 1/0 code for barcodes. - WebCam as BarCode Reader - use your webcam to read barcodes - Stereograms!!! - make your own stereograms in AutoIT - Ziggurat Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Box-Muller Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Elastic Radio Buttons - faux-gravity effects in AutoIT (from javascript)- Morse Code Generator - Generate morse code by tapping your spacebar!
### #17 Ealric
Ealric
Universalist
• Active Members
• 521 posts
Posted 01 March 2009 - 09:55 PM
Whatever.
Don't make the mistake of thinking that my low post count means I'm not experienced.
Andy, putting differences of opinion aside, I do not think you are "inexperienced". When I mention more experienced coders, I'm referring to coders that are more experienced "than I am". I listen to my peers - even to you (why do you think I changed the UDF to account for two changes you posted?). I make changes when they definitely are needed and you posted a very valid reason which required a change in the UDF. I did that.
I really do understand what you are getting at with the _mean function but rather than make assumptions and disect my UDF and add more functions, I simply want to wait and get some feedback from some more experienced coders. One thing that is very important to me as well as you is clarity. What I'd like to eventually do is have a library of statistics called "Statistics.au3" that houses statistical functions. However, I'm not certain that others would want _mean in that library. It's not my decision. I can only offer my input on a few UDFs that I enjoy using personally, standardize them, and make them available to others. If someone wants to put together a standardized library that is a collaborative effort, I'm all for that.
Again, just because I respond to you doesn't mean I'm berating you. Relax mate.
I've read your posts - you present some very good and solid ideas. Show me one person on this board that agrees with everyone 100% of the time. I don't know that person...
### #18 JSThePatriot
JSThePatriot
carpe diem. vita brevis.
• MVPs
• 3,692 posts
Posted 02 March 2009 - 03:51 AM
@Andy
It would seem you came to this thread with a bit of an attitude, and I don't believe anyone has challenged your coding skills. We certainly value your input as we move forward. It did seem to me you were being a bit arrogant with your statements. I don't really care as I will do whatever, but I do value input.
@Ealric
I already named my file Stats.au3, we could call it Statistics.au3 as that would be fine. I did more like Andy, and broke the functions down that way in corresponding functions we could call the same routines from each of them. Tomorrow I will post what I already have. I like your StdDev function. Looks good.
Regards,
Jarvis
[color=#3333FF;]File-String Hash Plugin[/color][color=#FF0000;] Updated! 04-02-2008 Plugins have been discontinued. I just found out.[/color]
[color=#3333FF;]ComputerGetInfo UDF's[/color][color=#FF0000;] Updated! 11-23-2006[/color]
[color=#3333FF;]Vortex Revolutions[/color] Engineer / Inventor [color=#999999;](Web, Desktop, and Mobile Applications, Hardware Gizmos, Consulting, and more)[/color]
### #19 andybiochem
andybiochem
Universalist
• Active Members
• 308 posts
Posted 02 March 2009 - 08:40 AM
@Andy
It would seem you came to this thread with a bit of an attitude, and I don't believe anyone has challenged your coding skills. We certainly value your input as we move forward. It did seem to me you were being a bit arrogant with your statements. I don't really care as I will do whatever, but I do value input.
Jarvis
My apologies if I came across as arrogant... my intention was to be pragmatic.
I'm very enthusiastic about AI having some good native stats functions, perhaps I have been over-critical in my enthusiasm.
In the interest of a Statistics UDF, here's a Mean function:
Plain Text
```; #FUNCTION#;===============================================================================
;
; Name...........: _Mean
; Description ...: Returns the mean of a data set, choice of Pythagorean means
; Syntax.........: _Mean(Const ByRef \$anArray[, \$iStart = 0[, \$iEnd = 0[, \$iType = 1]]])
; Parameters ....: \$anArray - 1D Array containing data set
; \$iStart - Starting index for calculation inclusion
; \$iEnd - Last index for calculation inclusion
; \$iType - One of the following:
; |1 - Arithmetic mean (default)
; |2 - Geometric mean
; |3 - Harmonic mean
; Return values .: Success - Mean of data set
; Failure - Returns "" and Sets @Error:
; |0 - No error.
; |1 - \$anArray is not an array or is multidimensional
; |2 - Invalid mean type
; |3 - Invalid boundaries
; Author ........: Andybiochem
; Modified.......:
; Remarks .......:
; Related .......:
; Example .......;
;
;;==========================================================================================
Func _Mean(Const ByRef \$anArray, \$iStart = 0, \$iEnd = 0, \$iType = 1)
;----- check array -----
If Not IsArray(\$anArray) Or UBound(\$anArray, 0) <> 1 Then Return SetError(1, 0, "")
;----- check type -----
If Not IsInt(\$iType) Or \$iType < 1 Or \$iType > 3 Then Return SetError(2, 0, "")
;----- Check bounds -----
Local \$iUBound = UBound(\$anArray) - 1
If Not IsInt(\$iStart) Or Not IsInt(\$iEnd) Then Return SetError(3, 0, "")
If \$iEnd < 1 Or \$iEnd > \$iUBound Then \$iEnd = \$iUBound
If \$iStart < 0 Then \$iStart = 0
If \$iStart > \$iEnd Then Return SetError(3, 0, "")
;----- Calculate means -----
Local \$nSum = 0, \$iN = (\$iEnd - (\$iStart - 1))
Switch \$iType
Case 1;Aritmetic mean
For \$i = \$iStart To \$iEnd
\$nSum += \$anArray[\$i]
Next
Return \$nSum / \$iN
Case 2;Geometric mean
For \$i = \$iStart To \$iEnd
\$nSum *= \$anArray[\$i]
If \$i = \$iStart Then \$nSum += \$anArray[\$i]
Next
Return \$nSum ^ (1 / \$iN)
Case 3;Harmonic mean
For \$i = \$iStart To \$iEnd
\$nSum += 1 / \$anArray[\$i]
Next
Return \$iN / \$nSum
EndSwitch
EndFunc;==>_Mean```
Criticism welcome
[EDIT] - tidied a bit
Edited by andybiochem, 02 March 2009 - 10:15 AM.
- Table UDF - create simple data tables - Line Graph UDF GDI+ - quickly create simple line graphs with x and y axes (uses GDI+ with double buffer) - Line Graph UDF - quickly create simple line graphs with x and y axes (uses AI native graphic control) - Barcode Generator Code 128 B C - Create the 1/0 code for barcodes. - WebCam as BarCode Reader - use your webcam to read barcodes - Stereograms!!! - make your own stereograms in AutoIT - Ziggurat Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Box-Muller Gaussian Distribution RNG - generate random numbers based on normal/gaussian distribution - Elastic Radio Buttons - faux-gravity effects in AutoIT (from javascript)- Morse Code Generator - Generate morse code by tapping your spacebar!
### #20 JSThePatriot
JSThePatriot
carpe diem. vita brevis.
• MVPs
• 3,692 posts
Posted 02 March 2009 - 02:38 PM
My apologies if I came across as arrogant... my intention was to be pragmatic.
I'm very enthusiastic about AI having some good native stats functions, perhaps I have been over-critical in my enthusiasm.
In the interest of a Statistics UDF, here's a Mean function:
Plain Text
```; #FUNCTION#;===============================================================================
;
; Name...........: _Mean
; Description ...: Returns the mean of a data set, choice of Pythagorean means
; Syntax.........: _Mean(Const ByRef \$anArray[, \$iStart = 0[, \$iEnd = 0[, \$iType = 1]]])
; Parameters ....: \$anArray - 1D Array containing data set
; \$iStart - Starting index for calculation inclusion
; \$iEnd - Last index for calculation inclusion
; \$iType - One of the following:
; |1 - Arithmetic mean (default)
; |2 - Geometric mean
; |3 - Harmonic mean
; Return values .: Success - Mean of data set
; Failure - Returns "" and Sets @Error:
; |0 - No error.
; |1 - \$anArray is not an array or is multidimensional
; |2 - Invalid mean type
; |3 - Invalid boundaries
; Author ........: Andybiochem
; Modified.......:
; Remarks .......:
; Related .......:
; Example .......;
;
;;==========================================================================================
Func _Mean(Const ByRef \$anArray, \$iStart = 0, \$iEnd = 0, \$iType = 1)
;----- check array -----
If Not IsArray(\$anArray) Or UBound(\$anArray, 0) <> 1 Then Return SetError(1, 0, "")
;----- check type -----
If Not IsInt(\$iType) Or \$iType < 1 Or \$iType > 3 Then Return SetError(2, 0, "")
;----- Check bounds -----
Local \$iUBound = UBound(\$anArray) - 1
If Not IsInt(\$iStart) Or Not IsInt(\$iEnd) Then Return SetError(3, 0, "")
If \$iEnd < 1 Or \$iEnd > \$iUBound Then \$iEnd = \$iUBound
If \$iStart < 0 Then \$iStart = 0
If \$iStart > \$iEnd Then Return SetError(3, 0, "")
;----- Calculate means -----
Local \$nSum = 0, \$iN = (\$iEnd - (\$iStart - 1))
Switch \$iType
Case 1;Aritmetic mean
For \$i = \$iStart To \$iEnd
\$nSum += \$anArray[\$i]
Next
Return \$nSum / \$iN
Case 2;Geometric mean
For \$i = \$iStart To \$iEnd
\$nSum *= \$anArray[\$i]
If \$i = \$iStart Then \$nSum += \$anArray[\$i]
Next
Return \$nSum ^ (1 / \$iN)
Case 3;Harmonic mean
For \$i = \$iStart To \$iEnd
\$nSum += 1 / \$anArray[\$i]
Next
Return \$iN / \$nSum
EndSwitch
EndFunc;==>_Mean```
Criticism welcome
[EDIT] - tidied a bit
No biggie about the arrogance, I am happy for your enthusiasm. I like how you added the different means in your code as I didn't do that in mine. I only had the Arithmetic mean. I noticed a small typo in your comment for the Arithmetic mean (you spelled it Aritmetic).
Now to post what I have come up with, but you guys have already made your's UDF Standards Compliant. I always make working functions, then add error checking, then all the documentation needed for the Standards Include.
Thanks for contributing!
Jarvis
[color=#3333FF;]File-String Hash Plugin[/color][color=#FF0000;] Updated! 04-02-2008 Plugins have been discontinued. I just found out.[/color]
[color=#3333FF;]ComputerGetInfo UDF's[/color][color=#FF0000;] Updated! 11-23-2006[/color] | 9,762 | 34,544 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2015-18 | latest | en | 0.47425 |
https://www.manula.com/manuals/red-road-telecom/trackmyad/1/en/topic/calls-results-costs | 1,516,101,152,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00294.warc.gz | 918,936,863 | 5,010 | The Calls, Results, and Costs graph is similar to the Calls, Minutes, Results graph. It includes a calculated cost per call and cost per result. For example:
This graph shows the same calculation for results as in the previous example.. In addition, it shows the cost per call and cost per result. This is calculated as follows:
The total ad cost was given as \$1,000 in the Numbers table. The system checks to see whether the entire run of the ad was competed before the end date of the stats display. If not, the cost is adjusted accordingly. In this case the full run was complete, so we take the full \$1,000 as the cost of the ad. Dividing that by the number of calls, 7, gives the result for cost per call: \$142.86.
The same logic is applied to calculating the cost per result. The value of the “result” calculation is 3.67. Divide \$1,000 by 3.67 and that gives the cost per result: \$272.48.
Need more help with this? | 222 | 930 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.125 | 3 | CC-MAIN-2018-05 | longest | en | 0.952188 |
https://community.ptc.com/t5/Mathcad/Equation-solving-challenge-Filtration/td-p/561580 | 1,701,202,977,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679099942.90/warc/CC-MAIN-20231128183116-20231128213116-00459.warc.gz | 218,969,239 | 49,208 | cancel
Showing results for
Did you mean:
cancel
Showing results for
Did you mean:
11-Garnet
## Equation solving challenge-Filtration
Hi everyone, for a second time I’ll ask for your help, I still working on a particle retention research and I would be really thankful if someone could help me out with this.
Let me explain the situation: I need to solve an equation that allows me to match experimental data to the Filippov model and here begins the problems, the experimental data represents a discrete distribution for particle and pore size distribution, but the Filippov model used a continuous log-normal distribution function for particles and bilog-normal distribution function for membrane pores, I’ve been trying to use the “dlnorm” function from Mathcad but I suppose that something is wrong because it doesn’t work like it supposed to do. I’ve been reading a paper that claims to solve the integro-diffrential equations for flux J(t) and for the injected volume V(t), this paper however, does not give much detail about it.
Thank you in advance, I am open to advices related to this particular challenge.
4 REPLIES 4
23-Emerald I
(To:jmt7)
You have a complex problem!
You're right, the article is hard to read and doesn't give much detail.
Unfortunately your worksheet is hard (for me) to understand too. It's not exactly clear what you're trying to do. Some of your sheet has units, some does not, gets very challenging to track and understand.
Maybe a little more description of what you're trying to do?
11-Garnet
(To:Fred_Kohlhepp)
Thanks for your answer, maybe this worksheet could help. Like I said, I'm trying to plot J(t) and V(t) that it is the solution of the IDE (integro-differential equation).
Regards
23-Emerald I
(To:jmt7)
Okay. Look at the attached sheet.
I've started putting units to some of your quantities (not just typing them beside unitless values.
Then I got as far as needing to define flux J(t) as a function so it could be evaluated. Can't find (in the article) how to do that so I'm stumped unless you can shed some light.
11-Garnet
(To:Fred_Kohlhepp)
Thanks for your answer, I simplified a lot of things on the worksheet . Like I said, I'm trying to plot J(t) and V(t) that it is the solution of the IDE (integro-differential equation). The part that is confusing, it is that probability used J(t) to be calculated, as you can see in the equation, this is one of the equation that I want to plot (the probability was already substituted assuming continuous distrution)
Regards
Announcements | 597 | 2,557 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2023-50 | latest | en | 0.947943 |
https://www.coursehero.com/file/6816067/Ch3-SJP-version/ | 1,487,592,518,000,000,000 | text/html | crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00410-ip-10-171-10-108.ec2.internal.warc.gz | 798,993,406 | 354,983 | Ch3_SJP_version
# Ch3_SJP_version - Lecture notes (these are from ny earlier...
This preview shows pages 1–3. Sign up to view the full content.
3-1 Lecture notes (these are from ny earlier version of the course - we may follow these at a slightly different order, but they should still be relevant!) Physics 3220, Steve Pollock. Basic Principles of Quantum Mechanics The first part of Griffith's Ch 3 is in many ways a review of what we've been talking about, just stated a little formally, introducing some new notation and a few new twists. We will keep coming come back to it - First, a quick review of ordinary vectors . Think carefully about each of these - nothing here is unfamiliar (though the notation may feel a little abstract), but if you get comfortable with each idea for regular vectors, you'll find it much easier to generalize them to more abstract vectors! Vectors live in N-dimensional space. (You're used to 3-D!) We have (or can choose) basis vectors : ˆ e i (N of them in N-dim space.) (Example in an "older Phys 1110 notation" of these would be the old familiar unit vectors: ˆ i , ˆ j , ˆ k They are orthonormal : ˆ e i ˆ e j = δ ij (This is the scalar , or inner , or dot product.) They are complete : This means any vector v = v i ˆ e i i is a unique linear combo of basis vectors. The basis set spans ordinary space. This is like completeness, but backwards - every linear combination of basis vectors is again an N-Dim vector, and all such linear combos generate all possible vectors in the space. We can choose a specific representation of v , namely {v 1 ,v 2 3 , v n } , but it is not unique, it depends on the choice of basis. (e.g. polar vs. rectangular, and even which particular rotation of the rectangular axes.) Each number v i is the projection of v in the ˆ e i direction, and can be obtained by the formula v i = v ˆ e i . (This involves the same scalar product, again, as we used above in the statement of orthonormality. ) You can prove the last formula by using orthogonality and completeness. Addition, or multiplication by a scalar (number), keeps you in the same N-dim "vector space". (adding or scaling vectors gives another vector.)
This preview has intentionally blurred sections. Sign up to view the full version.
View Full Document
3-2 We can make very powerful analogies to all of the above in the world of square integrable functions: Note the one to one correspondence between each of the following statements about functions, with the preceding ones about vectors. Square integrable functions live in Hilbert space. (Never mind what this means for now!) We have (or can choose) basis functions : u n ( x ) ( Infinitely many of them.) (This infinity might be countable (discrete), or it might be uncountable, in which case you can't use integers as labels, but need real numbers.) We have already met some examples of both such types of u n 's, as eigenfunctions of operators. They are orthonormal : dx u n ( x ) u m ( x ) = δ nm . This is apparently our new way of writing the inner product. (If the labeling is continuous, the right side will be a Dirac delta function!) They are complete : Any function ψ ( x ) = c n u n ( x ) n is a unique linear combo of the basis vectors. (If the labeling is continuous, then ( x ) = dE c ( E ) u E ( x ) ) The basis spans Hilbert space. This is similar to
This is the end of the preview. Sign up to access the rest of the document.
## This note was uploaded on 02/27/2012 for the course PHYSICS 3220 taught by Professor Stevepollock during the Fall '08 term at Colorado.
### Page1 / 24
Ch3_SJP_version - Lecture notes (these are from ny earlier...
This preview shows document pages 1 - 3. Sign up to view the full document.
View Full Document
Ask a homework question - tutors are online | 910 | 3,791 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.8125 | 4 | CC-MAIN-2017-09 | longest | en | 0.912275 |
http://stackoverflow.com/questions/9231745/determine-how-many-states-the-minimal-dfa-will-have | 1,406,151,160,000,000,000 | text/html | crawl-data/CC-MAIN-2014-23/segments/1405997883468.51/warc/CC-MAIN-20140722025803-00142-ip-10-33-131-23.ec2.internal.warc.gz | 161,157,652 | 15,692 | # Determine how many states the minimal DFA will have
This is the pumping lemma to demonstare that a language is not regular:If L is a regular language,there is a const N such that, for each z in L, with |z|>=N, is possibile to divide z in three sub-strings (uvw=z)such that:
1)|uv|<=N;
2)|v|>=1;
3)For each k>=0, uv^kw in L.
N must be less or equal than the minumum number of states of the DFA accepting L.So to apply the pumping lemma I need to know how many states will have the minimal DFA accepting L.Is there a way to know how many states will have backwards?So is possibile to know the minimal number of states without building the minimal DFA?
-
N must be less or equal than the minumum number of states of the DFA accepting L
N cannot be less than the number of states in a minimal DFA accepting L; otherwise, the DFA couldn't accept L (if it could, you would have a DFA accepting L smaller than the minimal DFA accepting L, a contradiction). We can safely assume that N is equal to the number of states in the minimal DFA accepting L (such DFAs are unique).
So to apply the pumping lemma I need to know how many states will have the minimal DFA accepting L
This is not strictly true. In most pumping lemma proofs, it doesn't matter what N actually is; you just have to make sure that the target string satisfies the other properties. It is possible, given a DFA, to determine how many states a minimal DFA will have; however, if you have a DFA, there's no need to bother with the pumping lemma, since you already know L is regular. In fact, determining an N such that there's a minimal DFA with N states accepting L constitutes a valid proof that the language in question is indeed regular.
So is possibile to know the minimal number of states without building the minimal DFA?
By analyzing the description of the language and using the Myhill-Nerode theorem, it is possible to construct a proof that a language is regular and find the number of states in a minimal DFA, without actually building the minimal DFA (although once you have completed such a proof using Myhill-Nerode, construction of a minimal DFA is a trivial exercise). You can also use Myhill-Nerode as an alternative to the pumping lemma to prove languages aren't regular, by showing a minimal DFA for the language would need to have infinitely many states, a contradiction. | 527 | 2,361 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.625 | 4 | CC-MAIN-2014-23 | latest | en | 0.934789 |
http://nrich.maths.org/439/solution | 1,484,658,914,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00119-ip-10-171-10-70.ec2.internal.warc.gz | 210,819,781 | 5,701 | ### A Knight's Journey
This article looks at knight's moves on a chess board and introduces you to the idea of vectors and vector addition.
### 8 Methods for Three by One
This problem in geometry has been solved in no less than EIGHT ways by a pair of students. How would you solve it? How many of their solutions can you follow? How are they the same or different? Which do you like best?
### Which Twin Is Older?
A simplified account of special relativity and the twins paradox.
##### Stage: 5 Challenge Level:
This excellent solution came from Shu Cao of Oxford High School. Well done Shu!
A convex quadrilateral $Q$ is made from four rigid rods with flexible joints at the vertices so that the shape of $Q$ can be changed while keeping the lengths of the sides constant.
Let ${\bf a}_1$, ${\bf a}_2$, ${\bf a}_3$ and ${\bf a}_4$ be vectors representing the sides (in this order) so that ${\bf a}_1+{\bf a}_2+{\bf a}_3+{\bf a}_4 = {\bf 0}$ (the zero vector). Now let ${\bf d}_1$ and ${\bf d}_2$ be the vectors representing the diagonals of $Q$. We may choose these so that ${\bf d}_1={\bf a}_4+{\bf a}_1$ and ${\bf d}_2={\bf a}_3+{\bf a}_4$.
As ${\bf d}_1={\bf a}_4 + {\bf a}_1$ and ${\bf d}_2 = {\bf a}_3 + {\bf a}_4$ it follows that ${\bf a}_1 + {\bf a}_2 = -{\bf d}_2,\ {\bf a}_2 + {\bf a}_3 = -{\bf d}_1.$
\eqalign{ {\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2 &=({\bf a}_2^2-{\bf a}_1^2)+({\bf a}_4^2-{\bf a}_3^2) \cr &=({\bf a}_2-{\bf a}_1)({\bf a}_2+{\bf a}_1)+({\bf a}_4-{\bf a}_3)({\bf a}_4+{\bf a}_3)\cr &=-{\bf d}_2({\bf a}_2-{\bf a}_1)+{\bf d}_2({\bf a}_4-{\bf a}_3)\cr &={\bf d}_2({\bf a}_4-{\bf a}_3-{\bf a}_2+{\bf a}_1)\cr &={\bf d}_2(({\bf a}_4+{\bf a}_1)-({\bf a}_3+{\bf a}_2))\cr &={\bf d}_2({\bf d}_1+{\bf d}_1)\cr &=2{\bf d}_2 {\bf .} {\bf d}_1 }.
Now ${\bf a}_1+{\bf a}_2+{\bf a}_3+{\bf a}_4=0$ implies that ${\bf a}_4=-{\bf a}_1-{\bf a}_2-{\bf a}_3$.
\eqalign{ {\bf a}_1 \cdot {\bf a}_3-{\bf a}_2 {\bf .} {\bf a}_4 &= {\bf a}_1 {\bf .} {\bf a}_3-{\bf a}_2(-{\bf a}_1-{\bf a}_2-{\bf a}_3) \cr &={\bf a}_1 {\bf .} {\bf a}_3+{\bf a}_2 {\bf .} {\bf a}_1+{\bf a}_2{\bf .} {\bf a}_2+{\bf a}_2{\bf .} {\bf a}_3 \cr &={\bf a}_1({\bf a}_2+{\bf a}_3)+{\bf a}_2({\bf a}_2+{\bf a}_3) \cr &=({\bf a}_1+{\bf a}_2)({\bf a}_3+{\bf a}_2) \cr &=(-{\bf d}_1)(-{\bf d}_2) \cr &={\bf d}_1{\bf .} {\bf d}_2}.
Hence
\eqalign{ 2({\bf a}_1 {\bf .} {\bf a}_3-{\bf a}_2 {\bf .} {\bf a}_4) &=2{\bf d}_1 {\bf .} {\bf d}_2 \cr &={\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2 .}
If the diagonals of $Q$ are perpendicular in one position of $Q$, then $2{\bf d}_1 {\bf .} {\bf d}_2={\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2 =0$. As ${\bf a}_1,{\bf a}_2,{\bf a}_3,{\bf a}_4$ are constant in length ${\bf a}_2^2+{\bf a}_4^2-{\bf a}_1^2-{\bf a}_3^2$ will always be zero which implies that ${\bf d}_1 {\bf .} {\bf d}_2=0$, so they are perpendicular in all variations of $Q$. | 1,234 | 2,889 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.59375 | 5 | CC-MAIN-2017-04 | longest | en | 0.728795 |
https://minuteshours.com/30-79-hours-in-hours-and-minutes | 1,638,571,874,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00248.warc.gz | 485,422,789 | 5,089 | # 30.79 hours in hours and minutes
## Result
30.79 hours equals 30 hours and 47.4 minutes
You can also convert 30.79 hours to minutes.
## Converter
Thirty point seven nine hours is equal to thirty hours and forty-seven point four minutes. | 62 | 243 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.625 | 3 | CC-MAIN-2021-49 | latest | en | 0.855946 |
http://forums.omnigroup.com/showthread.php?t=18288 | 1,537,976,753,000,000,000 | text/html | crawl-data/CC-MAIN-2018-39/segments/1537267165261.94/warc/CC-MAIN-20180926140948-20180926161348-00357.warc.gz | 99,938,167 | 8,924 | These forums are now read-only. Please visit our new forums to participate in discussion. A new account will be required to post in the new forums. For more info on the switch, see this post. Thank you!
User Name Remember Me? Password
FAQ Members List Calendar Search Today's Posts Mark Forums Read
Setting resource availabilty and allocations Thread Tools Search this Thread Display Modes
gms Member 2010-10-01, 12:22 PM As a new user, I have a perhaps naive question: When creating resources, we can designate their availability up to 100% - ok - this person has 100% of their time available. THEN, when assigning them to a task we can allocate up to 100% - so the resource allocation on a given task then reads as X% of X% Are these relative or absolute? E.g., if it reads 50% of 50%, is it actually 25% of that individual's time? or are we to read it as "the whole 50% of this person's available 50%" (which I would interpret as 100% of 50%) thanks gms Post 1
Quote:
Originally Posted by gms Are these relative or absolute? E.g., if it reads 50% of 50%, is it actually 25% of that individual's time? or are we to read it as "the whole 50% of this person's available 50%" (which I would interpret as 100% of 50%)
Some experimentation will help in coming to grips with this. The first experiment I would suggest is assigning 51% of 50% :-)
Yes, if you assign something as 50% of 50%, that means that the resource will spend half of its total time on that task ("the whole 50% of the available 50%"). 40% of 50% means that the resource will spend 40% of its time on that task.
The diagram below shows me working (100%) on a 4 hour task, which takes 4 hours. Tom, working 50% of his hours on this project, takes 6 elapsed hours to do a 3 hour duration task, flitting back and forth between the important stuff and reading the Omni Forums (not shown as the other 50%). When we are both free (and not before — an important thing to note about the scheduling algorithm), we start work on a project that requires 5 hours of effort. It takes us 4 hours, as I'm working 100% on it, and Tom is only putting in 25% of his total effort, so it takes him those same 4 hours to do an honest hour's work. Duration is 4 hours, effort is 5 hours.
Adding to the fun is the calendar for the resources. If Tom's calendar only shows him working 4 hours on this project instead of the default 8 when I made the screenshots, he would put in 4 hours the first day, and 2 hours the second day on that first task of his, and we would start work on the joint task partway through the second day. If you gave Tom another 25% of 50% task as Task 4, it would run concurrently with Task 3, and bring his utilization up to the full 50% of 50% while both were active.
If you still aren't confused, there's also an efficiency figure in the resource :-)
gms Member 2010-10-04, 06:35 AM Efficiency? no thanks, suitably confused for the moment. thanks for this - I'll noodle it thru. Post 3
Thread Tools Search this Thread Search this Thread: Advanced Search Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post pankajkakkar OmniObjectMeter 2 2013-04-30 09:39 PM t-awesome OmniPlan General 1 2010-05-11 03:51 PM upriser OmniPlan General 3 2008-03-05 05:25 AM
All times are GMT -8. The time now is 07:45 AM.
-- Omni Style -- Lithographica Contact Us - The Omni Group Forums - Archive - Privacy Statement - Top | 883 | 3,428 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.125 | 3 | CC-MAIN-2018-39 | latest | en | 0.947012 |
https://cs.stackexchange.com/questions/159485/the-value-of-r-with-r%E2%89%A4-b-that-minimizes-the-expression-b-rn2r-in-t | 1,723,739,635,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722641299002.97/warc/CC-MAIN-20240815141847-20240815171847-00501.warc.gz | 142,396,928 | 41,167 | # The value of $r$, with $r≤ b$, that minimizes the expression $(b/r)(n+2^r)$ in the analysis of the radix-sort algorithm
In chapter 8 of the book "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein, lemma 8.4 is proved. (my question is after the proof of the lemma)
Given $$n$$ $$b$$-bit numbers and any positive integer $$r≤b$$, RADIX-SORT correctly sorts these numbers in $$Θ((b/r)(n+2^r))$$ time if the stable sort it uses takes $$Θ(n+k)$$ time for inputs in the range $$0$$ to $$k$$.
Proof:
For a value $$r≤b$$, we view each key as having $$d=⌈b/r⌉$$ digits of $$r$$ bits each. Each digit is an integer in the range $$0$$ to $$2^r-1$$, so that we can use counting sort with $$k = 2^r-1$$. (For example, we can view a 32-bit word as having four 8-bit digits, so that $$b = 32$$, $$r=8$$, $$k = 2^r-1 = 255$$, and $$d = b/r = 4$$.) Each pass of counting sort takes time $$Θ(n+k)=Θ(n+2^r)$$ and there are $$d$$ passes, for a total running time of $$Θ(d(n+2^r))= Θ((b/r)(n+2^r)).$$
goes on... and in this part I will ask my question. In the quote I will mark the points of the question.
For given values of $$n$$ and $$b$$, we wish to choose the value of $$r$$, with $$r ≤ b$$, that minimizes the expression $$(b/r)(n+2^r)$$. If $$b < ⌊\lg n⌋$$ (question : what is the motivation that led to the choice of this inequality?), then for any value of $$r \leq b$$, we have that $$(n+2^r)=Θ(n)$$. Thus, choosing $$r = b$$ yields a running time of $$(b/b)(n+2^r)=Θ(n)$$, which is asymptotically optimal. If $$b \geq ⌊\lg n⌋$$, then choosing $$r= ⌊\lg n⌋$$ gives the best time to within a constant factor, which we can see as follows. Choosing $$r = ⌊\lg n⌋$$ nc yields a running time of $$Θ(bn(\lg n)$$. As we increase $$r$$ above $$⌊\lg n⌋$$, the $$2^r$$ term in the numerator increases faster than the $$r$$ term in the denominator, and so increasing $$r$$ above $$⌊\lg n⌋$$ yields a running time of $$Ω(bn/\lg n)$$. If instead we were to decrease $$r$$ below $$⌊\lg n⌋$$, then the $$b/r$$ term increases and the $$n+2^r$$ term remains at $$\Theta(n)$$.
For fixed $$n$$, let $$f(r)=(b/r)(n+2^r)$$ where $$r>0$$.
Since $$f(r)>(b/r)n$$, $$\ f(0^+)=\infty$$.
Since $$f(r)>(b/r)2^r$$, $$\ f(\infty)=\infty.$$ Hence there exists $$m$$ such that $$f(r)$$ reaches its global minimum at $$r=m$$. By the extreme value theorem, we have $$f'(m)=0,$$ which means $$-\frac b{m^2}(n+2^m)+\frac bm(2^m\log_e2)=0$$. $$n=2^m(m\log_e2-1).$$
What we are interested in is what happens when $$n$$ goes to infinity. So we consider $$m$$ as a function of $$n$$ that is determined by the equation above.
As $$n$$ goes to infinity, we see that $$m$$ must go to infinity as well. Taking $$\log_2(\cdot)$$ of both sides, we get $$\log_2n=m +\log_2(m\log_e2-1).$$ Since $$\lim_{m\to\infty}\frac{\log_2(m\log_e2-1)}{m}=0$$, we see that $$\lim_{n\to\infty}\frac{m}{\log_2n}=1$$
Recall that $$f(r)$$ takes the minimum value at $$r=m$$ for a fixed $$n$$. That is why the book checks the value of $$f(r)$$ at or near the point $$r=\log_2n$$. | 1,083 | 3,033 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 73, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.1875 | 4 | CC-MAIN-2024-33 | latest | en | 0.845292 |
https://binary2hex.com/numberconverter.html?id=26594&print=1 | 1,660,700,005,000,000,000 | text/html | crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00544.warc.gz | 146,973,269 | 7,349 | https://binary2hex.com/numberconverter.html?id=26594
# Transfer 13c96a3742f64906 from hexadecimal in triple number system
Enter a number:
His number system:
Binary
Ternary
Octal
Decimal
Binary-decimal
Other
let\'s translate to decimal like this:
1∙1615+3∙1614+12∙1613+9∙1612+6∙1611+10∙1610+3∙169+7∙168+4∙167+2∙166+15∙165+6∙164+4∙163+9∙162+0∙161+6∙160 = 1∙1152921504606846976+3∙72057594037927936+12∙4503599627370496+9∙281474976710656+6∙17592186044416+10∙1099511627776+3∙68719476736+7∙4294967296+4∙268435456+2∙16777216+15∙1048576+6∙65536+4∙4096+9∙256+0∙16+6∙1 = 1152921504606846976+216172782113783808+54043195528445952+2533274790395904+105553116266496+10995116277760+206158430208+30064771072+1073741824+33554432+15728640+393216+16384+2304+0+6 = 142578754261865498210
got It: 13c96a3742f6490616 =142578754261865498210
Translate the number 142578754261865498210 в ternary like this:
the Integer part of the number is divided by the base of the new number system:
1425787542618654982 3 -1425787542618654981 475262514206218327 3 1 -475262514206218326 158420838068739442 3 1 -158420838068739441 52806946022913147 3 1 -52806946022913147 17602315340971049 3 0 -17602315340971047 5867438446990349 3 2 -5867438446990347 1955812815663449 3 2 -1955812815663447 651937605221149 3 2 -651937605221148 217312535073716 3 1 -217312535073714 72437511691238 3 2 -72437511691236 24145837230412 3 2 -24145837230411 8048612410137 3 1 -8048612410137 2682870803379 3 0 -2682870803379 894290267793 3 0 -894290267793 298096755931 3 0 -298096755930 99365585310 3 1 -99365585310 33121861770 3 0 -33121861770 11040620590 3 0 -11040620589 3680206863 3 1 -3680206863 1226735621 3 0 -1226735619 408911873 3 2 -408911871 136303957 3 2 -136303956 45434652 3 1 -45434652 15144884 3 0 -15144882 5048294 3 2 -5048292 1682764 3 2 -1682763 560921 3 1 -560919 186973 3 2 -186972 62324 3 1 -62322 20774 3 2 -20772 6924 3 2 -6924 2308 3 0 -2307 769 3 1 -768 256 3 1 -255 85 3 1 -84 28 3 1 -27 9 3 1 -9 3 3 0 -3 1 0
the result of the conversion was:
142578754261865498210 = 1001111102212122012201001000122122201113
the Final answer: 13c96a3742f6490616 = 1001111102212122012201001000122122201113
Permanent link to the result of this calculation | 998 | 2,209 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2022-33 | latest | en | 0.247046 |
https://git.scc.kit.edu/mpp/mluq/-/commit/79c1540bb506fe5b7ee74b0b0de8dbe8c62b7261 | 1,656,308,081,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00467.warc.gz | 324,622,494 | 33,725 | Commit 79c1540b by niklas.baumgarten
worked on slides
parent 976a60fc
\documentclass[18pt]{beamer} \usepackage{defs} \usepackage{templates/beamerthemekit} \begin{document} \begin{frame} \titlepage \end{frame} \begin{frame}{Outline} \tableofcontents \end{frame} \section*{Elliptic model problem} \input{src/model_problem} \input{src/example_fields} \section*{Monte Carlo methods} \input{src/monte_carlo_methods} \input{src/experimental_setup} \section*{Intro} \input{src/numerical_results} \section*{Stochastic Linear Transport problem} \section*{Stochastic Convection-Diffusion-Reaction problem} \section*{Acoustic wave propagation} % \input{src/alternative_acoustic} \section*{Outlook and Conclusion} % \input{src/outlook} \begin{frame}{References} \bibliographystyle{acm} \tiny{\bibliography{lit}} \end{frame} \section*{Backup} % \input{src/the_random_field_model} % \input{src/circulant_embedding} % \input{src/weak_formulation} % \input{src/existence} % \input{src/regularity} % \input{src/finite_element_error} % \input{src/mlmc_algorithm} \end{document} \ No newline at end of file
\begin{frame}{Monte Carlo Methods I} \begin{frame}{Multilevel Monte Carlo Method I} \begin{itemize} \item Goal: Estimate $\EE[\goal]$, where $\goal$ is some functional of the random \item Goal: Estimate the expectation $\EE[\goal(\omega)]$, where $\goal$ is some functional of the random solution $u(\omega, x)$. \item Assume $u_h(\omega, x)$ is the corresponding FEM solution with the convergence rate $\alpha$, thus \begin{equation*} \vert \mathbb{E}[\goal_h - \goal] \vert \lesssim h^\alpha, \quad \vert \mathbb{E}[\goal_h - \goal] \vert \lesssim N^{-\alpha}, \quad N = \dim(V_h) \end{equation*} where , the cost to solve one realization is \begin{equation*} \mathcal{C}(\goal_h) \lesssim h^{-\gamma}, \quad \mathcal{C}(\goal_h) \lesssim N^{\gamma} \end{equation*} \item Assume: $u_h(\omega, x)$ is the corresponding FEM solution with the convergence rate $\alpha > 0$, i.e. \label{eq:alpha-assumption} \abs{\EE[\goal_h - \goal]} \lesssim h^\alpha, \quad \abs{\EE[\goal_h - \goal]} \lesssim N^{-\alpha}, \quad N = \dim(V_h) and that the cost for one sample can be bounded with $\gamma > 0$ by \label{eq:gamma-assumption} \cost(\goal_h(\omega_m)) \lesssim h^{-\gamma}, \quad \cost(\goal_h) \lesssim N^{\gamma / d} and the variance of the difference $\goal_{h_l} - \goal_{h_{l-1}}$ decays with \begin{equation*} \norm{\mathbb{V}[\goal_{h_l} - \goal_{h_{l-1}}]} \lesssim h^\beta, \quad \vert \mathbb{V}[\goal_{h_l} - \goal_{h_{l-1}}] \| \lesssim N^{-\beta}. \end{equation*} \label{eq:beta-assumption} \abs{\mathbb{V}[\goal_{h_l} - \goal_{h_{l-1}}]} \lesssim h^\beta, \quad \abs{\mathbb{V}[\goal_{h_l} - \goal_{h_{l-1}}]} \lesssim N^{-\beta / d} \end{itemize} \end{frame} \begin{frame}{Monte Carlo Methods II} \begin{frame}{Multilevel Monte Carlo Method I} \begin{itemize} \item The standard Monte Carlo estimator for the approximated functional is \item The Monte Carlo estimator for the approximated functional is \begin{equation*} \widehat{Q}_{h,M}^{MC} = \frac{1}{M} \sum_{i=1}^M Q_h(\omega_i). \widehat{\goal}_{h,M}^{MC} = \frac{1}{M} \sum_{m=1}^M \goal_h(\omega_m) \end{equation*} \item The RMSE is then given by \item The root mean square error (RMSE) is then given by \begin{equation*} e(\widehat{Q}^{MC}_{h,M})^2 = \mathbb{E} \left[ (\widehat{Q}^{MC}_{h,M} - \mathbb{E}[Q])^2 \right] = \underbrace{M^{-1} \mathbb{V}[Q_h]}_{\text{estimator error}} + \underbrace{\left( \mathbb{E}[Q_h - Q] \right)^2}_{\text{FEM error}}. e(\widehat{\goal}^{MC}_{h,M})^2 = \EE \left[ (\widehat{\goal}^{MC}_{h,M} - \EE[\goal])^2 \right] = \underbrace{M^{-1} \VV[\goal_h]}_{\text{estimator error}} + \underbrace{\left( \EE[Q_h - Q] \right)^2}_{\text{FEM error}}. \end{equation*} \item This yields a cost of ($\mathcal{C}_\epsilon(\widehat{Q}_h)$ is the cost to achieve $e(\widehat{Q}_h) < \epsilon$) \item This yields a total cost of ($\cost_\epsilon(\widehat{Q}_h)$ is the cost to \item achieve $e(\widehat{Q}_h) < \epsilon$) \begin{equation*} \mathcal{C}(\widehat{Q}^{MC}_{h,M}) \lesssim M \cdot N^\gamma, \quad \mathcal{C}_{\epsilon}(\widehat{Q}^{MC}_{h,M}) \lesssim \epsilon^{-2 -\frac{\gamma}{\alpha}}. \cost(\widehat{Q}^{MC}_{h,M}) \lesssim M \cdot N^\gamma, \quad \cost_{\epsilon}(\widehat{Q}^{MC}_{h,M}) \lesssim \epsilon^{-2 -\frac{\gamma}{\alpha}}. \end{equation*} \end{itemize} \end{frame} ... ... @@ -63,20 +72,20 @@ \begin{itemize} \item The RMSE is then given by \begin{equation*} e(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L})^2 = \mathbb{E}\left[( \widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L} - \mathbb{E}[Q])^2 \right] = \underbrace{\sum_{l=0}^L \frac{1}{M_l} \mathbb{V}[Y_l]}_{\text{estimator error}} + \underbrace{\left( \mathbb{E}[Q_h - Q] \right)^2}_{\text{FEM error}}. e(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L})^2 = \mathbb{E}\left[( \widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L} - \mathbb{E}[Q])^2 \right] = \underbrace{\sum_{l=0}^L \frac{1}{M_l} \VV[Y_l]}_{\text{estimator error}} + \underbrace{\left( \mathbb{E}[Q_h - Q] \right)^2}_{\text{FEM error}}. \end{equation*} \item This leads leads to a better computational cost since: \begin{itemize} \item Assume $Q_h \rightarrow Q$, then $\mathbb{V}[\left( Q_{h_l}(\omega_i) - Q_{h_{l-1}}(\omega_i) \right)] \rightarrow 0$. \item Assume $Q_h \rightarrow Q$, then $\VV[\left( Q_{h_l}(\omega_i) - Q_{h_{l-1}}(\omega_i) \right)] \rightarrow 0$. \item The $Q_{h_0}(\omega_i)$ are not getting more expensive for more accuracy. \item The optimal choice for the sequence $M_l$ is given by \begin{equation*} M_l = \left\lceil 2 \epsilon^{-2} \sqrt{\frac{\mathbb{V}[Y_l]}{\mathcal{C}_l}} \left( \sum_{l=0}^L \sqrt{\mathbb{V}[Y_l] \mathcal{C}_l} \right) \right\rceil. M_l = \left\lceil 2 \epsilon^{-2} \sqrt{\frac{\VV[Y_l]}{\cost_l}} \left( \sum_{l=0}^L \sqrt{\VV[Y_l] \cost_l} \right) \right\rceil. \end{equation*} \end{itemize} \item This gives an overall cost of (given $\mathcal{C}_{\epsilon}$ is best case) \item This gives an overall cost of (given $\cost_{\epsilon}$ is best case) \begin{equation*} \mathcal{C}(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L}) = \sum_{l=0}^L M_l \mathcal{C}_l, \quad \mathcal{C}_{\epsilon}(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L}) \lesssim \epsilon^{-2}. \cost(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L}) = \sum_{l=0}^L M_l \cost_l, \quad \cost_{\epsilon}(\widehat{Q}^{MLMC}_{h,\{ M_l \}_{l=0}^L}) \lesssim \epsilon^{-2}. \end{equation*} \end{itemize} \end{frame}
Markdown is supported
0% or .
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first! | 2,465 | 6,559 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.765625 | 3 | CC-MAIN-2022-27 | latest | en | 0.364303 |
http://mathhelpforum.com/calculus/76835-weierstrass-elliptic-function.html | 1,529,645,790,000,000,000 | text/html | crawl-data/CC-MAIN-2018-26/segments/1529267864354.27/warc/CC-MAIN-20180622045658-20180622065658-00301.warc.gz | 200,035,098 | 9,024 | ## Weierstrass elliptic function
now, this should be basic, but I can't see it
why Weierstrass \rho function is homogeneous of degree -2? i.e. p(cz,c\tau) = c^{-2} p(z,\tau).
If I just use the definition of this function as a sum, I don't see how we can take c^-2 outside...
Weierstrass's elliptic functions. Who is Weierstrass's elliptic functions? What is Weierstrass's elliptic functions? Where is Weierstrass's elliptic functions? Definition of Weierstrass's elliptic functions. Meaning of Weierstrass's elliptic function
sure, we have cz and mc\tau there...but n is not multiplied by c. | 168 | 596 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.546875 | 3 | CC-MAIN-2018-26 | latest | en | 0.883504 |
https://essayteachers.com/post-932/ | 1,611,591,904,000,000,000 | text/html | crawl-data/CC-MAIN-2021-04/segments/1610703587074.70/warc/CC-MAIN-20210125154534-20210125184534-00614.warc.gz | 323,905,481 | 25,949 | +1-316-444-1378
Course Project Part IIIntroductionYou will assume that you still work as a financial analyst for the Coca Cola Co. The company is considering a capital investment and you are in charge of helping them launching a new product based on (1) a given rate of return of 13% (Task 4) and (2) the firm s cost of capital (Task 5). Task 4. Capital Budgeting for a ProductA few months have now passed and Coca Cola Co. is considering launching a new product a flavored soda! The anticipated cash flows for the project are as follows:Year 1 \$1350000Year 2 \$1580000Year 3 \$1900000Year 4 \$930000Year 5 \$2400000You have now been tasked with providing a recommendation for the project based on the results of a Net Present Value Analysis. Assuming that the required rate of return is 12% and the initial cost of project is \$5000000:1. What is the project s IRR? (10 pts)2. What is the project s NPV? (10 pts)3. Calculate the project s payback period. (10 pts)4. In order to conduct this project Coca Cola has hired a market analyst to determine demand for the new product. The cost of these services will be \$4000000. How would this cost be incorporated into the project cash flows? Explain your rationale (10 pts)5. Provide examples for each the following concepts as they relates to the project. Please make sure that your examples are applicable to Coca Cola s idea of launching a new product. (5 pts each)a. Allocated Costsb. Incremental Costsc. Financing Costs6. Explain how you would conduct a scenario and sensitivity analysis of the project. What would be some project-specific risks and market risks related to this project? (20 pts) Task 5: Cost of CapitalCoca Cola Co. is now considering that the appropriate discount rate for the new product should be the cost of capital and would like to determine it. You will assist in the process of obtaining this rate. 1. Compute the cost of debt. a. Assume that Coke has received a loan from a bank for 5% annual interest for the next seven years. If the tax rate for Coke is 24% what is the after-tax cost of debt (5 pts) b. Would you expect the cost of debt to be higher or lower than the cost of equity? Explain your rationale. (5 pts)c. Explain how Coca Cola Co. can estimate the cost of debt using market observation of rates. Compare and contrast this method to using YTM of bonds. (10 pts) d. Assume that instead Coke uses the YTM method. They have currently bonds that sell for \$1045 offer a coupon of 8% and mature in 5 years. What is the YTM of these bonds? (5 pts) 2. Compute the cost of common equity using the CAPM model. For beta use the average beta of three selected competitors. Assume the risk free rate to be 3% and the market risk premium to be 9%. a. What is the cost of common equity? (5 pts)b. Explain how flotation costs affect the cost of common equity. (5 pts)c. Explain why is said that the cost of retained earnings is the same as the cost of equity except for flotation costs. (5 pts)3. Cost of preferred equitya. Why would the cost of preferred equity be lower than the cost of common equity? (5 pts) b. What would be the price of preferred equity for Coke assuming dividends of \$5 at the end of the year and the cost of preferred stock is 8%? (5 pts) 4. Assuming that the market value weights of these capital sources are 30% bonds 40% common equity and 30% preferred equity what is the weighted cost of capital of the firm? (10 pts) 5. Should the firm use market or book values to compute the cost of capital? Explain and provide examples as appropriate. (10 pts)6. Explain how hard rationing and soft rationing may affect your recommendation on pursuing this project. (5 pts)
Categories: Uncategorized | 883 | 3,698 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.65625 | 3 | CC-MAIN-2021-04 | latest | en | 0.938818 |
https://www.jiskha.com/questions/1604590/for-every-boy-taking-classes-at-the-music-school-there-are-3-girls-who-are-taking-classes | 1,600,665,746,000,000,000 | text/html | crawl-data/CC-MAIN-2020-40/segments/1600400198942.13/warc/CC-MAIN-20200921050331-20200921080331-00056.warc.gz | 913,168,142 | 5,058 | # Math
For every boy taking classes at the music school, there are 3 girls who are taking classes at the school. If there 128 students taking classes, write and solve a proportion to predict the number of girls taking classes at the school.
1. 👍 0
2. 👎 0
3. 👁 262
1. boy : girls = 1 : 3 or x : 3x
x + 3x = 128
4x = 128
x = 32
so 32 girls and 3x or 96 girls take music
or
3/4 = n/128 , where n is the number of girls
4n = 384
n= 384/4 = 96
1. 👍 0
2. 👎 1
## Similar Questions
1. ### math
In a survey of 150 students, 90 were taking algebra, and 30 taking biology. a. what is the least number of students who could have been taking both classes? b. what is the greatest number of students who could be taking both
2. ### Language Arts need help now!
"Tara aren't you going to try out for the school play?" asked Bill. "Auditions are in a few minutes." "No, Nita is trying out for the lead. She's a much better singer than I am. I don't have a chance." Bill was a kind and helpful
3. ### Civics
Which of the following is NOT part of the naturalization process? a. Filing a Declaration of Intention b. Taking a US history, gov't, or English language class c. Living in the US for 10 years d. Taking an Oath
4. ### Math
For every girl taking classes at the martial arts school, there are 3 boys who are taking classes at the school. If there are 236 students taking classes, write and solve a proportion to predict the number of boys taking classes
1. ### maths
out of 40 students,14 are taking english and 29 are taking chemistry.if 5 students are in both classes,how many are in neither class?how many students are in either classes?
2. ### Math
An elementary school is offering 3 language classes: one in Spanish, one in French, and one in German. These classes are open to any of the 107 students in the school. There are 42 in the Spanish class, 32 in the French class, and
3. ### math
Celia has a large container in which four different kinds of coins are thoroughly mixed. She wants to take a sample of her coins to estimate which kind of coin she has the most. Which of the following methods is the best way for
4. ### algebra
In a survey of a TriDelt chapter with 50 members, 21 were taking mathematics, 33 were taking English, and 6 were taking both. How many were not taking either of these subjects? members
1. ### math
need to make a venn diagram using 3 circles. A survey on subject being taken by 250 students at a certain college revealed the following info: 1. 90 were taking math 2. 145 were taking history 3. 88 were taking english 4. 25 were
2. ### math
An elementary school is offering 3 language classes: one in Spanish, one in French, and one in German. These classes are open to any of the 93 students in the school. There are 35 in the Spanish class, 31 in the French class, and
3. ### English
1. I like taking walks. 2. I like taking a walk. (Which one is OK? Are both right?) 3. You should obey the school rules. 4. You should keep the school rules. 5. You should follow the school rules. (Are all the same?) 6. He helped
4. ### math
In a group of 200 high school students, 36 are taking biology, 52 are taking Spanish, and 126 are taking neither biology nor Spanish. If one of these 200 students is to be chosen at random, what is the probability that the student | 885 | 3,316 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.90625 | 4 | CC-MAIN-2020-40 | latest | en | 0.958023 |
http://mathhelpforum.com/pre-calculus/211749-expanding-subtracting-binomials.html | 1,498,516,921,000,000,000 | text/html | crawl-data/CC-MAIN-2017-26/segments/1498128320869.68/warc/CC-MAIN-20170626221252-20170627001252-00422.warc.gz | 258,420,384 | 10,982 | # Thread: Expanding and subtracting binomials
1. ## Expanding and subtracting binomials
I'm working on expanding binomials and I currently understand how to expand basic ones using the binomial theorem. But due to transferring to my Precalc class late, I missed the lecture on them. Could someone explain how to expand and simplify this expression using the binomial theorem?
$2(x-3)^4 + 5(x-3)^2$
2. ## Re: Expanding and subtracting binomials
Originally Posted by zsf1990
I'm working on expanding binomials and I currently understand how to expand basic ones using the binomial theorem. But due to transferring to my Precalc class late, I missed the lecture on them. Could someone explain how to expand and simplify this expression using the binomial theorem?
$2(x-3)^4 + 5(x-3)^2$
OH, Come on!
$2(x-3)^4 + 5(x-3)^2=(x-3)^2[2(x-3)^2+5][$
3. ## Re: Expanding and subtracting binomials
I don't think I understand what you mean. The book gives $2x^4-24x^3+113x^2-246x+207$ as the solution. I got $2x^4 - 8x^3 3 + 17x^2 9 - 14x27 + 126$ by applying the 2 and 5 after expanding, then combining like terms. I'm not sure where I messed up.
4. ## Re: Expanding and subtracting binomials
Originally Posted by zsf1990
I don't think I understand what you mean. The book gives $2x^4-24x^3+113x^2-246x+207$ as the solution. I got $2x^4 - 8x^3 3 + 17x^2 9 - 14x27 + 126$ by applying the 2 and 5 after expanding, then combining like terms. I'm not sure where I messed up.
5. ## Re: Expanding and subtracting binomials
Knowing the answer does me no good if I don't know how to get there. I know how to expand the binomial $(x - 3)^4$ and how to subtract the two expanded terms. What I don't understand is what to do with the 2 and the 5.
6. ## Re: Expanding and subtracting binomials
Originally Posted by zsf1990
Knowing the answer does me no good if I don't know how to get there. I know how to expand the binomial $(x - 3)^4$ and how to subtract the two expanded terms. What I don't understand is what to do with the 2 and the 5.
Once you expand each of the terms using the Binomial Theorem, multiply each term through by 2 or 5 (depending on which number is out the front).
However, Plato's advice gives you an easier alternative form to work with. | 661 | 2,252 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.34375 | 4 | CC-MAIN-2017-26 | longest | en | 0.917038 |