repo: https://github.com/iceghost/resume
file: https://raw.githubusercontent.com/iceghost/resume/main/5-projects/4-monoticity.typ
language: typst

=== Monotonicity Table Maker
#place(right + top)[Sep 2020]
/ Links: #link("https://iceghost.github.io/ve-bbt")[Demo]
#sym.dot.c #link("https://github.com/iceghost/ve-bbt")[Source].
This tool parses a simple monotonicity table construction language and
generates MathJax/LaTeX code, saving a lot of the time otherwise spent
making those tables by hand.
_Result_: My underclassmen and friends used and liked it.
This project was my first _usable_ software product. I learned functional
programming and its principles, such as pure functions and side effects. This
project helped me improve my programming skills and mindset.
repo: https://github.com/lkndl/typst-bioinfo-thesis
file: https://raw.githubusercontent.com/lkndl/typst-bioinfo-thesis/main/README.md
language: markdown

# typst-bioinfo-thesis
This is a [typst](https://typst.app/) thesis template with front matter for TUM+LMU [bioinformatics](https://www.cit.tum.de/cit/studium/studiengaenge/master-bioinformatik/abschlussarbeit/#c2494) and TUM [informatics](https://www.cit.tum.de/cit/studium/studierende/abschlussarbeit-abschluss/informatik/#c4295). Therefore, it generally supports English and German as main document languages. It comes with ready-to-use outlines, configurable page numbers adapting to front and back matter, as well as flexible headers that can imitate `scrbook`. I also implemented `sidecap` and a basic `wrapfig` equivalent.
Although perfectly workable, this template is still somewhat under development, just as Typst itself is. If you find a bug, please feel free to open an issue!
To get started, edit `main.typ` or make a new minimal `thesis.typ`:
```rs
#import "modules/template.typ": *
#show: doc.with(
  title: [all beginnings are hard],
  name: [silly old me])
= introduction
...
```
---
The TUM informatics and bioinformatics cover pages:
![tum cover pages](images/screen_00.png)
Table of contents with numbering up to level 2 headings, well-aligned fill characters and roman page numbers for the appendix:
![a dummy table of contents](images/screen_01.png)
![overkill header and wrap figure](images/screen_03.png "an overkill left-hand page header and a wrapfig")
![example header and caption](images/screen_02.png "right-hand page header with section info")
Defining a figure title for the list-of-figures is now less hacky:
```rs
#figure(
  image("/images/dingos.jpg", width: 100%),
  caption: flex-caption(
    [Another example full-width image],
    [. Consumers are generally unaware that ...]),
) <dingos>
```
![list of figures](images/screen_04.png)

repo: https://github.com/smorad/um_cisc_7026
file: https://raw.githubusercontent.com/smorad/um_cisc_7026/main/lecture_3_neural_networks.typ
language: typst

#import "@preview/polylux:0.3.1": *
#import themes.university: *
#import "@preview/cetz:0.2.2": canvas, draw, plot
#import "common.typ": *
// TODO: Missing x^2 term when we show polynomial+multivariate example (not 2^3, should be 3^2 + 1)
#set math.vec(delim: "[")
#set math.mat(delim: "[")
#let la = $angle.l$
#let ra = $angle.r$
#let redm(x) = {
text(fill: color.red, $#x$)
}
// TODO: Deeper neural networks are more efficient
// FUTURE TODO: Label design matrix as X bar instead of X_D in linear regression lectures
// FUTURE TODO: Should not waste m/n in linear regression, use c for count and d_x, d_y
// TODO: Fix nn image indices
// TODO: Implement XOR is transposed
// TODO: is xor network actually wide?
// TODO: Handle subscripts for input dim rather than sample
// TODO: Emphasize importance of very deep/wide nn
#let argmin_plot = canvas(length: 1cm, {
plot.plot(size: (8, 4),
x-tick-step: 1,
y-tick-step: 2,
{
plot.add(
domain: (-2, 2),
x => calc.pow(1 + x, 2),
label: $ (x + 1)^2 $
)
})
})
#show: university-theme.with(
aspect-ratio: "16-9",
short-title: "CISC 7026: Introduction to Deep Learning",
short-author: "<NAME>",
short-date: "Lecture 3: Neural Networks"
)
#title-slide(
title: [Neural Networks],
subtitle: "CISC 7026: Introduction to Deep Learning",
institution-name: "University of Macau",
//logo: image("logo.jpg", width: 25%)
)
#slide(title: [Notation Change])[
*Notation change:* Previously $x_i, y_i$ referred to data $i$ #pause
Moving forward, I will differentiate between *data* indices $x_[i]$ and other indices $x_i$ #pause
$ bold(X)_D = vec(bold(x)_[1], dots.v, bold(x)_[n]) = mat(x_([1], 1), x_([1], 2), dots; dots.v, dots.v, dots.v; x_([n], 1), x_([n], 2), dots) $ #pause
$ bold(x) = vec(x_1, x_2, dots.v), quad bold(X) = mat(x_(1,1), dots, x_(1, n); dots.v, dots.down, dots.v; x_(m, 1), dots, x_(m, n)) $
]
#let agenda(index: none) = {
  let ag = (
    [Review],
    [Multivariate linear regression],
    [Limitations of linear regression],
    [History of neural networks],
    [Biological neurons],
    [Artificial neurons],
    [Wide neural networks],
    [Deep neural networks],
    [Practical considerations]
  )
  for i in range(ag.len()) {
    if index == i {
      enum.item(i + 1)[#text(weight: "bold", ag.at(i))]
    } else {
      enum.item(i + 1)[#ag.at(i)]
    }
  }
}
#slide(title: [Agenda])[#agenda(index: none)]
#slide(title: [Agenda])[#agenda(index: 0)]
#slide(title: [Review])[
Since you are all highly educated, we focused on how education affects life expectancy #pause
Studies show a causal effect of education on health #pause
- _The causal effects of education on health outcomes in the UK Biobank._ Davies et al. _Nature Human Behaviour_. #pause
- By staying in school, you are likely to live longer #pause
- Being rich also helps, but education alone has a *causal* relationship with life expectancy
]
#slide(title: [Review])[
*Task:* Given your education, predict your life expectancy #pause
$X in bb(R)_+:$ Years in school #pause
$Y in bb(R)_+:$ Age of death #pause
$Theta in bb(R)^2:$ Parameters #pause
$ f: X times Theta |-> Y $ #pause
*Approach:* Learn the parameters $theta$ such that
$ f(x, theta) = y; quad x in X, y in Y $
]
#slide(title: [Review])[
Started with a linear function $f$ #pause
#align(center, grid(
columns: 2,
align: center,
column-gutter: 2em,
$ f(x, bold(theta)) = f(x, vec(theta_1, theta_0)) = theta_1 x + theta_0 $,
cimage("figures/lecture_2/example_regression_graph.png", height: 50%)
)) #pause
Then, we derived the square error function #pause
$ "error"(f(x, bold(theta)), y) = (f(x, bold(theta)) - y)^2 $
]
#slide(title: [Review])[
We wrote the loss function for a single datapoint $x_[i], y_[i]$ using the square error
$ cal(L)(x_[i], y_[i], bold(theta)) = "error"(f(x_[i], bold(theta)), y_[i]) = (f(x_[i], bold(theta)) - y_[i])^2 $ #pause
But we wanted to learn a model over *all* the data, not a single datapoint #pause
We wanted to make *new* predictions, to *generalize* #pause
$ bold(x) = mat(x_[1], x_[2], dots, x_[n])^top, bold(y) = mat(y_[1], y_[2], dots, y_[n])^top $ #pause
$
cal(L)(bold(x), bold(y), bold(theta)) = sum_(i=1)^n "error"(f(x_[i], bold(theta)), y_[i]) = sum_(i=1)^n (f(x_[i], bold(theta)) - y_[i])^2
$
]
#slide(title: [Review])[
Our objective was to find the parameters that minimized the loss function over the dataset #pause
We introduced the $argmin$ operator #pause
#side-by-side[ $f(x) = (x + 1)^2$][#argmin_plot] #pause
$ argmin_x f(x) = -1 $
]
#slide(title: [Review])[
With the $argmin$ operator, we formally wrote our optimization objective #pause
$
#text(fill: color.red)[$argmin_bold(theta)$] cal(L)(bold(x), bold(y), bold(theta)) &= #text(fill: color.red)[$argmin_bold(theta)$] sum_(i=1)^n "error"(f(x_[i], bold(theta)), y_[i]) \ &= #text(fill: color.red)[$argmin_bold(theta)$] sum_(i=1)^n (f(x_[i], bold(theta)) - y_[i])^2
$
]
#slide(title: [Review])[
We defined the design matrix $bold(X)_D$ #pause
$ bold(X)_D = mat(bold(x), bold(1)) = mat(x_[1], 1; x_[2], 1; dots.v, dots.v; x_[n], 1) $ #pause
We use the design matrix to find an *analytical* solution to the optimization objective #pause
$ bold(theta) = (bold(X)_D^top bold(X)_D )^(-1) bold(X)_D^top bold(y) $
]
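#slide(title: [Review])[
The analytical solution above is easy to sanity-check in numpy. A minimal sketch, not part of the original lecture code; the data are made up to lie exactly on $y = 2x + 60$, so the recovered parameters are known in advance:

```python
import numpy as np

# Hypothetical data lying exactly on y = 2x + 60
x = np.array([8.0, 12.0, 16.0, 20.0])
y = 2.0 * x + 60.0

# Design matrix X_D = [x, 1]
X_D = np.column_stack([x, np.ones_like(x)])

# theta = (X_D^T X_D)^{-1} X_D^T y, solved without forming the inverse
theta = np.linalg.solve(X_D.T @ X_D, X_D.T @ y)
# theta is approximately [2.0, 60.0]
```
]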
#slide(title: [Review])[
With this analytical solution, we were able to learn a linear model #pause
#cimage("figures/lecture_2/linear_regression.png", height: 60%)
]
#slide(title: [Review])[
Then, we used a trick to extend linear regression to nonlinear models #pause
$ bold(X)_D = mat(x_[1], 1; x_[2], 1; dots.v, dots.v; x_[n], 1) => bold(X)_D = mat(log(1 + x_[1]), 1; log(1 + x_[2]), 1; dots.v, dots.v; log(1 + x_[n]), 1) $
]
#slide(title: [Review])[
We extended to polynomials, which are *universal function approximators* #pause
$ bold(X)_D = mat(x_[1], 1; x_[2], 1; dots.v, dots.v; x_[n], 1) => bold(X)_D = mat(
x_[1]^m, x_[1]^(m-1), dots, x_[1], 1;
x_[2]^m, x_[2]^(m-1), dots, x_[2], 1;
dots.v, dots.v, dots.down, dots.v, dots.v;
x_[n]^m, x_[n]^(m-1), dots, x_[n], 1
) $ #pause
$ f: X times Theta |-> bb(R) $ #pause
$ Theta in bb(R)^2 => Theta in bb(R)^(m+1) $
]
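#slide(title: [Review])[
The polynomial design matrix above can be built in one call with numpy's `vander`; a small sketch with arbitrary sample points and order (not from the lecture code):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
m = 3  # polynomial order

# Columns x^m, x^(m-1), ..., x, 1, matching the design matrix above
X_D = np.vander(x, m + 1)
# The row for x = 2 is [8, 4, 2, 1]
```
]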
#slide(title: [Review])[
Finally, we discussed overfitting #pause
$ f(x, bold(theta)) = theta_m x^m + theta_(m - 1) x^(m - 1) + dots + theta_1 x^1 + theta_0 $ #pause
#grid(
columns: 3,
row-gutter: 1em,
image("figures/lecture_2/polynomial_regression_n2.png"),
image("figures/lecture_2/polynomial_regression_n3.png"),
image("figures/lecture_2/polynomial_regression_n5.png"),
$ m = 2 $,
$ m = 3 $,
$ m = 5 $
)
]
#slide(title: [Review])[
We care about *generalization* in machine learning #pause
So we should always split our dataset into a training dataset and a testing dataset #pause
#cimage("figures/lecture_2/train_test_regression.png", height: 60%)
]
// 16:00 fast
#slide[#agenda(index: 0)]
#slide[#agenda(index: 1)]
#slide[
Last time, we assumed a single-input system #pause
Years of education: $X in bb(R)$ #pause
But sometimes we want to consider multiple input dimensions #pause
Years of education, BMI, GDP: $X in bb(R)^3$ #pause
We can solve these problems using linear regression too
]
#slide[
For multivariate problems, we will define the input dimension as $d_x$ #pause
$ bold(x) in X; quad X in bb(R)^(d_x) $ #pause
We will write the vectors as
$ bold(x)_[i] = vec(
x_([i], 1),
x_([i], 2),
dots.v,
x_([i], d_x)
) $ #pause
$x_([i], 1)$ refers to the first dimension of training data $i$
]
#slide[
The design matrix for a *multivariate* linear system is
$ bold(X)_D = mat(
x_([1], d_x), x_([1], d_x - 1), dots, x_([1], 1), 1;
x_([2], d_x), x_([2], d_x - 1), dots, x_([2], 1), 1;
dots.v, dots.v, dots.down, dots.v, dots.v;
x_([n], d_x), x_([n], d_x - 1), dots, x_([n], 1), 1
) $ #pause
Remember $x_([n], d_x)$ refers to dimension $d_x$ of training data $n$ #pause
The solution is the same as before
$ bold(theta) = (bold(X)_D^top bold(X)_D )^(-1) bold(X)_D^top bold(y) $
]
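#slide[
The same recipe works for the multivariate case; a sketch with random made-up data and $d_x = 3$ (the columns here are ordered features-then-bias, which differs cosmetically from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_x = 100, 3

X = rng.normal(size=(n, d_x))
true_theta = np.array([2.0, -1.0, 0.5, 4.0])  # d_x weights plus a bias

# Design matrix: feature columns plus a column of ones
X_D = np.column_stack([X, np.ones(n)])
y = X_D @ true_theta

# Identical analytical solution to the univariate case
theta = np.linalg.solve(X_D.T @ X_D, X_D.T @ y)
```
]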
// 22:00 fast
#slide(title: [Agenda])[
#agenda(index: 1)
]
#slide(title: [Agenda])[
#agenda(index: 2)
]
#slide(title: [Limitations of Linear Regression])[
Linear models are useful for certain problems #pause
+ Analytical solution #pause
+ Low data requirement #pause
Issues arise with other problems #pause
+ Poor scalability #pause
+ Polynomials do not generalize well
]
#slide(title: [Limitations of Linear Regression])[
Issues arise with other problems
+ *Poor scalability*
+ Polynomials do not generalize well
]
#slide(title: [Limitations of Linear Regression])[
So far, we have seen: #pause
#side-by-side[
One-dimensional polynomial functions
$ bold(X)_D = mat(
x_[1]^m, x_[1]^(m-1), dots, x_[1], 1;
x_[2]^m, x_[2]^(m-1), dots, x_[2], 1;
dots.v, dots.v, dots.down, dots.v, dots.v;
x_[n]^m, x_[n]^(m-1), dots, x_[n], 1
) $ #pause][
Multi-dimensional linear functions
$ bold(X)_D = mat(
x_([1], d_x), x_([1], d_x - 1), dots, 1;
x_([2], d_x), x_([2], d_x - 1), dots, 1;
dots.v, dots.v, dots.down, dots.v;
x_([n], d_x), x_([n], d_x - 1), dots, 1
) $ #pause
]
Combine them to create multi-dimensional polynomial functions #pause
]
#slide(title: [Limitations of Linear Regression])[
Let us do an example #pause
#side-by-side[*Task:* predict how many #text(fill: color.red)[#sym.suit.heart] a photo gets on social media][#cimage("figures/lecture_1/dog.png", height: 30%)] #pause
$ f: X times Theta |-> Y; quad X: "Image", quad Y: "Number of " #redm[$#sym.suit.heart$] $ #pause
$ X in bb(Z)_+^(256 times 256) = bb(Z)_+^(65536); quad Y in bb(Z)_+ $ #pause
Highly nonlinear task, use a polynomial with order $m=20$
]
#slide(title: [Limitations of Linear Regression])[
$ bold(X)_D = mat(bold(x)_(D, [1]), dots, bold(x)_(D, [n]))^top $ #pause
$ &bold(x)_(D, [i]) = \ &mat(
underbrace(x_([i], d_x)^m x_([i], d_x - 1)^m dots x_([i], 1)^m, (d_x => 1, x^m)),
underbrace(x_([i], d_x)^m x_([i], d_x - 1)^m dots x_([i], 2)^m, (d_x => 2, x^m)),
dots,
underbrace(x_([i], d_x)^(m-1) x_([i], d_x - 1)^(m-1) dots x_([i], 1)^m, (d_x => 1, x^(m-1))),
dots,
)
$
*Question:* How many columns in this matrix? #pause
*Hint:* $d_x = 2, m = 3$: $x^3 + y^3 + x^2 y + y^2 x + x y + x + y + 1$ #pause
*Answer:* $(d_x)^m = 65536^20 approx 10^96$
]
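#slide(title: [Limitations of Linear Regression])[
The column count can be checked directly; a sketch using the slide's simplified $(d_x)^m$ count:

```python
d_x, m = 65536, 20

# Exact integer; Python has arbitrary-precision arithmetic
n_columns = d_x ** m

# 97 decimal digits, i.e. on the order of 10^96
n_digits = len(str(n_columns))
```
]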
#slide(title: [Limitations of Linear Regression])[
How big is $10^96$? #pause
*Question:* How many atoms are there in the universe? #pause
*Answer:* $10^82$ #pause
There is not enough matter in the universe to represent one row #pause
#side-by-side[We cannot predict how many #text(fill: color.red)[#sym.suit.heart] the picture will get][#cimage("figures/lecture_1/dog.png", height: 30%)] #pause
Polynomial regression does not scale to large inputs
]
#slide(title: [Limitations of Linear Regression])[
Issues arise with other problems
+ *Poor scalability*
+ Polynomials do not generalize well
]
#slide(title: [Limitations of Linear Regression])[
Issues arise with other problems
+ Poor scalability
+ *Polynomials do not generalize well*
]
#slide(title: [Limitations of Linear Regression])[
What happens to polynomials outside of the support (dataset)? #pause
Take the limit of polynomials to see their behavior #pause
#side-by-side[$ lim_(x -> oo) theta_m x^m + theta_(m-1) x^(m-1) + dots $][Equation of a polynomial] #pause
#side-by-side[$ lim_(x -> oo) x^m (theta_m + theta_(m-1) / x + dots) $][Factor out $x^m$] #pause
#side-by-side[$ lim_(x -> oo) x^m dot lim_(x-> oo) (theta_m + theta_(m-1) / x + dots) $][Split the limit (limit of products)]
]
#slide(title: [Limitations of Linear Regression])[
#side-by-side[$ lim_(x -> oo) x^m dot lim_(x-> oo) (theta_m + theta_(m-1) / x + dots) $][Split the limit (limit of products)]
#side-by-side[$ (lim_(x -> oo) x^m) dot (theta_m + 0 + dots) $][Evaluate right limit] #pause
#side-by-side[$ theta_m lim_(x -> oo) x^m $][Rewrite] #pause
#side-by-side[$ theta_m lim_(x -> oo) x^m = oo $][If $theta_m > 0$] #pause
#side-by-side[$ theta_m lim_(x -> oo) x^m = -oo $][If $theta_m < 0$]
]
#slide(title: [Limitations of Linear Regression])[
Polynomials quickly tend towards $-oo, oo$ outside of the support #pause
$ f(x) = x^3-2x^2-x+2 $ #pause
#cimage("figures/lecture_3/polynomial_generalize.png", height: 50%) #pause
Remember, to predict new data we want our functions to generalize
]
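#slide(title: [Limitations of Linear Regression])[
The cubic from the figure shows this numerically; a quick sketch (the evaluation points are chosen arbitrarily):

```python
import numpy as np

# f(x) = x^3 - 2x^2 - x + 2 = (x + 1)(x - 1)(x - 2)
coeffs = [1.0, -2.0, -1.0, 2.0]

# Inside the support the values are small (these are the roots)...
inside = np.polyval(coeffs, np.array([-1.0, 1.0, 2.0]))

# ...but just outside it, the x^3 term dominates
far_right = np.polyval(coeffs, 10.0)   # 792.0
far_left = np.polyval(coeffs, -10.0)   # -1188.0
```
]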
#slide(title: [Limitations of Linear Regression])[
Linear regression has issues #pause
+ Poor scalability #pause
+ Polynomials do not generalize well
]
// 38:00 fast
#slide(title: [Limitations of Linear Regression])[
We can use neural networks as an alternative to linear regression #pause
Neural network benefits: #pause
+ Scale to large inputs #pause
+ Slightly better generalization #pause
Drawbacks: #pause
+ No analytical solution #pause
+ High data requirement
//#cimage("figures/lecture_1/timeline.svg", height: 50%)
]
// 40:00 fast
#slide(title: [Agenda])[#agenda(index: 2)]
#slide(title: [Agenda])[#agenda(index: 3)]
#slide(title: [History of Neural Networks])[
From 1939 to 1945, there was a world war #pause
Militaries poured funding into research, and the computer was invented #pause
#cimage("figures/lecture_3/turing.jpg", height: 70%)
]
#slide(title: [History of Neural Networks])[
#side-by-side[Meanwhile, a neuroscientist and a mathematician (McCulloch and Pitts) were trying to understand the human brain][#cimage("figures/lecture_3/mccullough-pitts.png", height: 70%)] #pause
They designed the theory for the first neural network
]
#slide(title: [History of Neural Networks])[
Rosenblatt implemented this neural network theory on a computer a few years later #pause
#side-by-side[
At the time, computers were very slow and expensive
][#cimage("figures/lecture_3/original_nn.jpg", height: 70%)]
]
#slide(title: [History of Neural Networks])[
Through advances in theory and hardware, neural networks became slightly better #pause
#cimage("figures/lecture_1/timeline.svg", height: 40%) #pause
Around 2012, these improvements culminated in neural networks that perform like humans
]
#slide(title: [History of Neural Networks])[
So what is a neural network? #pause
It is a function, inspired by how the brain works #pause
$ f: X times Theta |-> Y $
]
#slide(title: [History of Neural Networks])[
Brains and neural networks rely on *neurons* #pause
*Brain:* Biological neurons $->$ Biological neural network #pause
*Computer:* Artificial neurons $->$ Artificial neural network #pause
First, let us review biological neurons #pause
*Note:* I am not a neuroscientist! I may make simplifications or errors with biology
]
#slide(title: [Agenda])[#agenda(index: 3)]
#slide(title: [Agenda])[#agenda(index: 4)]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
A simplified neuron consists of many parts
]
// 47:00 fast
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
Neurons send messages based on messages received from other neurons
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
Incoming electrical signals travel along dendrites
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
Electrical charges collect in the Soma (cell body)
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
The axon outputs an electrical signal to other neurons
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
The axon terminals will connect to dendrites of other neurons through a synapse
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/synapse.png", height: 60%)
The synapse converts electrical signal, to chemical signal, back to electrical signal #pause
Synaptic weight determines how well a signal crosses the gap
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
For our purposes, we can model the axon terminals, dendrites, and synapses as a single unit
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
The neuron takes many inputs, and produces a single output
]
#slide(title: [Biological Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg")
The neuron will only output a signal down the axon ("fire") at certain times
]
#slide(title: [Biological Neurons])[
How does a neuron decide to send an impulse ("fire")? #pause
#side-by-side[Incoming impulses (via dendrites) change the electric potential of the neuron][ #cimage("figures/lecture_3/bio_neuron_activation.png", height: 50%)] #pause
In a parallel circuit, we can sum voltages together #pause
Many active dendrites will add together and trigger an impulse
]
#slide(title: [Biological Neurons])[
#side-by-side[Pain triggers initial nerve impulse, starts a chain reaction into the brain][#cimage("figures/lecture_3/nervous-system.jpg")]
]
#slide(title: [Biological Neurons])[
#side-by-side[When the signal reaches the brain, we will think][#cimage("figures/lecture_3/nervous-system.jpg")]
]
#slide(title: [Biological Neurons])[
#side-by-side[After thinking, we will take action][#cimage("figures/lecture_3/nervous-system.jpg")]
]
// 57:00
#slide(title: [Agenda])[#agenda(index: 4)]
#slide(title: [Agenda])[#agenda(index: 5)]
#slide(title: [Artificial Neurons])[
#cimage("figures/lecture_3/neuron_anatomy.jpg", height: 50%) #pause
*Question:* How could we write a neuron as a function? $quad f: "___" |-> "___"$ #pause
*Answer*:
$ f: underbrace(bb(R)^(d_x), "Dendrite voltages") times underbrace(bb(R)^(d_x), "Synaptic weight") |-> underbrace(bb(R), "Axon voltage") $
]
#slide(title: [Artificial Neurons])[
Let us implement an artificial neuron as a function #pause
#side-by-side[#cimage("figures/lecture_3/neuron_anatomy.jpg")][
#only((2,3))[
Neuron has a structure of dendrites with synaptic weights
]
#only(3)[
$ f(
#redm[$vec(theta_1, theta_2, dots.v, theta_(d_x))$])
$
$ f(#redm[$bold(theta)$]) $
]
#only((4,5))[
Each incoming dendrite has some voltage potential
]
#only(5)[
$ f(#redm[$vec(x_(1), dots.v, x_(d_x))$], vec(theta_(1), dots.v, theta_(d_x)) ) $
$ f(#redm[$bold(x)$], bold(theta)) $
]
#only((6, 7))[
Voltage potentials sum together to give us the voltage in the cell body
]
#only(7)[
$ f(vec(x_(1), dots.v, x_(d_x)), vec(theta_(1), dots.v, theta_(d_x)) ) = #redm[$sum_(j=1)^(d_x) theta_j x_(j)$] $
$ f(bold(x), bold(theta)) = #redm[$bold(theta)^top bold(x)$] $
]
#only((8, 9, 10))[
The axon fires only if the voltage is over a threshold
]
#only((9, 10))[
$ sigma(x)= H(x) = #image("figures/lecture_3/heaviside.png", height: 30%) $
]
#only(10)[
$ f(vec(x_(1), dots.v, x_(d_x)), vec(theta_(1), dots.v, theta_(d_x)) ) = #redm[$sigma$] (sum_(j=1)^(d_x) theta_j x_(j) ) $
]
]
]
// 1:05
#slide(title: [Artificial Neurons])[
#side-by-side[Maybe we want to vary the activation threshold][#cimage("figures/lecture_3/bio_neuron_activation.png", height: 30%)][#image("figures/lecture_3/heaviside.png", height: 30%)] #pause
$ f(vec(#redm[$1$], x_(1), dots.v, x_(d_x)), vec(#redm[$theta_0$], theta_(1), dots.v, theta_(d_x)) ) = sigma(#redm[$theta_0$] + sum_(j=1)^(d_x) theta_j x_j) = sigma(sum_(#redm[$j=0$])^(d_x) theta_j x_j) $ #pause
$ overline(bold(x)) = vec(1, bold(x)), quad f(bold(x), bold(theta)) = sigma(bold(theta)^top overline(bold(x))) $
]
#slide(title: [Artificial Neurons])[
$ f(bold(x), bold(theta)) = sigma(bold(theta)^top overline(bold(x))) $ #pause
This is the artificial neuron! #pause
Let us write out the full equation for a neuron #pause
$ f(bold(x), bold(theta)) = sigma( theta_0 1 + theta_1 x_1 + dots + theta_(d_x) x_(d_x) ) $ #pause
*Question:* Does this look familiar to anyone? #pause
*Answer:* Inside $sigma$ is the multivariate linear model!
$ f(bold(x), bold(theta)) = theta_(d_x) x_(d_x) + theta_(d_x - 1) x_(d_x - 1) + dots + theta_0 1 $
]
#slide(title: [Artificial Neurons])[
We model a neuron using a linear model and activation function #pause
#side-by-side(gutter: 4em)[#cimage("figures/lecture_3/neuron_anatomy.jpg", height: 40%)
][
#cimage("figures/lecture_3/neuron.svg", height: 40%)]
$ f(bold(x), bold(theta)) = sigma(bold(theta)^top overline(bold(x))) $
]
#slide(title: [Artificial Neurons])[
$ f(bold(x), bold(theta)) = sigma(bold(theta)^top overline(bold(x))) $ #pause
Sometimes, we will write $bold(theta)$ as a bias and weight $b, bold(w)$ #pause
$ bold(theta) = vec(b, bold(w)); quad vec(theta_0, theta_1, dots.v, theta_(d_x)) = vec(b_" ", w_1, dots.v, w_(d_x)) $ #pause
$ f(bold(x), vec(b, bold(w))) = sigma(b + bold(w)^top bold(x)) $
]
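#slide(title: [Artificial Neurons])[
The full neuron is short in code; a sketch following the definitions above (the weights here are made up):

```python
import numpy as np

def heaviside(v):
    # sigma(v) = H(v): 1 if v > 0, else 0
    return np.where(v > 0, 1, 0)

def neuron(x, theta):
    # f(x, theta) = sigma(theta^T x_bar), with augmented input x_bar = [1, x]
    x_bar = np.concatenate([[1.0], x])
    return heaviside(theta @ x_bar)

theta = np.array([-1.0, 1.0, 1.0])  # bias theta_0 = -1, weights 1, 1
out = neuron(np.array([1.0, 1.0]), theta)  # fires: -1 + 1 + 1 = 1 > 0
```
]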
// 1:15
#focus-slide[Relax]
#slide(title: [Artificial Neurons])[
#side-by-side[#cimage("figures/lecture_3/neuron.svg") #pause][
#align(left)[
In machine learning, we represent functions #pause
What kinds of functions can our neuron represent? #pause
Let us consider some *boolean* functions #pause
Let us start with a logical AND function
]
]
]
#slide(title: [Artificial Neurons])[
#side-by-side[#cimage("figures/lecture_3/neuron.png")][
#align(left)[
*Review:* Activation function (Heaviside step function) #pause
#cimage("figures/lecture_3/heaviside.png", height: 50%)
$
sigma(x) = H(x) = cases(
1 "if" x > 0,
0 "if" x <= 0
)
$
]
]
]
#slide(title: [Artificial Neurons])[
Implement AND using an artificial neuron #pause
$ f(mat(x_1, x_2)^top, mat(theta_0, theta_1, theta_2)^top) = sigma(theta_0 1 + theta_1 x_1 + theta_2 x_2) $ #pause
$ bold(theta) = mat(theta_0, theta_1, theta_2)^top = mat(-1, 1, 1)^top $ #pause
#align(center, table(
columns: 5,
inset: 0.4em,
$x_1$, $x_2$, $y$, $f(x_1, x_2, bold(theta))$, $hat(y)$,
$0$, $0$, $0$, $sigma(-1 dot 1 + 1 dot 0 + 1 dot 0) = sigma(-1)$, $0$,
$0$, $1$, $0$, $sigma(-1 dot 1 + 1 dot 0 + 1 dot 1) = sigma(0)$, $0$,
$1$, $0$, $0$, $sigma(-1 dot 1 + 1 dot 1 + 1 dot 0) = sigma(0)$, $0$,
$1$, $1$, $1$, $sigma(-1 dot 1 + 1 dot 1 + 1 dot 1) = sigma(1)$, $1$
))
]
#slide(title: [Artificial Neurons])[
Implement OR using an artificial neuron #pause
$ f(mat(x_1, x_2)^top, mat(theta_0, theta_1, theta_2)^top) = sigma(theta_0 1 + theta_1 x_1 + theta_2 x_2) $ #pause
$ bold(theta) = mat(theta_0, theta_1, theta_2)^top = mat(0, 1, 1)^top $ #pause
#align(center, table(
columns: 5,
inset: 0.4em,
$x_1$, $x_2$, $y$, $f(x_1, x_2, bold(theta))$, $hat(y)$,
$0$, $0$, $0$, $sigma(0 dot 1 + 1 dot 0 + 1 dot 0) = sigma(0)$, $0$,
$0$, $1$, $1$, $sigma(0 dot 1 + 1 dot 0 + 1 dot 1) = sigma(1)$, $1$,
$1$, $0$, $1$, $sigma(0 dot 1 + 1 dot 1 + 1 dot 0) = sigma(1)$, $1$,
$1$, $1$, $1$, $sigma(0 dot 1 + 1 dot 1 + 1 dot 1) = sigma(2)$, $1$
))
]
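#slide(title: [Artificial Neurons])[
Both truth tables can be verified mechanically; a sketch using the weights from the AND and OR slides:

```python
import numpy as np

def neuron(x1, x2, theta):
    # f([x1, x2], theta) = H(theta_0 + theta_1 x1 + theta_2 x2)
    v = theta[0] + theta[1] * x1 + theta[2] * x2
    return 1 if v > 0 else 0

AND = np.array([-1.0, 1.0, 1.0])
OR = np.array([0.0, 1.0, 1.0])

results = [(neuron(x1, x2, AND), neuron(x1, x2, OR))
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# results == [(0, 0), (0, 1), (0, 1), (1, 1)]
```
]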
// Approx 1:30
#slide(title: [Artificial Neurons])[
Implement XOR using an artificial neuron #pause
$ f(mat(x_1, x_2)^top, mat(theta_0, theta_1, theta_2)^top) = sigma(theta_0 1 + theta_1 x_1 + theta_2 x_2) $ #pause
$ bold(theta) = mat(theta_0, theta_1, theta_2)^top = mat(?, ?, ?)^top $ #pause
#align(center, table(
columns: 5,
inset: 0.4em,
$x_1$, $x_2$, $y$, $f(x_1, x_2, bold(theta))$, $hat(y)$,
$0$, $0$, $0$, [This is IMPOSSIBLE!], $$,
$0$, $1$, $1$, $$, $$,
$1$, $0$, $1$, $$, $$,
$1$, $1$, $0$, $$, $$
))
]
#slide(title: [Artificial Neurons])[
Why can't we represent XOR using a neuron? #pause
$ f(mat(x_1, x_2)^top, mat(theta_0, theta_1, theta_2)^top) = sigma(1 theta_0 + x_1 theta_1 + x_2 theta_2) $ #pause
We can only represent $sigma("linear function")$ #pause
XOR is not a linear combination of $x_1, x_2$! #pause
We want to represent any function, not just linear functions #pause
Let us think back to biology, maybe it has an answer
]
#slide(title: [Artificial Neurons])[
*Brain:* Biological neurons $->$ Biological neural network #pause
*Computer:* Artificial neurons $->$ Artificial neural network
]
#slide(title: [Artificial Neurons])[
Connect artificial neurons into a network
#grid(
columns: 2,
align: center,
column-gutter: 2em,
cimage("figures/lecture_3/neuron.svg", width: 80%), cimage("figures/lecture_3/deep_network.png", height: 75%),
[Neuron], [Neural Network]
)
]
#slide(title: [Artificial Neurons])[
#side-by-side[
#cimage("figures/lecture_3/deep_network.png", width: 100%)
][
Adding neurons in *parallel* creates a *wide* neural network #pause
Adding neurons in *series* creates a *deep* neural network #pause
Today's powerful neural networks are both *wide* and *deep*
]
]
#slide(title: [Agenda])[#agenda(index: 5)]
#slide(title: [Agenda])[#agenda(index: 6)]
#slide(title: [Wide Neural Networks])[
How do we express a *wide* neural network mathematically? #pause
A single neuron:
$ f: bb(R)^(d_x) times Theta |-> bb(R) $
$ Theta in bb(R)^(d_x + 1) $ #pause
$d_y$ neurons (wide):
$ f: bb(R)^(d_x) times Theta |-> bb(R)^(d_y) $
$ Theta in bb(R)^((d_x + 1) times d_y) $
]
#slide(title: [Wide Neural Networks])[
For a single neuron:
$ f(vec(x_1, dots.v, x_(d_x)), vec(theta_0, theta_1, dots.v, theta_(d_x)) ) = sigma(sum_(i=0)^(d_x) theta_i overline(x)_i) $ #pause
$ f(bold(x), bold(theta)) = sigma(b + bold(w)^top bold(x)) $
]
#slide[
// Must be m by n (m rows, n cols)
#text(size: 24pt)[
For a wide network:
$ f(vec(x_1, x_2, dots.v, x_(d_x)), mat(theta_(0,1), theta_(0,2), dots, theta_(0,d_y); theta_(1,1), theta_(1,2), dots, theta_(1, d_y); dots.v, dots.v, dots.down, dots.v; theta_(d_x, 1), theta_(d_x, 2), dots, theta_(d_x, d_y)) ) = vec(
sigma(sum_(i=0)^(d_x) theta_(i,1) overline(x)_i ),
sigma(sum_(i=0)^(d_x) theta_(i,2) overline(x)_i ),
dots.v,
sigma(sum_(i=0)^(d_x) theta_(i,d_y) overline(x)_i ),
)
$
$ f(bold(x), bold(theta)) =
sigma(bold(theta)^top overline(bold(x))); quad bold(theta)^top in bb(R)^( d_y times (d_x + 1) )
$ #pause
$
f(bold(x), vec(bold(b), bold(W))) = sigma( bold(b) + bold(W)^top bold(x) ); quad bold(b) in bb(R)^(d_y), bold(W) in bb(R)^( d_x times d_y )
$
]
]
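#slide(title: [Wide Neural Networks])[
A wide layer is one matrix product; a sketch with arbitrary dimensions and random parameters (not from the lecture code):

```python
import numpy as np

def heaviside(v):
    return np.where(v > 0, 1, 0)

def wide_layer(x, b, W):
    # f(x, (b, W)) = sigma(b + W^T x), W of shape (d_x, d_y)
    return heaviside(b + W.T @ x)

d_x, d_y = 3, 5
rng = np.random.default_rng(0)
x = rng.normal(size=d_x)
b = rng.normal(size=d_y)
W = rng.normal(size=(d_x, d_y))

z = wide_layer(x, b, W)  # one binary output per neuron, shape (d_y,)
```
]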
#slide[#agenda(index: 6)]
#slide[#agenda(index: 7)]
#slide(title: [Deep Neural Networks])[
How do we express a *deep* neural network mathematically? #pause
A wide network and deep network have a similar function signature:
$ f: bb(R)^(d_x) times Theta |-> bb(R)^(d_y) $ #pause
But the parameters change!
Wide: $Theta in bb(R)^((d_x + 1) times d_y)$ #pause
Deep: $Theta in bb(R)^((d_x + 1) times d_h) times bb(R)^((d_h + 1) times d_h) times dots times bb(R)^((d_h + 1) times d_y)$ #pause
$ bold(theta) = mat(bold(theta)_1, bold(theta)_2, dots, bold(theta)_ell)^top = mat(bold(phi), bold(psi), dots, bold(xi))^top $
]
#slide(title: [Deep Neural Networks])[
A wide network:
$ f(bold(x), bold(theta)) = sigma(bold(theta)^top overline(bold(x))) $ #pause
A deep network has many internal functions
$ f_1(bold(x), bold(phi)) = sigma(bold(phi)^top overline(bold(x))) quad
f_2(bold(x), bold(psi)) = sigma(bold(psi)^top overline(bold(x))) quad
dots quad
f_(ell)(bold(x), bold(xi)) = sigma(bold(xi)^top overline(bold(x))) $ #pause
$ f(bold(x), bold(theta)) = f_(ell) (dots f_2(f_1(bold(x), bold(phi)), bold(psi)) dots bold(xi) ) $
]
#slide(title: [Deep Neural Networks])[
Written another way
$ bold(z)_1 = f_1(bold(x), bold(phi)) = sigma(bold(phi)^top overline(bold(x))) $ #pause
$ bold(z)_2 = f_2(bold(z)_1, bold(psi)) = sigma(bold(psi)^top overline(bold(z))_1) $ #pause
$ dots.v $ #pause
$ bold(y) = f_(ell)(bold(z)_(ell - 1), bold(xi)) = sigma(bold(xi)^top overline(bold(z))_(ell - 1)) $
We call each function a *layer* #pause
A deep neural network is made of many layers
]
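#slide(title: [Deep Neural Networks])[
Layer composition makes this concrete; a sketch of an $ell = 3$ network with arbitrary widths, random parameters, and the Heaviside activation from above:

```python
import numpy as np

def heaviside(v):
    return np.where(v > 0, 1.0, 0.0)

def layer(z, b, W):
    # One layer: sigma(b + W^T z)
    return heaviside(b + W.T @ z)

d_x, d_h, d_y = 4, 8, 2
rng = np.random.default_rng(0)

# theta = (phi, psi, xi): one (bias, weight) pair per layer
params = [(rng.normal(size=d_h), rng.normal(size=(d_x, d_h))),
          (rng.normal(size=d_h), rng.normal(size=(d_h, d_h))),
          (rng.normal(size=d_y), rng.normal(size=(d_h, d_y)))]

z = rng.normal(size=d_x)  # the input x
for b, W in params:
    z = layer(z, b, W)    # z_1, z_2, then y
```
]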
/*
#slide[
Implement XOR using a deep neural network #pause
$ f(x_1, x_2, bold(theta)) = sigma( & theta_(3, 0) \
+ & theta_(3, 1) quad dot quad sigma(theta_(1,0) + x_1 theta_(1,1) + x_2 theta_(1,2)) \
+ & theta_(3, 2) quad dot quad sigma(theta_(2,0) + x_1 theta_(2,1) + x_2 theta_(2,2))) $ #pause
$ bold(theta) = mat(
theta_(1,0), theta_(1,1), theta_(1,2);
theta_(2,0), theta_(2,1), theta_(2,2);
theta_(3,0), theta_(3,1), theta_(3,2)
) = mat(
-0.5, 1, 1;
-1.5, 1, 1;
-0.5, 1, -2
) $ #pause
]
*/
/*
#slide(title: [Deep Neural Networks])[
What functions can we represent using a deep neural network? #pause
Consider a one-dimensional arbitrary function $g(x) = y$ #pause
We can approximate $g$ using our neural network $f$ #pause
$ f(x_1, x_2, bold(theta)) = sigma( & theta_(3, 0) \
+ & theta_(3, 1) quad dot quad sigma(theta_(1,0) + x_1 theta_(1,1) + x_2 theta_(1,2)) \
+ & theta_(3, 2) quad dot quad sigma(theta_(2,0) + x_1 theta_(2,1) + x_2 theta_(2,2))) $
]
*/
#slide(title: [Deep Neural Networks])[
What functions can we represent using a deep neural network? #pause
*Proof Sketch:* Approximate a continuous function $g: bb(R) |-> bb(R)$ using a linear combination of Heaviside functions #pause
#only(2)[#cimage("figures/lecture_3/function_noapproximation.svg", height: 50%)]
#only((3,4))[#cimage("figures/lecture_3/function_approximation.svg", height: 50%)]
#only(4)[$exists (bold(theta) in bb(R)^(1 times d_h), bold(phi) in bb(R)^((d_h + 1) times 1)) "such that" lim_(d_h -> oo) [ bold(phi)^top sigma(overline(bold(theta)^top overline(x)))] = g(x)$
]
]
#slide(title: [Deep Neural Networks])[
A deep neural network is a *universal function approximator* #pause
It can approximate *any* continuous function $g(x)$ to precision $epsilon$ #pause
$ | g(bold(x)) - f(bold(x), bold(theta)) | < epsilon $ #pause
Making the network deeper or wider decreases $epsilon$ #pause
#align(center)[#underline[Very powerful finding! The basis of deep learning.]] #pause
#side-by-side[*Task:* predict how many #text(fill: color.red)[#sym.suit.heart] a photo gets on social media][#cimage("figures/lecture_1/dog.png", height: 30%)]
]
#slide(title: [Agenda])[#agenda(index: 7)]
#slide(title: [Agenda])[#agenda(index: 8)]
#slide(title: [Practical Considerations])[
We call wide neural networks *perceptrons* #pause
We call deep neural networks *multi-layer perceptrons* (MLP) #pause
#cimage("figures/lecture_3/timeline.svg", width: 85%)
]
#slide(title: [Practical Considerations])[
*All* the models we examine in this course will use MLPs #pause
- Recurrent neural networks #pause
- Graph neural networks #pause
- Transformers #pause
- Chatbots #pause
It is very important to understand MLPs! #pause
I will explain them again very simply
]
#slide(title: [Practical Considerations])[
A *layer* is a linear operation and an activation function
$ f(bold(x), vec(bold(b), bold(W))) = sigma(bold(b) + bold(W)^top bold(x)) $
#side-by-side[Many layers makes a deep neural network][
#text(size: 22pt)[
$ bold(z)_1 &= f(bold(x), vec(bold(b)_1, bold(W)_1)) \
      bold(z)_2 &= f(bold(z)_1, vec(bold(b)_2, bold(W)_2)) \ quad bold(y) &= f(bold(z)_2, vec(bold(b)_3, bold(W)_3)) $
]
]
]
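A minimal NumPy sketch of these three stacked layers (illustrative only; the logistic function stands in for $sigma$, and all shapes and random weights are made up for the example):

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))  # logistic activation as a stand-in

def layer(x, b, W):
    # f(x, (b, W)) = sigma(b + W^T x)
    return sigma(b + W.T @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                # input
b1, W1 = rng.normal(size=4), rng.normal(size=(3, 4))  # layer 1
b2, W2 = rng.normal(size=4), rng.normal(size=(4, 4))  # layer 2
b3, W3 = rng.normal(size=1), rng.normal(size=(4, 1))  # output layer

z1 = layer(x, b1, W1)
z2 = layer(z1, b2, W2)
y = layer(z2, b3, W3)
print(y.shape)  # (1,)
```

Each layer is just a linear map followed by an elementwise nonlinearity; composing them gives the deep network.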
#slide(title: [Practical Considerations])[
Let us create a wide neural network in colab! https://colab.research.google.com/drive/1bLtf3QY-yROIif_EoQSU1WS7svd0q8j7?usp=sharing
]
#slide(title: [Practical Considerations])[
#side-by-side(align: left + top)[
Linear regression: #pause
$+$ Analytical solution #pause
$+$ Low data requirement #pause
$-$ Poor scalability #pause
$-$ Poor polynomials generalization #pause
][
Neural networks: #pause
$-$ No analytical solution #pause
$-$ High data requirement #pause
$+$ Scale to large inputs #pause
$+$ Slightly better generalization #pause
]
Next time, we will find out how to train our neural network #pause
  Unlike linear regression, finding $bold(theta)$ is much more difficult for neural networks
]
#slide[#agenda(index: none)]
#slide(title: [Conclusion])[
There might be a quiz next time #pause
Always bring paper and a pen #pause
You should be able to write a neural network layer mathematically #pause
You should also know the shapes of $bold(theta), bold(x), overline(bold(x)), bold(y)$
]
/*
#slide[
#text(size: 21pt)[
```python
import torch
from torch import nn

class MyNetwork(nn.Module):
    def __init__(self):
        super().__init__()  # Required by pytorch
        self.input_layer = nn.Linear(5, 3)  # 3 neurons, 5 inputs each
        self.output_layer = nn.Linear(3, 1)  # 1 neuron with 3 inputs

    def forward(self, x):
        # torch.heaviside needs a `values` tensor giving the output at 0
        z = torch.heaviside(self.input_layer(x), torch.tensor(0.5))
        y = self.output_layer(z)
        return y
```
]
]
#slide[
#text(size: 21pt)[
```python
import jax, equinox
from jax import numpy as jnp
from equinox import nn

class MyNetwork(equinox.Module):
    input_layer: nn.Linear  # Required by equinox
    output_layer: nn.Linear

    def __init__(self):
        self.input_layer = nn.Linear(5, 3, key=jax.random.PRNGKey(0))
        self.output_layer = nn.Linear(3, 1, key=jax.random.PRNGKey(1))

    def __call__(self, x):
        # jnp.heaviside takes a second argument: the value returned at 0
        z = jnp.heaviside(self.input_layer(x), 0.5)
        y = self.output_layer(z)
        return y
```
]
]
#slide[
#side-by-side[#cimage("figures/neuron.png", width: 80%)][#cimage("figures/heaviside.png", height: 50%)] #pause
*Question:* What kind of functions can we represent with our neuron? #pause
*Hint:* The neuron is linear regression with an activation function
]
#slide[
#side-by-side[#cimage("figures/neuron.png", width: 80%)][#cimage("figures/heaviside.png", height: 50%)] #pause
*Answer:* Linear functions with cutoff
]
#slide[
#side-by-side[#cimage("figures/neuron.png") #pause][
The output of the neuron depends on the activation function $sigma$
]
]
#slide[
#side-by-side[#cimage("figures/neuron.png") #pause][
*Question:* What functions can a single neuron represent?
*Hint:* Think back to linear regression #pause
*Answer:*
]
]
#slide[
#side-by-side[#cimage("figures/neuron.png") #pause][
Many biological neurons (brain) $->$ many artificial neurons (deep neural network)
]
]
*/
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/visualize/shape-aspect-06.typ | typst | Other | // Size cannot be relative because we wouldn't know
// relative to which axis.
// Error: 15-18 expected length or auto, found ratio
#square(size: 50%)
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/text/smartquotes.typ | typst | Apache License 2.0 | // Test setting custom smartquotes
---
// Use language quotes for missing keys, allow partial reset
#set smartquote(quotes: "«»")
"Double and 'Single' Quotes"
#set smartquote(quotes: (double: auto, single: "«»"))
"Double and 'Single' Quotes"
---
// Allow 2 graphemes
#set smartquote(quotes: "a\u{0301}a\u{0301}")
"Double and 'Single' Quotes"
#set smartquote(quotes: (single: "a\u{0301}a\u{0301}"))
"Double and 'Single' Quotes"
---
// Error: 25-28 expected 2 characters, found 1 character
#set smartquote(quotes: "'")
---
// Error: 25-35 expected 2 quotes, found 4 quotes
#set smartquote(quotes: ("'",) * 4)
---
// Error: 25-45 expected 2 quotes, found 4 quotes
#set smartquote(quotes: (single: ("'",) * 4))
|
https://github.com/RY997/Thesis | https://raw.githubusercontent.com/RY997/Thesis/main/thesis_typ/abstract_en.typ | typst | MIT License | #let abstract_en() = {
set page(
margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm),
numbering: none,
number-align: center,
)
let body-font = "New Computer Modern"
let sans-font = "New Computer Modern Sans"
set text(
font: body-font,
size: 12pt,
lang: "en"
)
set par(
leading: 1em,
justify: true
)
// --- Abstract (DE) ---
v(1fr)
align(center, text(font: body-font, 1em, weight: "semibold", "Abstract"))
text[
In response to the challenges posed by larger class sizes in computer science education, Artemis, an open-source learning platform, adopts interactive learning principles to enhance scalability and adaptability. However, the platform faces shortcomings in catering to the diverse skill levels and learning needs of students. This motivates the exploration of adaptive programming exercise generation, aiming to offer tailored challenges for individual learners. Adaptive exercises have the potential to both challenge advanced students and support beginners effectively, fostering intellectual growth and confidence. Leveraging advancements in Artificial Intelligence (AI), particularly Large Language Models (LLMs), this thesis introduces a novel approach to create adaptive programming exercises. The objectives include the development of a chatbot named Iris for intelligent exercise planning, enabling LLMs for dynamic exercise change plan execution, and seamlessly integrating exercise changes into the Artemis platform. This innovative methodology promises to revolutionize computer science education by providing personalized learning experiences and promoting continuous growth among students.
]
v(1fr)
} |
https://github.com/Mc-Zen/quill | https://raw.githubusercontent.com/Mc-Zen/quill/main/examples/fault-tolerant-toffoli2.typ | typst | MIT License | #import "../src/quill.typ": *
#let group = gategroup.with(stroke: (dash: "dotted", thickness: .5pt))
#quantum-circuit(
fill-wires: false,
group(3, 3, padding: (left: 1.5em)), lstick($|0〉$), $H$, ctrl(2), ctrl(3), 3,
group(2, 1),ctrl(1), 1, group(3, 1), ctrl(2), $X$, 1, rstick($|x〉$), [\ ],
lstick($|0〉$), $H$, ctrl(0), 1, ctrl(3), 2, $Z$, $X$, 2, group(2, 1),
ctrl(1), rstick($|y〉$), [\ ],
lstick($|0〉$), 1, targ(), 2, targ(), 1, mqgate($Z$, target: -1, wire-count: 2), 1,
targ(fill: auto), 1, targ(fill: auto), rstick($|z plus.circle x y〉$), [\ ],
lstick($|x〉$), 2, targ(), 6, meter(target: -3), setwire(2), ctrl(-1, wire-count: 2), [\ ],
lstick($|y〉$), 3, targ(), 3, meter(target: -3), setwire(2), ctrl(-2, wire-count: 2), [\ ],
lstick($|z〉$), 4, ctrl(-3), $H$, meter(target: -3)
) |
https://github.com/T1mVo/shadowed | https://raw.githubusercontent.com/T1mVo/shadowed/main/examples/lorem.typ | typst | MIT License | #import "../src/lib.typ": shadowed
#set page(margin: 15pt, height: auto)
#set par(justify: true)
#shadowed(radius: 4pt, inset: 12pt)[
#lorem(50)
]
|
https://github.com/sinchang/typst-react | https://raw.githubusercontent.com/sinchang/typst-react/master/resume.typ | typst | // https://github.com/skyzh/typst-cv-template/blob/master/cv.typ
#show heading: set text(font: "Linux Biolinum")
#show link: underline
#set page(
margin: (x: 0.9cm, y: 1.3cm),
)
#set par(justify: true)
#let chiline() = {v(-3pt); line(length: 100%); v(-5pt)}
= <NAME>
<EMAIL> |
#link("https://github.com/example")[github.com/example] | #link("https://example.com")[example.com]
== Education
#chiline()
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(10)
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(10)
== Work Experience
#chiline()
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(20)
- #lorem(30)
- #lorem(40)
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(20)
- #lorem(30)
- #lorem(40)
== Projects
#chiline()
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(20)
- #lorem(30)
- #lorem(40)
*#lorem(2)* #h(1fr) 2333/23 -- 2333/23 \
#lorem(5) #h(1fr) #lorem(2) \
- #lorem(20)
- #lorem(30)
- #lorem(40) |
|
https://github.com/jamesrswift/dining-table | https://raw.githubusercontent.com/jamesrswift/dining-table/main/tests/topdown/header-none/test.typ | typst | The Unlicense | #import "../ledger.typ": *
#set text(size: 11pt)
#set page(height: 3.5cm, margin: 1em)
#dining-table.make(columns: example,
header: none,
data: data,
notes: dining-table.note.display-list
)
#dining-table.make(columns: example,
data: data,
notes: dining-table.note.display-list
) |
https://github.com/jneug/schule-typst | https://raw.githubusercontent.com/jneug/schule-typst/main/src/wp.typ | typst | MIT License | #import "_imports.typ": *
#let wochenplan(
..args,
body,
) = {
let (doc, page-init, tpl) = base-template(
type: "WP",
type-long: "Wochenplan",
title-block: doc => {
heading(level: 1, outlined: false, bookmarked: false, doc.title)
grid(
columns: (auto, 1fr),
align: center + horizon,
column-gutter: 5pt,
image(width: 1.5cm, "assets/calendar.svg"),
container(radius: 4pt, fill: theme.muted, stroke: 0pt)[
#set align(center)
#set text(1.2em, white)
#show heading: set text(white)
*#doc.from.display("[day].[month].[year]") bis #doc.to.display("[day].[month].[year]")*
],
)
},
_tpl: (
options: (
from: t.date(
pre-transform: t.coerce.date,
default: datetime.today(),
),
to: t.date(
optional: true,
pre-transform: (self, it) => {
if it != none {
return t.coerce.date(self, it)
} else {
let _today = datetime.today()
return datetime(
year: _today.year(),
month: _today.month(),
day: _today.day(),
)
}
},
),
),
aliases: (
von: "from",
bis: "to",
),
),
..args,
body,
)
{
show: page-init
tpl
}
}
#let gruppe(titel, beschreibung, body) = container(
radius: 6pt,
fill: theme.bg.muted,
stroke: 1.5pt + luma(120),
title-style: (boxed-style: (:)),
title: text(weight: "bold", hyphenate: false, size: .88em, titel),
)[#small(beschreibung)#container(fill: white, radius: 3pt, stroke: 1pt + luma(120), body)]
|
https://github.com/01mf02/jq-lang-spec | https://raw.githubusercontent.com/01mf02/jq-lang-spec/main/tour.typ | typst | #import "common.typ": example
= Tour of jq <tour>
This goal of this section is to convey an intuition about how jq functions.
The official documentation of jq is its user manual @jq-manual.
jq programs are called _filters_.
For now, let us consider a filter to be a function from a value to
a (lazy, possibly infinite) stream of values.
Furthermore, in this section, let us assume a value to be either
a boolean, an integer, or an array of values.
(We introduce the full set of JSON values in @json.)
The identity filter "`.`" returns a stream containing the input.#footnote[
The filters in this section can be executed on most UNIX shells by
`echo $INPUT | jq $FILTER`, where
`$INPUT` is the input value in JSON format and
`$FILTER` is the jq program to be executed.
Often, it is convenient to quote the filter; for example,
to run the filter "`.`" with the input value `0`,
we can run `echo 0 | jq '.'`.
In case where the input value does not matter,
we can also use `jq -n $FILTER`,
which runs the filter with the input value `null`.
We use jq 1.7.
]
Arithmetic operations, such as
addition, subtraction, multiplication, division, and remainder,
are available in jq.
For example, "`. + 1`" returns a stream containing the successor of the input.
Here, "`1`" is a filter that returns the value `1` for any input.
Concatenation is an important operator in jq:
The filter "`f, g`" concatenates the outputs of the filters `f` and `g`.
For example, the filter "`., .`" returns a stream containing the input value twice.
Composition is one of the most important operators in jq:
The filter "`f | g`" maps the filter `g` over all outputs of the filter `f`.
For example, "`(1, 2, 3) | (. + 1)`" returns `2, 3, 4`.
Arrays are created from a stream produced by `f` using the filter "`[f]`".
For example, the filter "`[1, 2, 3]`"
concatenates the output of the filters "`1`", "`2`", and "`3`" and puts it into an array,
yielding the value `[1, 2, 3]`.
The inverse filter "`.[]`" returns a stream containing the values of an array
if the input is an array.
For example, running "`.[]`" on the array `[1, 2, 3]` yields
the stream `1, 2, 3` consisting of three values.
We can combine the two shown filters to map over arrays;
for example, when given the input `[1, 2, 3]`,
the filter "`[.[] | (. + 1)]`" returns a single value `[2, 3, 4]`.
The values of an array at indices produced by `f` are returned by "`.[f]`".
For example, given the input `[1, 2, 3]`, the filter "`.[0, 2, 0]`"
returns the stream `1, 3, 1`.
Case distinctions can be performed with the filter "`if f then g else h end`".
For every value `v` produced by `f`, this filter
returns the output of `g` if `v` is true and the output of `h` otherwise.
For example, given the input `1`,
the filter "`if (. < 1, . == 1, . >= 1) then . else [] end`" returns `[], 1, 1`.
We can define filters by using the syntax "`def f(x1; ...; xn): g;`",
which defines an filter `f` taking `n` arguments by `g`,
where `g` can refer to `x1` to `xn`.
For example, jq provides the filter "`recurse(f)`" to calculate fix points,
which could be defined by "`def recurse(f): ., (f | recurse(f));`".
Using this, we can define a filter to calculate the factorial function, for example.
#example("Factorial")[
Let us define a filter `fac` that should return $n!$ for any input number $n$.
We will define `fac` using the fix point of a filter `update`.
The input and output of `update` shall be an array `[n, acc]`,
satisfying the invariant that the final output is `acc` times the factorial of `n`.
The initial value passed to `update` is the array "`[., 1]`".
We can retrieve `n` from the array with "`.[0]`" and `acc` with "`.[1]`".
We can now define `update` as "`if .[0] > 1 then [.[0] - 1, .[0] * .[1]] else empty end`",
where "`empty`" is a filter that returns an empty stream.
Given the input value `4`, the filter "`[., 1] | recurse(update)`" returns
`[4, 1], [3, 4], [2, 12], [1, 24]`.
We are, however, only interested in the accumulator contained in the last value.
So we can write "`[., 1] | last(recurse(update)) | .[1]`", where
"`last(f)`" is a filter that outputs the last output of `f`.
This then yields a single value `24` as result.
] <ex:fac>
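The stream semantics of this example can be emulated with Python generators (a sketch of the semantics only, not of jq's implementation; the helper names are chosen to mirror the filters above):

```python
def update(state):
    # jq: if .[0] > 1 then [.[0] - 1, .[0] * .[1]] else empty end
    n, acc = state
    if n > 1:
        yield [n - 1, n * acc]

def recurse(f, value):
    # jq: def recurse(f): ., (f | recurse(f));
    yield value
    for out in f(value):
        yield from recurse(f, out)

def last(stream):
    # jq: last(f) yields only the final output of f
    out = None
    for out in stream:
        pass
    return out

states = list(recurse(update, [4, 1]))
print(states)   # [[4, 1], [3, 4], [2, 12], [1, 24]]
result = last(recurse(update, [4, 1]))[1]
print(result)   # 24
```

The stream of states matches the jq output shown above, and extracting the accumulator from the last state yields `24`.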
Composition can also be used to bind values to _variables_.
The filter "`f as $x | g`" performs the following:
Given an input value `i`,
for every output `o` of the filter `f` applied to `i`,
the filter binds the variable `$x` to the value `o`, making it accessible to `g`, and
yields the output of `g` applied to the original input value `i`.
For example, the filter "`(0, 2) as $x | ((1, 2) as $y | ($x + $y))`"
yields the stream `1, 2, 3, 4`.
Note that in this particular case, we could also write this as "`(0, 2) + (1, 2)`",
because arithmetic operators such as "`f + g`" take as inputs
the Cartesian product of the output of `f` and `g`.
#footnote[
#set raw(lang: "haskell")
Haskell users might appreciate the similarity of the two filters
to their Haskell analoga
"`[0, 2] >>= (\x -> [1, 2] >>= (\y -> return (x+y)))`" and
"`(+) <$> [0, 2] <*> [1, 2]`", which both return
`[1, 2, 3, 4]`.
]
However, there are cases where variables are indispensable.
#example("Variables Are Necessary")[
jq defines a filter "`in(xs)`" that expands to "`. as $x | xs | has($x)`".
Given an input value `i`, "`in(xs)`" binds it to `$x`, then returns
for every value produced by `xs` whether its domain contains `$x` (and thus `i`).
Here, the domain of an array is the set of its indices.
For example, for the input
`1`, the filter
"`in([5], [42, 3], [])`" yields the stream
`false, true, false`,
because only `[42, 3]` has a length greater than 1 and thus a domain that contains `1`.
The point of this example is that
we wish to pass `xs` as input to `has`, but at the same point,
we also want to pass the input given to `in` as an argument to `has`.
Without variables, we could not do both.
]
Folding over streams can be done using `reduce` and `foreach`:
The filter "`reduce xs as $x (init; f)`" keeps
a state that is initialised with the output of `init`.
For every element `$x` yielded by the filter `xs`,
`reduce` feeds the current state to the filter `f`, which may reference `$x`,
then sets the state to the output of `f`.
When all elements of `xs` have been yielded, `reduce` returns the current state.
For example, the filter "`reduce .[] as $x (0; . + $x)`"
calculates the sum over all elements of an array.
Similarly, "`reduce .[] as $x (0; . + 1)`" calculates the length of an array.
These two filters are called "`add`" and "`length`" in jq, and
they allow to calculate the average of an array by "`add / length`".
The filter "`foreach xs as $x (init; f)`" is similar to `reduce`,
but also yields all intermediate states, not only the last state.
For example, "`foreach .[] as $x (0; . + $x)`"
yields the cumulative sum over all array elements.
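In Python terms (an informal analogy, not jq itself), `reduce` corresponds to a fold that keeps only the final state, while `foreach` corresponds to `itertools.accumulate`, which also yields every intermediate state:

```python
from itertools import accumulate

xs = [1, 2, 3, 4]

# jq: reduce .[] as $x (0; . + $x)  -- only the final state
reduce_out = 0
for x in xs:
    reduce_out = reduce_out + x

# jq: foreach .[] as $x (0; . + $x) -- every intermediate state
foreach_out = list(accumulate(xs))

print(reduce_out)   # 10
print(foreach_out)  # [1, 3, 6, 10]
```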
Updating values can be done with the operator "`|=`",
which has a similar function as lens setters in languages such as Haskell
#cite(label("DBLP:conf/icfp/FosterPP08"))
#cite(label("DBLP:conf/popl/FosterGMPS05"))
#cite(label("DBLP:journals/programming/PickeringGW17")):
Intuitively, the filter "`p |= f`" considers any value `v` returned by `p` and
replaces it by the output of `f` applied to `v`.
We call a filter on the left-hand side of "`|=`" a _path expression_.
For example, when given the input `[1, 2, 3]`,
the filter "`.[] |= (. + 1)`" yields `[2, 3, 4]`, and
the filter "`.[1] |= (. + 1)`" yields `[1, 3, 3]`.
We can also nest these filters;
for example, when given the input `[[1, 2], [3, 4]]`,
the filter "`(.[] | .[]) |= (. + 1)`" yields `[[2, 3], [4, 5]]`.
However, not every filter is a path expression; for example,
the filter "`1`" is not a path expression because
"`1`" does not point to any part of the input value
but creates a new value.
Identities such as
"`.[] |= f`" being equivalent to "`[.[] | f]`" when the input value is an array, or
"`. |= f`" being equivalent to `f`,
would allow defining the behaviour of updates.
However, these identities do not hold in jq due the way it
handles filters `f` that return multiple values.
In particular, when we pass `0` to the filter "`. |= (1, 2)`",
the output is `1`, not `(1, 2)` as we might have expected.
Similarly, when we pass `[1, 2]` to the filter "`.[] |= (., .)`",
the output is `[1, 2]`, not `[1, 1, 2, 2]` as expected.
This behaviour of jq is cumbersome to define and to reason about.
This motivates in part the definition of more simple and elegant semantics
that behave like jq in most typical use cases
but eliminate corner cases like the ones shown.
We will show such semantics in @updates.
|
|
https://github.com/rangerjo/tutor | https://raw.githubusercontent.com/rangerjo/tutor/main/imgs/example.typ | typst | MIT License | #set page(
width: auto,
height: auto,
margin: (x: 0cm),
)
#table(
columns: 2,
stroke: none,
align: center,
[ Question Mode ], [ Solution Mode ],
[#rect(stroke: 2pt+blue,
image("../example/build/example_question_mode.svg", width: 12cm)
)],
[#rect(stroke: 2pt+green,
image("../example/build/example_solution_mode.svg", width: 12cm)
)]
)
|
https://github.com/lf-/typst-algorithmic | https://raw.githubusercontent.com/lf-/typst-algorithmic/main/algorithmic.typ | typst | // SPDX-FileCopyrightText: 2023 <NAME>
//
// SPDX-License-Identifier: MIT
/*
* Generated AST:
* (change_indent: int, body: ((ast | content)[] | content | ast)
*/
#let ast_to_content_list(indent, ast) = {
if type(ast) == "array" {
ast.map(d => ast_to_content_list(indent, d))
} else if type(ast) == "content" {
(pad(left: indent * 0.5em, ast),)
} else if type(ast) == "dictionary" {
let new_indent = ast.at("change_indent", default: 0) + indent
ast_to_content_list(new_indent, ast.body)
}
}
#let algorithm(..bits) = {
let content = bits.pos().map(b => ast_to_content_list(0, b)).flatten()
let table_bits = ()
let lineno = 1
while lineno <= content.len() {
table_bits.push([#lineno:])
table_bits.push(content.at(lineno - 1))
lineno = lineno + 1
}
table(
columns: (18pt, 100%),
// line spacing
inset: 0.3em,
stroke: none,
..table_bits
)
}
#let iflike_block(kw1: "", kw2: "", cond: "", ..body) = (
(strong(kw1) + " " + cond + " " + strong(kw2)),
// XXX: .pos annoys me here
(change_indent: 4, body: body.pos())
)
#let function_like(name, kw: "function", args: (), ..body) = (
iflike_block(kw1: kw, cond: (smallcaps(name) + "(" + args.join(", ") + ")"), ..body)
)
#let listify(v) = {
  if type(v) == "array" {
v
} else {
(v,)
}
}
#let Function = function_like.with(kw: "function")
#let Procedure = function_like.with(kw: "procedure")
#let State(block) = ((body: block),)
/// Inline call
#let CallI(name, args) = smallcaps(name) + "(" + listify(args).join(", ") + ")"
#let Call(..args) = (CallI(..args),)
#let FnI(f, args) = strong(f) + " (" + listify(args).join(", ") + ")"
#let Fn(..args) = (FnI(..args),)
#let Ic(c) = sym.triangle.stroked.r + " " + c
#let Cmt(c) = (Ic(c),)
// It kind of sucks that Else is a separate block but it's fine
#let If = iflike_block.with(kw1: "if", kw2: "then")
#let While = iflike_block.with(kw1: "while", kw2: "do")
#let For = iflike_block.with(kw1: "for", kw2: "do")
#let Assign(var, val) = (var + " " + $<-$ + " " + val,)
#let Else = iflike_block.with(kw1: "else")
#let ElsIf = iflike_block.with(kw1: "else if", kw2: "then")
#let ElseIf = ElsIf
#let Return(arg) = (strong("return") + " " + arg,)
|
|
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/networks/pset1.typ | typst | #import "template.typ": *
#show: template.with(
title: "14.15 Problem Set #1",
subtitle: "<NAME>",
pset: true
)
= Problem 1
Let us the follow the convention for this problem that the first (top) row and (left) column specify the first node, and increase as they go down / right.
Since it is not explicitly stated, I will also make the assumption that there are no self-edges. This will be important in some calculations.
== (a)
We can consider each row as conveying information about the neighbors of that specific node. Then, to find the degree of the $n$th node, it simply suffices to add all the values in the $n$th row of the adjacency matrix. Thus we can accomplish this with matrix multiplication as follows:
$ bold(d) = boxed(bold(g) times bold(1)) $
In the above equation we are multiplying an $n times n$ matrix by an $n times 1$ vector, giving us the desired $n times 1$ dimensions for the vector $bold(d)$.
== (b)
To get the total number of edges, we can take the sum of the degrees of each node, then divide by two to account for the fact that edges get double counted from both sides. Recall that the assumption has been made that there are no self-edges.
$ m = boxed(1/2 sum_(i,j) bold(g)_(i,j)) $
Alternatively, we could have notated this as $1/2 dot bold(1)^T (bold(g) times bold(1))$, or more simply $1/2 dot bold(1)^T bold(d)$
== (c)
Consider two rows in the $bold(g)$ matrix - by taking the dot product of binary vectors, we essentially determine how many positions both contain $1$, meaning that they have a shared edge. Thus to find the number of shared edges between two nodes $i, j$, simply take the dot product $bold(g)_i dot bold(g)_j$. Thus leads to our final formulation:
$ bold(N)_(i,j) = bold(g)_i dot bold(g)_j $
This of course, is the exact same thing as simply squaring the $bold(g)$ matrix.
$ bold(N) = boxed(bold(g) times bold(g)) $
Note that this has the consequence that the value of $bold(N)$ for a node and itself is its degree.
== (d)
Expanding off our answer from above, let us think about what happens when we further multiply by $bold(g)$, *cubing* the adjacency matrix.
Consider multiplying $bold(N)_i dot bold(g)_j$. The $z$th term of this dot product counts the two-step paths from $i$ to $z$ (through some intermediate node $x$) that then take one more step from $z$ to $j$. What we really care about is the case where we loop back around and make a triangle, thus $j=i$. We should therefore only look at the diagonal values of the resulting matrix, as this is where that information lies.
Getting the answer by taking the trace would be an overestimate for a few reasons. Consider all the ways to count the triangle involving $a, b, c$
#enum(
enum.item(1)[
Fixing the starting point, we can run into both\
$a arrow.r b arrow.r c arrow.r a$\
$a arrow.r c arrow.r b arrow.r a$\
This accounts for a doubling in the total number of counted triangles.
],
enum.item(2)[
The cycle can start from any of the three nodes, leading to a tripling in the total number of counted triangles.
]
)
Therefore we conclude that we must take a sixth of this final number to get the accurate number of triangles.
$ \#"Triangles" = boxed(1 / 6 "Tr"(bold(g)^3)) $
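A quick numerical check of the formulas in this problem (the 4-node graph below, a triangle ${0,1,2}$ plus the edge $2$-$3$, is made up for illustration):

```python
import numpy as np

# Adjacency matrix of a 4-node graph: a triangle {0,1,2} plus edge 2-3.
g = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
ones = np.ones(4)
d = g @ ones                       # degree vector, part (a)
m = d.sum() / 2                    # number of edges, part (b)
N = g @ g                          # common-neighbor counts, part (c)
triangles = np.trace(np.linalg.matrix_power(g, 3)) / 6  # part (d)
print(d, m, triangles)             # [2. 2. 3. 1.] 4.0 1.0
```

The diagonal of `N` equals the degree vector, as noted above, and the single triangle is recovered from the trace of $bold(g)^3$ divided by six.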
= Problem 2
== (a)
#define(
title: "Betweenness Centrality"
)[
Recall that we define Betweenness Centrality (*BC*) for a node as the fraction of shortest paths between two arbitrary nodes that pass through this node, averaged over all pairs.
$ bold("BC")_k = sum_((i, j) : i != j, k != i, j) (P_k(i, j) \/ P(i, j)) / ((n-1)(n-2)) $
]
Since we are working with a tree, there are several nice simplifications that can be made.
As the graph is a tree, there is only one path between any two given nodes. Therefore $P(i, j)$ can be fixed to $1$
$ bold("BC")_k = sum_((i, j) : i != j, k != i, j) P_k(i, j) / ((n-1)(n-2)) $
With the new equation, we can now rephrase *BC* as "What fraction of paths contain $k$?". Alternatively this can also be phrased as $1 -$ "Fraction of paths that *don't* contain $k$". Let us work with this second definition, as that seems to be the rough form our desired answer takes.
For the disjoint regions, it is true that exactly one path existed in the original tree, so the path between one node in each region must have passed through $k$ previously. Conversely, it is also true that within the connected regions, $k$ did not impact the path between any two nodes since there already exists a path as the region is connected, and with exactly one path in a tree between two nodes, there cannot be an additional path passing through $k$. Therefore, it is sufficient to count the sum of pairs of nodes we can make with the restriction that both must be from the same region.
For the $m$th region, we can count the number of pairs of elements with $n_m (n_m-1)$, and we must take the total sum, leading us to:
$ sum_(m=1)^d n_m (n_m-1) $
However remember that we are taking the fraction of all paths so we end up with:
$ sum_(m=1)^d (n_m (n_m-1)) / ((n-1)(n-2)) $
And we finally remember that this is the inverse of the original desired quantity (the number of paths *including* $k$), so we must take the difference to $1$, resulting in the final form which matches the requested formula:
$ bold("BC")_k = boxed(1 - sum_(m=1)^d (n_m (n_m-1)) / ((n-1)(n-2))) $
== (b)
In a line graph, removing a node will always split the graph into at most $2$ pieces, depending on whether it is either the first / last node or one in the middle.
Let us assume that this graph contains at least three nodes and $i$ is zero-indexed. Then, the sizes of the two disjoint regions will be $i$ and $n-1-i$. Therefore the above formula can be simplified accounting for this new guarantee.
$ bold("BC")_i = boxed(1 - (i(i-1) + (n-1-i)(n-2-i)) / ((n-1)(n-2))) $
This expression can cancel the $(n-2)$ term out according to Wolfram Alpha, but I will leave the above as a sufficiently concise solution. Of course it only applies when we choose a node in the middle, as otherwise the terms can become negative. Thus if we pick the end, we instead have the simplified formula.
$ bold("BC")_i = 1 - ((n-1)(n-2)) / ((n-1)(n-2)) $
Thus, the centrality of an edge node is actually $0$, which makes sense as no shortest path in a line graph would ever pass through an edge except paths starting or ending from that edge (which are excluded from the calculation).
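This formula is easy to sanity-check against a brute-force count over the unique shortest paths of a line graph (an illustrative script; the choice of `n` is arbitrary):

```python
def bc_line(i, n):
    # BC of node i (0-indexed) in an n-node line graph, from the derived formula
    return 1 - (i * (i - 1) + (n - 1 - i) * (n - 2 - i)) / ((n - 1) * (n - 2))

def bc_brute(k, n):
    # In a line graph, the unique shortest path between i and j passes
    # through k exactly when k lies strictly between them.
    count = sum(1 for i in range(n) for j in range(n)
                if i != j and min(i, j) < k < max(i, j))
    return count / ((n - 1) * (n - 2))

n = 7
vals = [bc_line(i, n) for i in range(n)]
assert all(abs(bc_line(i, n) - bc_brute(i, n)) < 1e-12 for i in range(n))
print(vals[0], vals[n - 1])   # endpoints have centrality 0
```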
= Problem 3
== (a)
We want to take the sum across each degree multiplied by its chance of happening:
$ sum_i d_i dot "Chance to get" d_i $
Let us define this chance as the probability of picking an edge that connects to a node with degree $d_i$, divided by two since we need to pick the correct side of the edge. This works out nicely, even for cases where both ends of the edge are $d_i$ as the edge will get counted twice to make up for the incorrect fractional chance of picking it.
The chance we pick an edge that connects to a node with degree $d_i$ is tricker to derive:
+ $P(d_i) dot N$ is the number of nodes with degree $d_i$
+ $P(d_i) dot N dot d_i$ is the number of edges which have a $d_i$ degree endpoint.
+ $(P(d_i) dot N dot d_i) \/ M$ is the fraction of edges which have a $d_i$ degree endpoint.
+ $(P(d_i) dot N dot d_i) \/ (2 dot M)$ includes the likelihood of picking the correct end of the edge.
Therefore we now have the formula:
$ sum_i (d_i^2 dot P(d_i) dot N) / (2 dot M) $
We still need to get rid of the $M, N$ terms. To do this, observe that:
$ M = 1 / 2 sum_i P(d_i) dot d_i dot N $
Therefore we can make the following substitution in our equation:
$ M / N = (sum_i P(d_i) dot d_i) / 2 $
And our equation now simplifies to:
$ sum_i (d_i^2 dot P(d_i) dot N) / (sum_i P(d_i) dot d_i dot N) = boxed(sum_i (P(d_i) dot d_i^2) / (sum_i P(d_i) dot d_i)) $
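A small numerical illustration of the two expectations (the degree distribution below is made up for the example):

```python
# Degree distribution P(d): most nodes have low degree, a few have high degree.
P = {1: 0.5, 2: 0.3, 10: 0.2}

E_X = sum(p * d for d, p in P.items())            # degree of a uniform node
E_D = sum(p * d * d for d, p in P.items()) / E_X  # degree of a random edge endpoint
print(E_X, E_D)  # 3.1 vs. 7.0: edge sampling is biased toward high-degree nodes
```

The gap between the two values previews part (b): sampling along an edge over-weights high-degree nodes, so $E[D] >= E[X]$.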
== (b)
The expected value in *(a)* was hard to calculate since a higher degree means there is an inherent higher chance to be picked, even more so than $P(d_i)$ would seem to indicate. Let us show that $E[D] >= E[X]$
We start with the fact that variance is always non-negative and work from there. Recall that $E[X] = sum_i P(d_i) dot d_i$
$ 0 <= "Var"[X] = sigma_X^2 = sum_i P(d_i)(d_i - E[X])^2 = sum_i P(d_i)(d_i^2+E[X]^2-2 dot d_i dot E[X]) $
Then notice the last step can be simplified as follows:
$
&= sum_i P(d_i) dot d_i^2 + sum_i P(d_i) dot E[X]^2 - sum_i P(d_i) dot 2 dot d_i dot E[X]\
&= sum_i P(d_i) dot d_i^2 + E[X]^2 - 2 dot E[X]^2\
&= sum_i P(d_i) dot d_i^2 - E[X]^2
$
After which we take the last couple steps to complete the proof:
$
0 &<= sum_i P(d_i) dot d_i^2 - E[X]^2\
E[X]^2 &<= sum_i P(d_i) dot d_i^2\
E[X] &<= sum_i (P(d_i) dot d_i^2) / (E[X])\
E[X] &<= E[D]
$
Thus we arrive at our desired conclusion, with the last step being made the same as the result of *(a)*
== (c)
We can simplify and show that:
$ sum_i d_i <= sum_i delta_i $
The left side can be reformatted since the sum of degrees is equal to two times the number of edges. The right side can be reformatted following a very similar style of logic, except this time instead of counting the edge twice back and forth, we need to count the ratio of degrees both ways.
$ sum_((i, j):i < j) 2 <= sum_((i, j):i < j) (d_i / d_j + d_j / d_i) $
Now all we need to do is to show $2 <= (d_i / d_j + d_j / d_i)$. Remember that since these are degrees, all numbers are positive. Let us start from a clearly true inequality and work onward.
$
(d_i - d_j)^2 &>= 0\
d_i^2 - 2 dot d_i dot d_j + d_j^2 &>= 0\
d_i^2 + d_j^2 &>= 2 dot d_i dot d_j\
d_i / d_j + d_j / d_i &>= 2
$
Thus we have shown that the inequality is satisfied for this problem. It turns out the last part of this proof actually works even if the numbers aren't always positive after graphing it on Desmos.
== (d)
Let us see how the idea of the friendship paradox is strengthened by the previous two parts.
*(b)* tells us that the expected degree of picking a random node is less than the expected degree from picking an edge and then picking a node. This speaks to the inherent bias there is in having more friends. You can only view your friends, and those friends are viewed through the connection (edge). As can be clearly seen in this part, there is a much heavier bias toward being picked when you have a higher degree. This part can perhaps be summarized with the sentiment that the friendship paradox doesn't mean you have fewer friends than the average person, but rather that you have fewer friends than *your* friends.
*(c)* tells us that the average degree of nodes in a graph is less than the average degree of their neighbors. This reinforces the fact that on average _your friends have more friends than you do_. Of course, this doesn't say anything about the magnitude of the difference, but it is nonetheless mathematically sound that you can feel like you have fewer friends than your friends.
https://github.com/loqusion/typix | https://raw.githubusercontent.com/loqusion/typix/main/docs/recipes/declaring-a-shell-environment.md | markdown | MIT License

# Declaring a shell environment
You can automatically pull your project's dependencies into your shell by
declaring a [shell environment][nix-dev-declarative-shell] and then activating
it with [`nix develop`][nix-ref-develop] or [`direnv`][direnv].
Here's an example in a flake using Typix's
[`devShell`](../api/derivations/dev-shell.md):
```nix
{
outputs = { typix }: let
system = "x86_64-linux";
typixLib = typix.lib.${system};
watch-script = typixLib.watchTypstProject {/* ... */};
in {
# packages, apps, etc. omitted
devShells.${system}.default = typixLib.devShell {
fontPaths = [/* ... */];
virtualPaths = [/* ... */];
packages = [
watch-script
];
};
};
}
```
What this example does:
- Fonts added to [`fontPaths`](../api/derivations/dev-shell.md#fontpaths) will
be made available to `typst` commands via the `TYPST_FONT_PATHS` environment
variable.
- Files in [`virtualPaths`](../api/derivations/dev-shell.md#virtualpaths) will be
recursively symlinked to the current directory (only overwriting existing
files when
[`forceVirtualPaths`](../api/derivations/dev-shell.md#forcevirtualpaths) is
`true`).
- For convenience, the
[`typst-watch`](../api/derivations/watch-typst-project.md#scriptname) script
is added, which will run
[`watchTypstProject`](../api/derivations/watch-typst-project.md).
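If you prefer [`direnv`][direnv], entering the project directory can activate the shell automatically. A minimal `.envrc` for a flake-based setup might look like this (assuming your direnv has flake support, e.g. via nix-direnv — the exact integration may vary):

```sh
# .envrc — enter the flake's default devShell automatically
use flake
```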
[direnv]: https://direnv.net/
[nix-dev-declarative-shell]: https://nix.dev/tutorials/first-steps/declarative-shell
[nix-ref-develop]: https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-develop
https://github.com/linsyking/messenger-manual | https://raw.githubusercontent.com/linsyking/messenger-manual/main/appendix.typ | typst

#pagebreak()
= Appendix
== SOM Calls <sommsg>
`SOMMsg`s are top-level APIs (like system calls in OS) that can directly interact with the core. Users can send `SOMMsg` in any general model.
=== `SOMChangeScene`
*Definition.* `SOMChangeScene ( Maybe scenemsg ) String`

This message is used to change to another scene. Users need to provide the scene init data and the scene name.
=== `SOMPlayAudio`
*Definition.* `SOMPlayAudio Int String AudioOption`
This message is used to play an audio. It has three parameters: the channel ID, the audio name, and the audio option. The channel ID is where this audio will be played; there might be multiple audios playing on the same channel. The audio name is one of the keys users define in `allAudio`.
`AudioOption` is defined in `Messenger.Audio.Base.elm`:
```elm
type AudioOption
= ALoop
| AOnce
```
`ALoop` means the audio will be played repeatedly. `AOnce` means the audio will be
played only once.
==== Example
Suppose we have two audio files `assets/bg.ogg` and `assets/se.ogg`.
First we need to import them to our projects, so edit `Lib/Resources.elm`:
```elm
allAudio : Dict.Dict String String
allAudio =
Dict.fromList
[ ( "bg", "assets/bg.ogg" )
, ( "se", "assets/se.ogg" )
]
```
This is very similar to `allTexture`.
After that, we decide to use 0 as the background music channel and 1 as the sound effect channel.
Then, when we want to play the background music `bg`, emit:
```elm
SOMPlayAudio 0 "bg" ALoop
```
And when we want to play the sound effect `se`, emit:
```elm
SOMPlayAudio 1 "se" AOnce
```
*Hint.* Users can use `newAudioChannel` to generate a unique channel ID.
=== `SOMStopAudio`
*Definition.* `SOMStopAudio Int`
This message is used to stop a channel. The parameter is the channel ID. If there are multiple audios playing on a channel, all of them will be stopped.
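For example, to stop everything currently playing on the background music channel from the `SOMPlayAudio` example above (channel 0), emit:

```elm
SOMStopAudio 0
```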
=== `SOMAlert`
*Definition.* `SOMAlert String`
This message is used to show an alert. The parameter is the content of the alert.
=== `SOMPrompt`
*Definition.* `SOMPrompt String String`
This message is used to show a #link("https://developer.mozilla.org/en-US/docs/Web/API/Window/prompt")[prompt]. Users can use this to get text input from the user. The first parameter is the name of the prompt, and the second parameter is the title of the prompt.
When the user clicks the OK button, user code will receive a `Prompt String String` message. The first parameter is the name of the prompt, and the second parameter is the user’s input.
=== `SOMSetVolume`
*Definition.* `SOMSetVolume Float`
This message is used to change the volume. The parameter should be a value in $[0, 1]$. Users could use a larger value, but it will sound noisy.
=== `SOMSaveGlobalData`
*Definition.* `SOMSaveGlobalData`
Save global data (including user data) to local storage.
See @localstorage.
== Game Configurations
Users may want to change the settings in `MainConfig.elm` to match their needs. This section explains what each option in that configuration file means.
- `initScene`. The first scene users will see when start the game
- `initSceneMsg`. The message to start the first scene
- `virtualSize`. The virtual drawing size. Users may use whatever they like but should think carefully about the aspect ratio (support 4:3 or 16:9 screens?)
- `debug`. A debug flag. If turned on, users can press `F1` to change to a scene quickly and press `F2` to change the volume at any time during the game
- `background`. The background users see. Default is a transparent background
- `timeInterval`. The update strategy. See @tick
- `initGlobalData` and `saveGlobalData`. See @localstorage
== Messenger CLI Commands <cli>
You can also use `messenger <command> --help` to view help.
=== Scene
Create a scene.
Usage: `messenger scene [OPTIONS] NAME`
Arguments:
- `name`. The name of the scene
- `--raw`. Use raw scene without layers
- `--proto`, `-p`. Create a sceneproto
- `--init`, `-i`. Create a `Init.elm` file
=== Init
Initialize a Messenger project.
Usage: `messenger init [OPTIONS] NAME`
Arguments:
- `name`. The name of the project
- `--template-repo`, `-t`. Use customized repository for cloning templates.
- `--template-tag`, `-b`. The tag or branch of the repository to clone.
=== Layer
Create a layer.
Usage: `messenger layer [OPTIONS] NAME LAYER`
Arguments:
- `name`. The name of the scene
- `layer`. The name of the layer
- `--with-component`, `-c`. Use components in this layer
- `--cdir`, `-cd`. Directory of components in the scene
- `--proto`, `-p`. Create layer in sceneproto
- `--init`, `-i`. Create a `Init.elm` file
=== Level
Create a level.
Usage: `messenger level [OPTIONS] SCENEPROTO NAME`
Arguments:
- `sceneproto`. The name of the sceneproto
- `name`. The name of the level
=== Component
Create a component.
Usage: `messenger component [OPTIONS] SCENE NAME`
Arguments:
- `scene`. The name of the scene
- `name`. The name of the component
- `--cdir`, `-cd`. Directory to store components
- `--proto`, `-p`. Create component in sceneproto
- `--init`, `-i`. Create a `Init.elm` file
== Roadmap
This section contains some ideas we'd like to implement in future versions of Messenger. We welcome users to post feature requests in the Messenger repository's issue tracker.
=== Multi-pass Updater
Some components may want to do some operations after all other components have finished. This is the _second-pass_ updater. We plan to extend this idea further to support _multi-pass_ updater. Components may update _any_ number of passes in one event update.
=== Advanced Component View
Users might want to have `List (Renderable, Int)` instead of `(Renderable, Int)` (In fact, this is what Reweave does). A use-case is that a component may have some part behind the player and some other part in front of the player.
=== Unified custom element
Unify `elm-canvas` and audio system.
=== Asset Manager
Design a better asset manager that helps manage all the assets, including audios, images, and other data.
=== On-demand Asset Loading
Users can load or pre-load assets when they want to, not at the beginning of the game.
== Acknowledgement
We express great gratitude to the FOCS Messenger team. Members are #link("<EMAIL>")[linsyking], #link("<EMAIL>")[YUcxovo], #link("<EMAIL>")[matmleave]. We also express sincere gratitude to all students using Messenger.
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/superb-pci/0.1.0/README.md | markdown | Apache License 2.0

# superb-pci
Template for [Peer Community In](https://peercommunityin.org/) (PCI) submission and [Peer Community Journal](https://peercommunityjournal.org/) (PCJ) post-recommendation upload.
The template is as close as possible to the LaTeX one.
## Usage
To use this template in Typst, simply import it at the top of your document.
```
#import "@preview/superb-pci:0.1.0": *
```
Alternatively, you can start using this template from the command line with
```
typst init @preview/superb-pci:0.1.0 my-superb-manuscript-dir
```
or directly in the web app by clicking "Start from template".
Please see the main Readme about Typst packages [https://github.com/typst/packages](https://github.com/typst/packages).
## Configuration
This template exports the `pci` function with the following named arguments:
- `title`: the paper title
- `authors`: array of author dictionaries. Each author must have the `name` field, and can have the optional fields `orcid` and `affiliations`.
- `affiliations`: array of affiliation dictionaries, each with the keys `id` and `name`. All correspondence between authors and affiliations is done manually.
- `abstract`: abstract of the paper as content
- `doi`: DOI of the paper displayed on the front page
- `keywords`: array of keywords displayed on the front page
- `correspondence`: corresponding address displayed on the front page
- `numbered_sections`: boolean, whether sections should be numbered
- `pcj`: boolean, provides a way to remove the front page and headers/footers for upload to the Peer Community Journal. `[default: false]`
The template will initialize your folder with a sample call to the `pci` function in a show rule and dummy content as an example.
If you want to change an existing project to use this template, you can add a show rule like this at the top of your file:
```typst
#import "@preview/superb-pci:0.1.0": *
#show: pci.with(
title: [Sample for the template, with quite a very long title],
abstract: lorem(200),
authors: (
(
name: "<NAME>",
orcid: "0000-0000-0000-0001",
affiliations: "#,1"
),
(
name: "<NAME>",
orcid: "0000-0000-0000-0001",
affiliations: "#,2",
),
(
name: "<NAME>",
affiliations: "2",
),
(
name: "<NAME>",
orcid: "0000-0000-0000-0001",
affiliations: "1,3"
),
),
affiliations: (
(id: "1", name: "Rue sans aplomb, Paris, France"),
(id: "2", name: "Center for spiced radium experiments, United Kingdom"),
(id: "3", name: "Bruce's Bar and Grill, London (near Susan's)"),
(id: "#", name: "Equal contributions"),
),
doi: "https://doi.org/10.5802/fake.doi",
keywords: ("Scientific writing", "Typst", "PCI", "Example"),
correspondence: "<EMAIL>",
numbered_sections: false,
pcj: false,
)
// Your content goes here
```
You might also need to use the `table_note` function from the template.
## To do
Some things that are not straightforward in Typst yet and need to be added in the future:
- [ ] line numbers
- [ ] switch equation numbers to the left
https://github.com/Dr00gy/Typst-thesis-template-for-VSB | https://raw.githubusercontent.com/Dr00gy/Typst-thesis-template-for-VSB/main/thesis_template/pages.typ | typst

#let titlePage(
thesisTitle,
thesisDescription,
fullName,
supervisor,
type: "bachelor", // bachelor, bachelor-practice, master or phd
year: datetime.today().year(),
) = {
// Overwrite some global rules
set par(
first-line-indent: 0cm,
justify: false,
)
move(
dx: -8mm,
context(
image(
if text.lang == "en" {"logos/FEI EN.svg"} else {"logos/FEI CZ.svg"},
height: 3cm,
)
)
)
heading(outlined: false, level: 2)[#thesisTitle]
v(1.5em)
set text(spacing: .3em)
text(size: 14pt)[#thesisDescription]
v(2em)
text(size: 20pt)[#fullName]
align(bottom)[
#set text(size: 14pt)
#context([
#if type == "bachelor" {
if text.lang == "en" [Bachelor thesis] else [Bakalářská práce]
} else if type == "bachelor-practice" {
if text.lang == "en" [Bachelor professional practice] else [Bakalářská praxe]
} else if type == "master" {
if text.lang == "en" [Master thesis] else [Diplomová práce]
} else if type == "phd" {
if text.lang == "en" [PhD thesis] else [Disertační práce]
}
#if text.lang == "en" [Supervisor:] else [Vedoucí práce:]
])
#supervisor
Ostrava, #year
]
}
// Pages before Contents
#let abstracts(
czechAbstract, englishAbstract,
czechKeywords, englishKeywords,
slovakAbstract: none, slovakKeywords: none,
quote: none,
acknowledgment: none,
abstractSpacing: 2.5cm,
) = {
//show heading: set block(spacing: 1em)
// Abstract
grid(
rows: (auto, auto, auto),
row-gutter: abstractSpacing,
{
text({
heading(outlined: false, level: 2)[Abstrakt]
czechAbstract
heading(outlined: false, level: 2)[Klíčová slova]
czechKeywords.join(", ")
}, lang: "cs")
},
{
text({
heading(outlined: false, level: 2)[Abstract]
englishAbstract
heading(outlined: false, level: 2)[Keywords]
englishKeywords.join(", ")
}, lang: "en")
},
if slovakAbstract != none and slovakKeywords != none {
text({
heading(outlined: false, level: 2)[Abstrakt]
slovakAbstract
heading(outlined: false, level: 2)[Kľúčové slová]
slovakKeywords.join(", ")
}, lang: "sk")
},
)
// Acknowledgement
if acknowledgment != none {
pagebreak()
if quote != none {
quote
}
align(bottom)[
#heading(outlined: false, level: 2)[
#context(if text.lang == "en" [Acknowledgment] else [Poděkování])
]
#acknowledgment
]
}
}
|
https://github.com/Enter-tainer/mino | https://raw.githubusercontent.com/Enter-tainer/mino/master/typst-package/mino.typ | typst | MIT License

#import "@preview/jogs:0.2.3": compile-js, call-js-function
#let mj-src = read("./mino.js")
#let mj-bytecode = compile-js(mj-src)
#let get-text(src) = {
if type(src) == str {
src
} else if type(src) == content {
src.text
}
}
#let decode-fumen(fumen) = call-js-function(mj-bytecode, "mino", fumen)
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/break-continue-03.typ | typst | Other | // Test joining with continue.
#let x = for i in range(5) {
"a"
if calc.rem(i, 3) == 0 {
"_"
continue
}
str(i)
}
#test(x, "a_a1a2a_a4")
https://github.com/drupol/master-thesis | https://raw.githubusercontent.com/drupol/master-thesis/main/src/thesis/theme/common/titlepage.typ | typst | Other

#import "metadata.typ": *
#let titlepage(
title: "",
subtitle: "",
university: "",
faculty: "",
degree: "",
program: "",
supervisor: "",
advisors: (),
author: "",
authorOrcId: "",
doi: none,
startDate: none,
submissionDate: none,
rev: none,
shortRev: none,
builddate: none,
) = {
set page(
margin: (top: 1cm, left: 1cm, right: 1cm, bottom: 1cm),
numbering: none,
number-align: center,
header: {
place(top + left)[
#pad(top: 10pt)[
#set text(size: 4em)
#include "../UMONS-fs-logo.typ"
]
]
place(top + right)[
#pad(top: 10pt)[
#set text(size: 4em)
#include "../UMONS-logo.typ"
]
]
},
footer: align(
center,
text(font: sans-font)[
#faculty #sym.diamond.filled.small
#university #sym.diamond.filled.small
20, Place du Parc #sym.diamond.filled.small
B-7000 Mons
],
),
)
set text(
font: body-font,
size: 12pt,
lang: "en",
)
place(center + horizon, dy: 3em)[
#align(center, text(font: sans-font, 2em, weight: 700, title))
#if subtitle != none {
align(center, text(font: sans-font, 1em, weight: 700, subtitle))
}
#align(center, text(font: sans-font, 1.3em, weight: 100, view))
#grid(
columns: 3,
gutter: 1em,
align: (right, left, left),
strong("Author"),
":",
{
link("https://orcid.org/" + authorOrcId)[#author#box(
image(
"../../../../resources/images/ORCIDiD_iconvector.svg",
width: 10pt,
),
)]
},
strong("Supervisor"),
":",
supervisor,
..if advisors != none {
(strong("Advisors"), ":", advisors.join(", "))
},
..if startDate != none {
(strong("Academic year"), ":", startDate)
},
..if submissionDate != none {
(strong("Submission date"), ":", submissionDate)
},
..if builddate != "" {
(strong("Build date"), ":", builddate)
},
..if doi != none {
(
strong("DOI"),
":",
link(
"https://doi.org/" + doi,
doi,
),
)
},
..if shortRev != "" {
(
strong("Revision"),
":",
link(
"https://codeberg.org/p1ld7a/master-thesis/commit/" + rev,
shortRev,
),
)
},
)
]
}
|
https://github.com/WooSeongChoi/perl-5-study | https://raw.githubusercontent.com/WooSeongChoi/perl-5-study/main/main.typ | typst | #import "@preview/ilm:1.2.1": *
#show: ilm.with(
title: [Perl 5 배우기],
author: "Learning Perl",
figure-index: (enabled: true),
table-index: (enabled: true),
listing-index: (enabled: true)
)
#set text(
font: "NanumGothic",
size: 10pt
)
#include "./chapters/chapter01/introduction.typ"
|
#set text(font: "<NAME>")
#show strong: set text(font: "<NAME>")
//see https://typst.app/docs/reference/foundations/calc
#let Q2-price1 = 120
#let Q2-price2 = 100
#let Q2-dem1 = 3020
#let Q2-dem2 = 5240
#let cat_list =("Term1",)
Create the list of questions with Quiz_create. If you write Quiz = Quiz_create ..., Quiz is the variable that stores the list of questions.
- The quiz function creates a question in the expected format. Its arguments are described below.
- The first argument is the id: it also becomes the label, so a question can be cited like @Q2. The answer label appends an A, as in @Q2A.
- Note that using another id with an A appended as an id therefore duplicates a label and raises an error.
- question: the problem statement. Enter it inside [].
- answer: the solution. Enter it inside [].
- commentary: the explanation. Enter it inside [].
- point: the score. Enter an integer. A system that computes total points is not yet implemented.
#let Quiz = Quiz_create(
quiz("Q1", question:[
ある会社が運営するテーマパークがある.日々の営業に必要な費用は30億円,これまでにかかった建設費は3200億円である.ある日,このテーマパークの所有権を320億円で買いたいという人が現れたがその会社はその申し出を断った.また,テーマパークを潰してデパートを立てれば建設費や運営費などを差し引いても350億円の利益が上がるという.しかしこの会社はテーマパークの運営を続けている.また,この会社は1000億円でならこのテーマパークの所有権を売るかと言う問いにそうだと答えている.この会社が利益追求するものだとするとき,テーマパークの収入は最大いくら *以下* であると考えられるか答えなさい.ただし,問題文以外の事情は無視して良い.
], answer:[
], point:10, show-answer:true, category:"Term1"),
quiz("Q2", question:[
ある財について,価格が#Q2-price1\円から#Q2-price2\円に変化したところ,需要量が#Q2-dem1\から#Q2-dem2\に変化した.このときの需要の価格弾力性を求めなさい.
], answer:[
$#calc.abs((Q2-price2 - Q2-price1))/#calc.abs((Q2-dem2 - Q2-dem1))$
], point:12, category:"Term1"),
quiz("Q3", question:[需要関数が$D(p)=5-p/2$,総費用関数が$C(q)=5q$であるとき,独占企業の利潤を最大にする価格を求めなさい.],
answer:[難しくない], commentary:[], point:12, category:"Term1"),
quiz("Q4", question:[卵の市場を考える.鳥インフルエンザによって卵の生産費用が上昇した.このとき,市場均衡における価格と取引量がどのように変化すると考えられるか,図を用いて答えなさい], answer:[難しくない], commentary:[], point:12, category:"Term1"),
quiz("Q5", question:[豊作貧乏が発生しうる理由を図を用いて文章で説明しなさい.
], answer:[難しくない], commentary:[], point:12, category:"Term1"),
quiz("Q6", question:[
ある年のアーティストのCDの価格と取引量が次のようになっていた
- 1月1日: 価格1000, 取引量1000
- 1月30日: 価格800, 取引量1700
また,1月15日にはそのアーティストが著名な賞を受賞し,人気が高まった.
以上の事柄からわかることとして最も適当なものを以下の中から一つ選び,その理由を説明しなさい.
+ CDの需要曲線は右下がりである.
+ CDの需要曲線は右上がりである.
+ CDの供給曲線は右下がりである.
+ CDの供給曲線は右上がりである.
ただし,需要と供給のパターンは以下の4つであるとする.
#image("img/patterns2.svg")
], answer:[難しくない], commentary:[], point:12, category:"Term1"),
quiz("Q7", question:[トゥイードゥルディーとトゥイードゥルダムは双子で見分けがつかないが,好みが違う.
- トゥイードゥルディーはケーキに600円,コーヒーに100円の支払意思額,
- トゥイードゥルダムはケーキに700円,コーヒーに500円の支払意思額を持っている.
このとき,彼らの支払う金額の合計が最大になるようなメニューを考えなさい.
], answer:[難しくない], commentary:[], point:12, category:"Term1"),
quiz("Q8",question:[
ある時点における需要曲線が下図の点線で表されるとする.このとき,あるショックが起き,どんな価格についても需要量が増加したという.このとき,ショック後の需要曲線として最も適当なものを次の図から選びなさい.ただしショック後の需要曲線は実線で描いている.
#image("img/Q8_img.png")
],answer:[],point:3, category:"Term1"),
quiz("Q9",question:[A氏はX社に入社すると2000万円,Y社に入社すると1500万円,Z社に入社すると3000万円の収入を得ることができるとする.彼はこのうちX社とY社には入社できるがZ社には入社できない.また同時に二社以上には就職できないものとする. 彼は今まで教育費用に7億円を費やしてきた.一方でA氏は現在就職していない.就職せずにニートでいることの効用は0である.このことから就職することによって心理的な費用がかかるようである.この心理的な費用はいくら以上であると考えられるか答えなさい.ただしこの問題文に出てきていない要素は無視して良いものとする.], category:"Term1"),
quiz("Q10",question:[従量税が$t$だけ課される財を考える.このとき,課税の死荷重損失の大きさを図示しなさい.
],category:"Term1"),
quiz("Q11",question:[鶏卵の価格と取引量が次のようになっていた
- 1月1日: 価格100, 取引量140
- 3月1日: 価格120, 取引量100
- 5月1日: 価格200, 取引量80
また,1月15日は鳥インフルエンザにより大量の鶏の殺処分が行われ,3月15日には鶏の餌代の価格が上昇したという.
以上の事柄からわかることとして最も適当なものを以下の中から一つ選び,その理由を説明しなさい.
+ 鶏卵の需要曲線は右下がりである.
+ 鶏卵の需要曲線は右上がりである.
+ 鶏卵の供給曲線は右下がりである.
+ 鶏卵の供給曲線は右上がりである.
ただし,需要と供給のパターンは以下の4つであるとする.
#image("img/patterns.png")],category:"Term1"),
quiz("Q12",question:[ある航空会社が航空機の運行のため,燃料を10億円分購入した.しかし,その直後,COVID2019の流行により,しばらくの間,燃料が使用される見込みがなくなった.また,全世界的に燃料の需要が低下したため,この燃料を転売することは二年間,不可能となった.一方,この燃料を用いたその他の事業は可能である.その他事業は複数種類あり,事業によってこの燃料を使う量は異なる.このときの燃料を購入した費用は固定費用と言えるか,また埋没費用と言えるか,答えなさい.],category:"Term1"),
quiz("Q13",question:[第1財と第2財しか購入しない消費者を考える.2024年時点では,第1財に全ての所得を使った場合,第1財を三十単位購入可能であった.また,2024年時点では消費者は第2財を三単位購入している.2025年時点では第1財の価格が増加した一方,所得も増加した.結果として,2025年には第1財に全ての所得を使った結果,第1財を三十単位購入することが可能である.このとき,この消費者の効用は2024年から25年にかけて増加するか,図を用いて理由を含めて答えなさい.ただし,2024年の意思決定と2025年の意思決定は独立であるとし,また消費者の無差別曲線は全く変わらないものとする.],category:"Term2"),
quiz("Q14", question:[財が二種類しかないとする.一つの財が上級財である場合,もう一つの財は下級財である.これは正しいか?], category:"Term2"),
quiz("Q15", question:[財が二種類しかないとする.一つの財が下級財である場合,もう一つの財は上級財である.これは正しいか?], category:"Term2"),
quiz("Q16", question:[消費プラン$w$と$w'$が無差別であるとはどう言うことか,効用関数$u$を用いて答えなさい.
], category:"Term2"),
quiz("Q17", question:[効用を最大化する消費プランにおいて,無差別曲線と予算制約線が接する理由を図を用いて説明しなさい.], category:"Term2"),
quiz("Q18", question:[次のマウスを用いた架空の実験を考える.マウスのケージにはレバーが二つあり,そのレバーを押すと一定の量のエサが出る.それぞれのレバーをAとBと呼ぶ.レバーAを1回押すと通常のエサが1粒でてくるが,レバーBを押すとキニーネ入りのエサが3粒でる. レバーを押すことができる回数はAとBの合計で100回と決まっている.
+ マウスがどちらのレバーを押すかという問題は,消費者理論における効用最大化問題と解釈できる.レバーを押した時に出てくるエサの粒の個数とレバーを押すことができる回数はそれぞれ消費者の効用最大化問題の何に対応するかを理由を含めて答えなさい.
+ レバーを押すことができる回数が100回から120回に増えた.このとき,マウスがレバーAを押す回数は増加し,レバーBを押す回数は減少した.この行動の変化を無差別曲線と予算制約線を用いて図示しなさい.
+ 2の結果から通常のエサとキニーネ入りのエサについて言えることのうち最も適当なものを次の中から選び,その理由を答えなさい.
#set enum(numbering: "a.")
+ 通常のエサはキニーネ入りのエサの粗補完財である.
+ 通常のエサはキニーネ入りのエサの粗代替材である.
+ 通常のエサは正常財(上級財)でキニーネ入りのエサは劣等財(下級財)である.
+ 通常のエサは劣等財(下級財)でキニーネ入りのエサは正常財(上級財)である.], category:"Term2"),
quiz("Q19", question:[無差別曲線が原点に向かって凸であるとする.いま,マグロとエビの寿司を何貫づつ食べるかと言う問題を考える.「マグロ1貫とエビ3貫の消費プラン」と「マグロ3貫とエビ1貫の消費プラン」が無差別であるとする.このとき,「マグロ2貫,エビ2貫と言う消費プラン」について言えることについて最も適当なものは次のうちどれか?
#set enum(numbering: "a.")
+ 「マグロ2貫,エビ2貫の消費プラン」は「マグロ1貫とエビ3貫の消費プラン」と「マグロ3貫とエビ1貫の消費プラン」のどちらよりも好ましい
+ 「マグロ2貫,エビ2貫の消費プラン」は「マグロ1貫とエビ3貫の消費プラン」と「マグロ3貫とエビ1貫の消費プラン」のどちらよりも好ましくない
+ 「マグロ2貫,エビ2貫の消費プラン」は「マグロ1貫とエビ3貫の消費プラン」より好ましいが,「マグロ3貫とエビ1貫の消費プラン」より好ましくない
+ 「マグロ2貫,エビ2貫の消費プラン」は「マグロ1貫とエビ3貫の消費プラン」より好ましくないが,「マグロ3貫とエビ1貫の消費プラン」より好ましい
+ 上記のうちいずれもいえない.], category:"Term2"),
quiz("Q20", question:[粗補完財と粗代替財の違いを例を挙げて述べなさい.], category:"Term2"),
quiz("Q21", question:[代替財と粗代替財の違いを例を挙げて述べなさい.], category:"Term2"),
quiz("Q22", question:[2種類しか財がないとする.このとき,一方が必需財であればもう一方は贅沢財である.これは常に正しいか?], category:"Term2"),
quiz("Q23", question:[二日間しか生きない人の貯蓄問題を考える.1日目の消費と2日目の消費が粗代替財でかつ両方とも上級財であるとき,利子率の増加は貯蓄を増加させるかどうか説明しなさい.], category:"Term2"),
quiz("Q24", question:[消費と余暇の選択問題を考える.余暇が下級財であるとき,時給の増加は余暇を増やすかどうか,説明しなさい.], category:"Term2"),
quiz("Q25", question:[ある価格と所得の組み合わせで下級財になっている財があるとする.このとき,効用関数をうまく作ればどんな価格と所得の組み合わせでも下級財になるようにできる.これは正しいか?], category:"Term2"),
quiz("Q26", question:[ギッフェン財は下級財である.これは正しいか?], category:"Term2"),
quiz("Q27", question:[財1と財2が代替財でも粗補完財になる可能性はある.これは正しいか?], category:"Term2"),
quiz("Q28", question:[財1と財2が代替財であるが粗補完財であるとする.このときどちらかの財は下級財である.これは正しいか?], category:"Term2"),
quiz("Q29", question:[財が2種類しかないとする.財1がギッフェン財であれば,財2は財1の粗補完財である.これは正しいか?], category:"Term2"),
quiz("Q30", question:[#kuranenv[以下の文章の空欄ア〜カを埋めなさい.
総費用関数を$C(q)= 7 q^3-5 q^2+ 3q+16$とする.
このとき,可変費用(変動費用)は #kuran() である.
価格が,$ #kuran() (q_1)^2- #kuran() q_1+ #kuran()$ を下回れば,操業を停止した方が良い.また,価格が $ #kuran(n:2) (q_2)^2- #kuran(n:3) q_2+ #kuran(n:4)$ を下回ると利潤がマイナスになる.
ただし,$q_1$は方程式,#kuran() の解のうち正のものであり,$q_2$は方程式,#kuran() の解のうち,実数のものである.]], category:"Term3"),
quiz("Q31", question:[$y$だけの財を生産するために必要な生産要素の投入組(生産プラン)について,費用を最小にする点において等産出量曲線(等量曲線)と等費用線が接する理由を説明しなさい.], category:"Term3"),
quiz("Q32", question:[#kuranenv[以下の文章の空欄アとイを埋めなさい.ただしどちらにも数字が入る.
総費用関数を$C(q)= 2 q^3-4 q^2+ 7q+16$とする.
このとき,固定費用は #kuran() である.
また,価格が,$#kuran() $ を下回れば,操業を停止した方が良い.
]], category:"Term3"),
quiz("Q33", question:[同じ生産量を生産するとき,長期の費用は必ず短期の費用を下回るか同じかである.これは正しいか?], category:"Term3"),
quiz("Q34", question:[生産関数が規模に関して収穫逓増であれば,価格受容者の企業にとっては生産要素を無限に投入することができるのであればそうするのが良い.これは正しいか?], category:"Term3"),
quiz("Q35", question:[生産関数が規模に関して収穫逓減であれば,価格受容者の企業にとっては生産要素を0にするのが良い.これは正しいか?], category:"Term3"),
quiz("Q36", question:[ある価格と生産要素価格において利潤が正であるとする.このとき,生産関数が規模に関して収穫逓増であれば,価格受容者の企業にとっては生産要素を無限に投入することができるのであればそうするのが良い.これは正しいか?], category:"Term3"),
quiz("Q37", question:[企業の利潤最大化問題では限界生産物と実質要素価格が等しくなるが,消費者の効用最大化問題では限界効用と価格が等しくなるとは限らない.この違いが生じる理由は何か?], category:"Term3"),
quiz("Q38", question:[#kuranenv[いま,経済の構成員がアダム,イヴ,ルツの三人だとする.可能な選択肢はx, y, z, w, vの5つであるとしよう.各選択肢から得られる各構成員の効用は次の表の通りであるとする.
#align(center)[
#table(columns:(4em,auto,auto,auto,auto,auto),
[名前],[x],[y],[z],[w],[v],
[アダム],[2],[2],[1],[8],[1],
[イヴ],[3],[8],[6],[9],[3],
[ルツ],[4],[4],[20],[1],[4]
)
]
+ 功利主義型社会厚生を最大にする選択肢を全て選びなさい.
+ ナッシュ型社会厚生を最大にする選択肢を全て選びなさい.
+ マキシミン型社会厚生を最大にする選択肢を全て選びなさい.
+ パレート効率的な選択肢を全て選びなさい.]], category:"Term4"),
quiz("Q39", question:[第一厚生定理(厚生経済学の第一基本定理)の主張を1行で説明しなさい.
], category:"Term4"),
quiz("Q40", question:[エッジワースボックス中の配分について,パレート効率的な配分とそうでない配分を図示し,それらがなぜそう言えるかを説明しなさい.
], category:"Term4"),
quiz("Q41", question:[財が2種類しかないとする.財$x$の需要関数が,$d_x (p_x,p_y)=20-4p_x+p_y$, 財$y$の需要関数が$d_y (p_x,p_y)=12+p_x-4p_y$とし,財$x$の供給量が$5$, 財$y$の供給量が$3$だとする.このとき,市場均衡の財$x$と$y$の価格を求めなさい.], category:"Term4"),
quiz("Q42", question:[第一厚生定理が成り立つ理由をエッジワースボックスを用いて説明しなさい.], category:"Term4"),
quiz("Q43", question:[第二厚生定理(厚生経済学の第一基本定理)の主張を1行で説明しなさい.], category:"Term4"),
quiz("Q44", question:[第二厚生定理が成り立つ理由をエッジワースボックスを用いて説明しなさい.
], category:"Term4"),
quiz("Q45", question:[純粋交換経済におけるワルラス均衡とはどのようなものか,消費者が二人,財の数が二人のケースで説明しなさい.], category:"Term4"),
quiz("Q46", question:[固定費用が負のとき,損益分岐点と操業停止点の関係はどのようになるか?], category:"Term3"),
quiz("Q47", question:[ある財の需要曲線が需要法則を満たすとする.その財の価格が120円から100円に価格が変化したとき,需要量は4000から4800に変化した. この財の価格が120円から130円に変化するとき,需要量の増加量として最も適当なものは以下のいずれか?
+ $800$
+ $400$
+ $130$
+ $-800$
], category:"Term1"),
quiz("Q48", question:[短期の企業の決定を考える.以下のうち,損益分岐点の価格と操業停止点の両方を変化させるのはどれか?ただし,総費用関数は変わらないものとする.
+ 補助金を生産量にかかわらず一定金額を与える.
+ 赤字が発生したとき,その30%を補填する補助金を与える.
+ 操業を停止すること(つまり生産量を0にすること)に対して,固定費用の30%の補助金を与える.
+ 生産量が0でない限り,固定費用の30%の補助金を与える.
], category:"Term3"),
quiz("Q49", question:[規模に関して収穫一定であるとき,利潤を最大にする要素投入量の組み合わせは一般に複数ある.これは正しいか?その理由を説明しなさい.ただし,要素投入量は正であると考える.], category:"Term3"),
quiz("Q50", question:[消費と余暇の選択問題を考える.消費と余暇が代替関係にあるとする.このとき余暇が時給の増加に伴って増えるのであれば余暇は上級財であるか?説明しなさい.], category:"Term2"),
quiz("Q51", question:[以下のある学生の意見を聞いて,その疑問を解消しなさい.
「消費と余暇の選択問題を考えるとき時給は実質余暇の価格と考えられます.時給が増えると余暇が増えるということは価格が増えると需要が増えるということなので余暇はギッフェン財っぽいですね.でもこのとき余暇は上級財ですよね.しかしギッフェン財は下級財とおっしゃいました.矛盾しませんか?」
], category:"Term2"),
quiz("Q52", question:[第一厚生定理より,市場均衡の配分がパレート効率的であるということは,市場均衡以外の配分を持ってきたとき,これを市場均衡の配分に変えれば全員が得をすることができる.これは正しいか?], category:"Term4"),
quiz("Q53", question:[第二厚生定理より,市場均衡以外の配分を持ってきたとき,全員が得をするような別の配分を見つけることができて,それは市場均衡の配分である.これは正しいか?], category:"Term4"),
quiz("Q54", question:[第二厚生定理より,市場均衡以外の配分を持ってきたとき,全員が得をするような別の配分を見つけることができる.これは正しいか?], category:"Term4"),
quiz("Q55", question:[財が二種類しかないとする.財1と財2が代替的で,財2が下級財とする.このとき財2は財1の粗補完財である.これは正しいか?], category:"Term2")
)
#tests_gen(Quiz, style:"both")
|
#import "@preview/brilliant-cv:2.0.3": cvSection, cvSkill, hBar
#let metadata = toml("../metadata.toml")
#let cvSection = cvSection.with(metadata: metadata)
#cvSection("Skills")
#cvSkill(
type: [Languages],
info: [English #hBar() French #hBar() Chinese],
)
#cvSkill(
type: [Tech Stack],
info: [Tableau #hBar() Python (Pandas/Numpy) #hBar() PostgreSQL],
)
#cvSkill(
type: [Personal Interests],
info: [Swimming #hBar() Cooking #hBar() Reading],
)
|
https://github.com/tibs245/template-typst-CV | https://raw.githubusercontent.com/tibs245/template-typst-CV/main/README.md | markdown | # Mon CV en Typst
[Exemple of result in release V1](https://github.com/tibs245/template-typst-CV/releases/download/v1/CV-Thibault.Barske-blue.pdf)
> If you click on release V1 you can see differents colors generations examples
## Pourquoi faire mon CV en Typst ?
1. Mon dossier de compétences sous word à mal vieilli
- Template cassé
- Problème de polices
- Changement de template compliqué
- Transfert de données fastidieux
2. La réutilisation : J'ai la possibilité de générer ce CV a partir de fichiers
YML qui peuvent alimenter des pages web et autres
3. Je peux modifier mes templates facilement
- Pour m'adapter a une entreprise ou l'image que je souhaites renvoyer
4. Versionning via Git
5. Si vous faites des templates de CV plus élaboré après je pourrai m'en
reservir facilement 😄
## Quick Start
- Copy / Paste "author_example.yml" in "author.yml" with your own data
- Replace skills with ours in `main.typ`
- You can change your color with `primaryColor` in template
- Delete all missions and add yours
- Edit template with yours preferences ☺️
## To improve
- Organize icons by category
- Add yaml for all skill and use store to add in main.typ _(Better to improve
collaboration)_
- Add `skillsToImprove` logics
- Add automatics summary for relevant mission
- Add annotation for all missions relevant
- I need to think it again
|
https://github.com/RY997/Thesis | https://raw.githubusercontent.com/RY997/Thesis/main/thesis_typ/appendix.typ | typst | MIT License

*User 1*
- What do you think of the overall Iris exercise adaptation feature? Can it be useful for the real work of editor or is it a redundant feature and not very helpful?
- I find the Iris exercise adaptation feature quite useful, particularly for editing programming exercises. It offers a convenient way to customize exercises to different skill levels, making content more effective. This feature is far from redundant; it enhances the relevance and challenge of exercises, which is essential for both students and tutors/instructors.
- What needs to be improved? Which aspect is good?
- Iris sometimes struggled to interpret my prompts accurately, which could be improved. While its response time could be faster, I find that less critical. However, Iris's ability to adapt exercises remains a strong point.
- What kind of tasks will you prefer to do by yourself instead of using Iris?
- For tasks like making minor manual code adjustments, such as removing or replacing snippets within functions, I'd prefer to handle them myself rather than using Iris.
- What kind of tasks will you prefer to let Iris do it?
- I'd let Iris handle tasks such as generating additional test cases, elaborating problem statements with clear explanations, and adding comments to the code for better understanding.
*User 2*
So what IRIS is good at is all the small stuff, like generating the algorithm for sorting etc. and generating a nice theme and good explanations for methods or tasks. What it does badly is that it fails to make consistent changes across all the files. I would maybe do it so that you could have more steps in between to check what IRIS is doing (good or bad) and also be able to specify better that it only has to change one part (e.g. a function)
In general i would say: Change the editor before IRIS and try to copy what copilot in VSCode does
*User 3*
The first request I made to Iris (to change from Binary to Jump Search) was handled well. However, it seems that it didn't understand the subsequent task at all. If it performs as it did with the first request, I would use it for significant changes, such as themes, and types of exercises changes, and would handle minor issues myself. But if it behaves as it did with all other requests I made, except the first one, then I wouldn't use it at all.
Overall, it seems that the understanding of my requests didn't work out as expected. It appeared that the second change to an exercise was consistently unsuccessful.
*User 4*
Iris is good at adapting exercise to meet specific and actionable requirements, making her the ideal choice for this scenario. However, for minor adjustments, I prefer handling them by myself.
*User 5*
The response time of Iris is somewhat slow, and I'm unable to resize Iris from the right side; only the left side can be adjusted, which is a bit inconvenient. |
https://github.com/chilingg/kaiji | https://raw.githubusercontent.com/chilingg/kaiji/main/template/main.typ | typst | Other | #let font_size_list = (16pt, 12pt, 9pt, 7pt);
#let font_cfg = (
font: "Source Han Serif",
weight: "light",
tracking: 0.1em,
lang: "zh"
)
#let sans_font_cfg = (
font: "Source Han Sans",
weight: "medium"
)
#let thin_line = 0.6pt
#let normal_line = 1pt
#let parenthese_numbers = ("⑴","⑵","⑶","⑷","⑸","⑹","⑺","⑻","⑼","⑽","⑾","⑿","⒀","⒁","⒂","⒃","⒄","⒅","⒆","⒇")
#let base_style(body) = [
#set page(margin: (x: 6em))
#set text(..font_cfg, size: font_size_list.at(2), tracking: 0.1em)
#set par(justify: true, leading: 0.8em)
#set list(indent: 0em, marker: none)
#set enum(indent: 0em)
#set line(stroke: thin_line)
#set rect(stroke: thin_line)
#show figure.where(
kind: image,
): it => box()[
#let n = counter(figure.where(kind: image)).at(it.location()).at(0)
#text(..sans_font_cfg, tracking: 0em)[⬜#(n + 500) #it.caption.body]
#it.body
]
#let chapter_title_interval = 32pt
#show heading.where(level: 1): it => {
if it.outlined {
return block(above: chapter_title_interval, below: chapter_title_interval)[
#set text(..sans_font_cfg, size: font_size_list.at(0), weight: "medium", tracking: 0em)
#counter(heading).display()
#it.body
]
} else {
      let tracking = if it.has("label") and it.label == <wide_title> {
        2em
      } else {
        0em
      }
      return text(size: font_size_list.at(0), weight: "medium", tracking: tracking)[#it.body]
}
}
#show heading.where(level: 2): it => {
if it.outlined {
return block(above: chapter_title_interval, below: chapter_title_interval)[
#set text(..sans_font_cfg, size: font_size_list.at(1), weight: "medium", tracking: 0em)
#counter(heading).display()
#it.body
]
} else {
      let tracking = if it.has("label") and it.label == <wide_title> {
        2em
      } else {
        0em
      }
      return text(size: font_size_list.at(1), weight: "medium", tracking: tracking)[#it.body]
}
}
#show heading.where(level: 3): it => [
#set text(..sans_font_cfg, size: font_size_list.at(2))
#it.body
]
#show <sans_font>: it => [
#set text(..sans_font_cfg)
#it
]
#show <center>: it => {
align(center)[#it]
}
#body
]
#let main_body(body) = [
#set page(
numbering: "1",
footer: locate(loc => {
let place = if calc.odd(counter(page).at(loc).first()){
right
} else {
left
}
return align(place)[#counter(page).display()]
})
)
#set heading(
outlined: true,
numbering: (..nums) => "61" + nums
.pos()
.map(str)
.join("-"),
)
#base_style(body)
]
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/pad_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Pad can grow.
#pad(left: 10pt, right: 10pt)[PL #h(1fr) PR]
|
https://github.com/MultisampledNight/flow | https://raw.githubusercontent.com/MultisampledNight/flow/main/src/callout.typ | typst | MIT License | #import "gfx.typ"
#import "palette.typ": *
#let _callout(body, accent: fg, marker: none) = {
let body = if marker == none {
body
} else {
let icon = gfx.markers.at(marker).icon
grid(
columns: (1.5em, auto),
gutter: 0.5em,
align: (right + horizon, left),
icon(invert: false),
body,
)
}
block(
stroke: (left: accent),
inset: (
left: if marker == none { 0.5em } else { 0em },
y: 0.5em,
),
body,
)
}
#let question = _callout.with(
accent: status.unknown,
marker: "?",
)
#let remark = _callout.with(
accent: status.remark,
marker: "i",
)
#let hint = _callout.with(
accent: status.hint,
marker: "o",
)
#let caution = _callout.with(
accent: status.urgent,
marker: "!",
)
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/024_Shadows%20over%20Innistrad.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("Shadows over Innistrad", doc)
#include "./024 - Shadows over Innistrad/001_Under the Silver Moon.typ"
#include "./024 - Shadows over Innistrad/002_A Gaze Blank and Pitiless.typ"
#include "./024 - Shadows over Innistrad/003_Unwelcome.typ"
#include "./024 - Shadows over Innistrad/004_Sacrifice.typ"
#include "./024 - Shadows over Innistrad/005_The Mystery of Markov Manor.typ"
#include "./024 - Shadows over Innistrad/006_The Drownyard Temple.typ"
#include "./024 - Shadows over Innistrad/007_Promises Old and New.typ"
#include "./024 - Shadows over Innistrad/008_Liliana's Indignation.typ"
#include "./024 - Shadows over Innistrad/009_Games.typ"
#include "./024 - Shadows over Innistrad/010_The Lunarch Inquisition.typ"
#include "./024 - Shadows over Innistrad/011_Stories and Endings.typ"
#include "./024 - Shadows over Innistrad/012_I Am Avacyn.typ"
|
|
https://github.com/lucannez64/Notes | https://raw.githubusercontent.com/lucannez64/Notes/master/Physique_Mecanique_1.typ | typst | #import "@preview/bubble:0.1.0": *
#import "@preview/fletcher:0.4.3" as fletcher: diagram, node, edge
#import "@preview/cetz:0.2.2": canvas, draw, tree
#import "@preview/cheq:0.1.0": checklist
#import "@preview/typpuccino:0.1.0": macchiato
#import "@preview/wordometer:0.1.1": *
#import "@preview/tablem:0.1.0": tablem
#show: bubble.with(
title: "Physique Mecanique 1",
subtitle: "18/09/2024",
author: "<NAME>",
affiliation: "EPFL",
year: "2024/2025",
class: "Génie Mécanique",
logo: image("JOJO_magazine_Spring_2022_cover-min-modified.png"),
)
#set page(footer: context [
#set text(8pt)
#set align(center)
#text("page "+ counter(page).display())
]
)
#set heading(numbering: "1.1")
#show: checklist.with(fill: luma(95%), stroke: blue, radius: .2em)
= Review

Resistance to being set in motion depends on the geometry.
$mat(p, F; L_0, M_0)$
$L_0 = m arrow(v) and d$
= Action-Reaction
$ arrow(F)^(i arrow j) = - arrow(F)^(j arrow i) $
$ m_1dot(arrow(v))_1 = arrow(F)_1^("ext") + arrow(F)^(2 arrow 1) + arrow(F)^(3 arrow 1) $
$ m_2dot(arrow(v))_2 = arrow(F)_2^("ext") + arrow(F)^(1 arrow 2) + arrow(F)^(3 arrow 2) $
$ m_3dot(arrow(v))_3 = arrow(F)_3^("ext") + arrow(F)^(2 arrow 3) + arrow(F)^(1 arrow 3) $
= Moment of a force
$ arrow(M)_0 colon.eq sum_alpha arrow(O P)_alpha and arrow(F)_alpha $
= Angular momentum
$ arrow(L_0) colon.eq sum_alpha arrow(O P)_alpha and m arrow(v)_alpha $
= Newton's law in rotation
$ d/(d t) (arrow(L)_0) = arrow(M)_0 $
$ = sum dot(arrow( O P)) and m arrow(v) + sum arrow(O P) and m dot(arrow(v)) $
$ = sum arrow(v) and m arrow(v) + sum arrow(O P) and arrow(F) = arrow(M)_0 quad "since" quad arrow(v) and m arrow(v) = arrow(0) $
= Derivative of the dot product

$ d/(d t) (arrow(a) dot arrow(b)) = dot(arrow(a)) dot arrow(b) + arrow(a) dot dot(arrow(b)) $
= Derivative of the cross product

$ d/(d t) (arrow(a) and arrow(b)) = dot(arrow(a)) and arrow(b) + arrow(a) and dot(arrow(b)) $
= Exercise: the monkey and the ball
$ dot(v_x) = 0 $
$ dot(v_z) = -g $
$ dot(x) = C_1 $
$ dot(z) = -g t + C_2 $
$ x = C_1 t + E $
$ z = -g/2 t² + C_2 t + E_2 $
$ C_1 = v_(x 0) = v_0 cos(theta) $
$ C_2 = v_(z 0) = v_0 sin(theta) $
$ E = x_0 $
$ E_2 = z_0 $
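
A possible wrap-up of the exercise (my own sketch, assuming the standard setup: the ball is fired at $t = 0$ aimed at the monkey's initial position $(x_m, z_m)$, and the monkey lets go at the same instant). The aiming condition gives $tan(theta) = z_m \/ x_m$, so at the time $t^* = x_m \/ (v_0 cos(theta))$ when the ball reaches the monkey's horizontal position:

$ z_"ball" (t^*) = x_m tan(theta) - g/2 (t^*)^2 = z_m - g/2 (t^*)^2 = z_"monkey" (t^*) $

Both fall with the same $-g t^2\/2$ term, so the ball hits the monkey regardless of $g$ and $v_0$ (as long as the ball actually reaches $x_m$).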
|
|
https://github.com/valentinvogt/npde-summary | https://raw.githubusercontent.com/valentinvogt/npde-summary/main/src/chapters/01.typ | typst | #import "../setup.typ": *
#show: thmrules
= Second-Order Scalar Elliptic Boundary Value Problems
<ch:01>
#counter(heading).step(level: 2)
== Quadratic Minimization Problems
<sub:quadratic-minimization-problems>
#v(0.2cm)
In the following, let $V$ be a vector space over $bb(R)$.\
#mybox(
"Linear forms",
)[
$ell : V arrow.r bb(R)$ is a #emph[linear form / linear functional] $arrow.l.r.double.long$
#neq(
$ ell (alpha u + beta v) = alpha ell (u) + beta ell (v) wide forall u , v in V , forall alpha , beta in RR $,
)
]
#v(-1cm)
#let bs = colMath("s", rgb("2900d3"))
#let bt = colMath("t", rgb("2900d3"))
#mybox(
"Bilinear forms",
)[
$a : V times V arrow.r bb(R)$ is a #emph[bilinear form] $arrow.l.r.double.long$
#neq(
$ a \( &bs u_1 + u_2 , bt v_1 + v_2 \)\
& = bs bt dot.op a (u_1 , v_1) + bs a (u_1 , v_2) + bt a (u_2 , v_1) + a (u_2 , v_2) forall u_1 , u_2 , v_1 , v_2 in V , forall bs , bt in bb(R) $,
)
]
#v(-1cm)
#mybox(
"Positive definiteness",
)[
A bilinear form $a : V times V arrow.r bb(R)$ is #emph[positive definite] if
$ u in V \\ { bold(0) } arrow.l.r.double.long a (u , u) > 0 $
It is #emph[positive semi-definite] if
$ a (u , u) & gt.eq 0 quad forall u in V $
]
#v(-1.1cm)
#mybox(
"Quadratic Functional",
)[
A #emph[quadratic functional] $J : V arrow.r bb(R)$ is defined by
#neq($ J (u) := 1 / 2 a (u , u) - ell (u) + c, quad u in V $)
where $a : V times V arrow.r bb(R)$ a symmetric bilinear form, $ell : V arrow.r bb(R)$ a
linear form and $c in bb(R)$.
]
#v(-1cm)
#mybox(
"Continuity of linear form",
)[
A linear form $ell : V arrow.r bb(R)$ is #emph[continuous / bounded] on $V$, if
#neq(
$ exists C > 0 quad lr(|ell (v)|) lt.eq C norm(v) quad forall v in V, $,
) <eq:continuity-linear-form>
where $norm(dot)$ is a norm on $V$.
]
== Sobolev Spaces
<sub:sobolev-spaces>
When we solve a minimization problem, we first need to define the space of
functions in which we look for the solution. For example, in Physics, we
generally want the solution to be continuous. E.g, a function describing the
shape of an elastic string should not have jumps.
It turns out that the correct space to describe our minimization problems is the #strong[Sobolev space];.
For functions $u$ in a Sobolev space, the bilinear form $a$ in the quadratic
functional is well defined (i.e., $a(u,u)<oo$). Hence the space in which we look
for minimizers is determined by the given quadratic functional. To select the
space for your problem, follow the guideline ...
#emph[Choose the largest space such that the problem is well defined];.
#mybox(
"Sobolev Spaces",
)[
$H^1_0 (Omega)$ is a vector space with norm
$ |v|_(H^1) := (integral_Omega norm(grad v)^2 dif bx)^(1 / 2) $
$H^1 (Omega)$ is another vector space with norm
$ norm(v)^2_(H^1) := norm(v)^2_(L^2) + |v|^2_(H^1) $
Note that $|dot|_(H^1)$ is not a norm on the space $H^1 (Omega)$, but a
seminorm.
  Both spaces contain all functions for which the respective norm is finite (and,
  in the case of $H^1_0 (Omega)$, which additionally vanish on $partial Omega$).
]
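
As an illustrative sanity check of these definitions (not part of the original notes): take $Omega = (0,1)$ and $v(x) = x(1-x) in H^1_0 (Omega)$. Then

$ |v|_(H^1)^2 = integral_0^1 (1-2x)^2 dif x = 1/3, quad norm(v)^2_(L^2) = integral_0^1 x^2 (1-x)^2 dif x = 1/30, $

so $norm(v)^2_(H^1) = 1/30 + 1/3 = 11/30$. Both norms are finite and $v$ vanishes on $partial Omega$, so $v$ indeed belongs to both spaces.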
*Alternative notation* for norms includes $norm(dot)_0$ for $norm(dot)_(L^2)$ and $|dot|_1$ for $|dot|_(H^1)$.
If the quadratic minimization problem is well defined, we get the following
lemma for existence and uniqueness of minimizers:
#theorem(
number: "1.3.3.6", "Existence of minimizers in Hilbert spaces",
)[
On a real Hilbert space $V$ with norm $norm(dot)_a$ for any
$norm(dot)_a$-bounded linear functional $ell : V arrow.r bb(R)$, the quadratic
minimization problem
#neq($ u_(\*) & = op("argmin", limits: #true)_(v in V) J (v)\
J (v) & := 1 / 2 norm(v)_a^2 - ell (v) $)
has a unique solution.
] <thm:existence-minimizer-hilbert>
Note that here, we use the bilinear form to define the norm
$norm(u)_a = sqrt(a (u , u))$. The main point is that we can see the bilinear
form of the quadratic minimization problem as the norm of some Sobolev space.
The above theorem guarantees that a solution exists in this space if the linear
form is bounded.
For checking boundedness we can often use Cauchy--Schwarz
(@eq:cauchy-schwarz-integrals) and Poincaré--Friedrichs
(@thm:poincare-friedrichs).
#pagebreak(weak: true)
== Linear Variational Problem
<sub:linear-variational-problem>
#definition(
number: "1.4.1.6", "Linear Variational Problem",
)[
Let $V$ be a vector (function) space, $mhat(V) subset V$ an affine space, and $V_0 subset V$ the
associated subspace. The equation
#neq(
$ #text("Find") u in mhat(V) med #text("such that") a (u , v) = ell (v) quad forall v in V_0 $,
) <eq:linear-variational-problem>
is called a (generalized) #emph[linear variational problem];, if
- $a : V times V_0 arrow.r bb(R)$ is a bilinear form
- $ell : V_0 arrow.r bb(R)$ is a linear form
]
@thm:existence-minimizer-hilbert tells us that the minimization problem has a
solution, but knowing that a solution exists is of course not enough: We want to
find it, but an infinite-dimensional minimization problem is hard to solve. To
make it easier, we reformulate the problems in a linear variational form
@eq:linear-variational-problem, which is quite close to something we can solve
numerically. To do this transformation, we use the following equivalence:
#theorem(
number: "1.4.1.8", "Equivalence of quadratic
minimization and linear variational problem",
)[
For a (generalized) quadratic functional $J (v) = 1 / 2 a (v , v) - ell (v) + c$ on
a vector space $V$ and with a symmetric positive definite bilinear form $a : V times V arrow.r bb(R)$ the
following is equivalent:
- The quadratic minimization problem for $J (v)$ has the unique minimizer $u_(\*) in mhat(V)$ over
the affine subspace $mhat(V) = g + V_0 , g in V$.
- The linear variational problem $ u in mhat(V) quad a (u , v) = ell (v) &quad forall v in V_0 $ has
the unique solution $u_(\*) in mhat(V).$
]<thm:variational-problem-equiv>
Note that the trial space $mhat(V)$, from which we pick a solution, and the test
space $V_0$ can be different. For an example of different trial and test spaces,
see @sub:boundary-conditions.
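
A finite-dimensional analogue of this equivalence can be checked numerically (an illustrative sketch of mine, not from the summary): for a symmetric positive definite matrix $A$, the minimizer of $J(v) = 1/2 v^T A v - b^T v$ is exactly the solution of the linear system $A u = b$.

```python
import numpy as np

# Finite-dimensional analogue of the equivalence theorem:
# J(v) = 1/2 v^T A v - b^T v with A symmetric positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)      # symmetric positive definite
b = rng.standard_normal(4)

def J(v):
    return 0.5 * v @ A @ v - b @ v

u = np.linalg.solve(A, b)        # solution of the "variational problem" A u = b

# u is the unique minimizer: J strictly increases under any perturbation,
# since J(u + w) - J(u) = 1/2 w^T A w > 0 for w != 0.
for _ in range(100):
    w = 1e-3 * rng.standard_normal(4)
    assert J(u + w) > J(u)
```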
#pagebreak(weak: true)
== Boundary Value Problems
<sub:boundary-value-problems>
#lemma(
number: "1.5.2.1", "General product rule", ..unimportant,
)[
For all $bold(j) in (C^1 (overline(Omega)))^d , v in C^1 (overline(Omega))$ holds
#neq(
$ div (bold(j) v) = v div bold(j) + bold(j) dot.op grad v quad upright("in") Omega $,
)
] <thm:general-product-rule>
#lemma(
number: "1.5.2.4", "Gauss' Theorem",
)[
Let $bold(n) : partial Omega arrow.r bb(R)^d$ denote the exterior unit normal
vector field on $partial Omega$ and $dif S$ denote integration over a surface. We
have
#neq(
$ integral_Omega div bold(j (x)) dif bx = integral_(partial Omega) bold(j (x) dot.op n (x)) dif S (bx) quad forall bold(j) in (C_(upright(p w))^1 (overline(Omega)))^d $,
)
] <thm:gauss-theorem>
#lemma(
number: "1.5.2.7", "Green's first formula",
)[
For all vector fields $bold(j) in (C^1_"pw" (overline(Omega)))^d$ and functions $v in C^1_"pw" (overline(Omega))$ holds
#neq(
$ integral_Omega bold(j) dot.op grad v dif bx = - integral_Omega div bold(j) thin v dif bx + integral_(partial Omega) bold(j dot.op n) thin v dif S $,
)
] <thm:greens-formula>
#lemma(
number: "1.5.3.4", "Fundamental lemma of the calculus of variations",
)[
If $f in L^2 (Omega)$ satisfies
#neq(
$ integral_Omega f (bx) v (bx) dif bx = 0 quad forall v in C_0^oo (Omega), $,
)
then $f equiv 0$.
] <thm:fund-lemma>
We have seen that minimizing a quadratic functional is equivalent to solving a
linear variational problem @eq:linear-variational-problem. The variational
problem is called the #strong[weak form];. We can transform it (with extra
smoothness requirements) into the problem's #strong[strong form];, an elliptic
BVP (PDE with boundary conditions).
#tip-box(
"Weak to strong",
)[
  + Use @thm:greens-formula to get rid of derivatives on $v$ (e.g. turn $grad u dot grad v$ into $-div(grad u) thin v + ...$)
+ Use properties of the test space (usually that $v=0$ on $partial Omega$) to get
rid of boundary terms
+ Use @thm:fund-lemma to remove the integrals and test functions
]
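
As a worked instance of these three steps (the standard model problem, added for illustration): start from the weak form

$ integral_Omega grad u dot grad v dif bx = integral_Omega f v dif bx quad forall v in C_0^oo (Omega). $

Green's first formula turns the left-hand side into $- integral_Omega div(grad u) thin v dif bx + integral_(partial Omega) grad u dot bold(n) thin v dif S$; the boundary term vanishes because $v = 0$ on $partial Omega$, and the fundamental lemma then yields the strong form $-div(grad u) = -Delta u = f$ in $Omega$.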
#pagebreak(weak: true)
#counter(heading).step(level: 2)
== Boundary Conditions
<sub:boundary-conditions>
For 2nd-order elliptic BVPs we need boundary conditions to get a unique
solution. To be more precise, we need #strong[exactly one] of the following
boundary conditions on every part of $partial Omega$.
#mybox("Main boundary conditions for 2nd-order elliptic BVPs")[
- #strong[Dirichlet]: $u$ is fixed to be
$g : partial Omega arrow.r bb(R)$
$ u = g quad upright("on") thin partial Omega $
- #strong[Neumann]: the flux $bold(j) = - kappa (bx) grad u$
through $partial Omega$ is fixed with
$h : partial Omega arrow.r bb(R)$
$ bold(j dot.op n) = - h quad upright("on") thin partial Omega $
- #strong[Radiation]: flux depends on $u$ with an increasing function
$Psi : bb(R) arrow.r bb(R)$
$ bold(j dot.op n) = Psi (u) quad upright("on") thin partial Omega $
]
In the weak form, Dirichlet conditions have to be imposed directly on the
*trial* space. The test space needs to be set to 0 wherever Dirichlet conditions
are given ("_Don't test where the solution is known_"). For example, trial and
test spaces for a standard Dirichlet problem are
$ V &= Set(u in C^1(Omega)&, && u=g "on" partial Omega) \
V_0 &= Set(v in C^1(Omega)&, && v=0 "on" partial Omega) $
Dirichlet BCs are called #strong[essential boundary conditions];.
Neumann conditions, which are only enforced through some term in the variational
equation, are called #strong[natural boundary conditions];.
There are some constraints on the boundary data:
#subtle-box[
- #strong[Admissible Dirichlet Data];: Dirichlet boundary values need to be
continuous.
- #strong[Admissible Neumann Data];: $h$ needs to be in $L^2 (Omega)$
(can be discontinuous)
]
The following theorem is frequently needed when dealing with integrals over the
boundary:
#theorem(
number: "1.9.0.19", title: "Theorem", "Multiplicative trace inequality",
)[
#neq(
$ exists C = C (Omega) > 0 : norm(u)_(L^2(partial Omega)) lt.eq C norm(u)_(L^2(Omega)) dot.op norm(u)_(H^1(Omega)) quad forall u in H^1 (Omega) $,
)
] <thm:mult-trace-inequality>
#pagebreak(weak: true)
== Second-Order Elliptic Variational Problems
<sub:second-order-elliptic-variational-problems>
We have seen how we can get from a minimization problem via a variational
problem to a BVP. Now we want to move in the opposite direction: from a PDE with
boundary conditions to a variational problem.
#tip-box(
"Strong to weak",
)[
+ Test the PDE with (multiply by $v$) and integrate over $Omega$
+ Use @thm:greens-formula to "shift" one derivative from $u$ to $v$ (e.g., from $-div(grad u)$ to $grad u dot grad v + ...$)
+ Use Neumann BC on boundary terms ($grad u dot n = h$)
+ Pick Sobolev trial/test spaces $V,V_0$ such that
- $a(u,u)$ is finite for $u in V,V_0$
- boundary conditions are satisfied ($u=g$ in $V$ $=>$ $v=0$ in $V_0$)
To fulfill the first condition, we can define the "base" space for both trial
and test as $Set(v, a(v,v)<oo)$, which is equal to $H^1$ for the usual $Delta u = f$ problem.
If there are extra (e.g., boundary) terms in $a$, try to bound these with the $H^1$ norm.
]
For Neumann problems there is a #strong[compatibility condition];. If we choose
test function $v equiv 1$ we get the requirement
$ - integral_(partial Omega) h dif S = integral_Omega f dif bx $
for the existence of solutions. Additionally, the solution of Neumann problems
is unique only up to constants. To address this we can use the constrained
function space
$ H_(\*)^1 (Omega) := { v in H^1 (Omega) : integral_Omega v dif bx = 0 } $
#theorem(
number: "1.8.0.20", title: "Theorem", [Second Poincaré--Friedrichs inequality],
)[
If $Omega subset bb(R)^d$ is bounded and connected, then
#neq(
$ exists C = C (Omega) > 0 : norm(u)_0 lt.eq C "diam" #h(-0.1pt) (Omega) thin norm(grad u)_0 quad forall u in H_(\*)^1 (Omega) $,
)
] <thm:poincare-friedrichs>
This theorem tells us that (under some conditions), the $L^2$ norm of functions
from this space is bounded by the $H^1$-seminorm.
|
|
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/radial/components/decision-matrix.typ | typst | The Unlicense | #import "../colors.typ": *
#import "/utils.typ"
#let decision-matrix = utils.make-decision-matrix((properties, data) => {
set align(center)
let winning-row
for (index, choice) in data.values().enumerate() {
if choice.total.highest {
winning-row = index + 2
}
}
table(
stroke: none,
columns: properties.len() + 2,
fill: (_, row) => {
if row == winning-row { green }
else if calc.odd(row) { surface-3 }
else if calc.even(row) { surface-1 }
},
// Top line
table.hline(stroke: (cap: "round", thickness: 2pt)),
// Blank column to account for names of choices
[],
// Print out all the properties
..for property in properties {
([ *#property.name* ],)
},
// Last box in the row
[*Total*],
// Print out the data for each choice
..for (index, choice) in data {
(
[#index],
..for property in properties {
let value = choice.at(property.name)
([#value.weighted],)
},
[#choice.total.weighted]
)
},
//..for result in data {
// Override the fill if the choice has the highest score
//let cell = if choice.values.total.highest { cellx.with(fill: green) } else { cellx }
//(cell[*#choice.name*], ..for value in choice.values {
//(cell[#value.at(1).value],)
//})
//},
// Bottom line
table.hline(stroke: (cap: "round", thickness: 2pt)),
)
})
|
https://github.com/xdoardo/co-thesis | https://raw.githubusercontent.com/xdoardo/co-thesis/master/thesis/chapters/imp/analysis/dia.typ | typst | #import "/includes.typ":*
#import "@preview/prooftrees:0.1.0"
#let bisim = "≋"
#let conv(c, v) = { $#c arrow.b.double #v$ }
#let div(c) = { $#c arrow.t.double$ }
#let fails(c) = { $#c arrow.zigzag$ }
#linebreak()
=== Definite initialization analysis<subsection-imp-analysis_optimization-dia>
The first transformation we describe is *definite initialization analysis*. In
general, the objective of this analysis is to ensure that no variable is ever
used before being initialized, which is exactly the only kind of failure we
chose to model.
==== Variables and indicator functions<subsubsection-imp-dia-vars>
This analysis deals with variables. Before delving into its details, we show
first a function to compute the set of variables used in arithmetic and boolean
expressions. The objective is to come up with a _set_ of identifiers that appear
in the expression: we chose to represent sets in Agda using characteristic
functions, which we simply define as parametric functions from a parametric set
to the set of booleans, that is ```hs CharacteristicFunction = A -> Bool```;
later, we will instantiate this type for identifiers, calling the resulting
type ```hs VarsSet```. First, we assume a (parametric) notion of member
equality (that is, a function ```hs _==_ : A -> A -> Bool```); then, we define
the usual operations on sets (insertion, union, and intersection) and the
usual definition of inclusion for characteristic functions.
#mycode(label: <code-charfun>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Data/CharacteristicFunction.agda#L13")[
//typstfmt::off
```hs
module Data.CharacteristicFunction {a} (A : Set a) (_==_ : A -> A -> Bool) where
-- ...
CharacteristicFunction : Set a
CharacteristicFunction = A -> Bool
-- ...
∅ : CharacteristicFunction
∅ = λ _ -> false
_↦_ : (v : A) -> (s : CharacteristicFunction) -> CharacteristicFunction
(v ↦ s) x = (v == x) ∨ (s x)
_∪_ : (s₁ s₂ : CharacteristicFunction) -> CharacteristicFunction
(s₁ ∪ s₂) x = (s₁ x) ∨ (s₂ x)
_∩_ : (s₁ s₂ : CharacteristicFunction) -> CharacteristicFunction
(s₁ ∩ s₂) x = (s₁ x) ∧ (s₂ x)
_⊆_ : (s₁ s₂ : CharacteristicFunction) -> Set a
s₁ ⊆ s₂ = ∀ x -> (x-in-s₁ : s₁ x ≡ true) -> s₂ x ≡ true
```
//typstfmt::on
]
#theorem(
name: "Equivalence of characteristic functions",
label: <thm-cf-equiv>
)[
(using the *Axiom of extensionality*)
#mycode(proof: <proof-cf-equiv>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Data/CharacteristicFunction.agda#L48")[
//typstfmt::off
```hs
cf-ext : ∀ {s₁ s₂ : CharacteristicFunction}
(a-ex : ∀ x -> s₁ x ≡ s₂ x) -> s₁ ≡ s₂
```
//typstfmt::on
]]
#theorem(name: "Neutral element of union",
label: <thm-if-neutral-union>)[
#mycode("https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Data/CharacteristicFunction.agda#L58")[
//typstfmt::off
```hs
∪-∅ : ∀ {s : CharacteristicFunction} -> (s ∪ ∅) ≡ s
```
//typstfmt::on
]]
#theorem(name: "Update inclusion")[
#mycode("https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Data/CharacteristicFunction.agda#L61")[
//typstfmt::off
```hs
↦=>⊆ : ∀ {id} {s : CharacteristicFunction} -> s ⊆ (id ↦ s)
```
//typstfmt::on
]]
#theorem(name: "Transitivity of inclusion", label: <thm-cf-trans> )[
#mycode("https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Data/CharacteristicFunction.agda#L87")[
//typstfmt::off
```hs
⊆-trans : ∀ {s₁ s₂ s₃ : CharacteristicFunction} -> (s₁⊆s₂ : s₁ ⊆ s₂)
-> (s₂⊆s₃ : s₂ ⊆ s₃) -> s₁ ⊆ s₃
```
//typstfmt::on
]]
We will also need a way to get a ```hs VarsSet``` from a ```hs Store```, which
is shown in @code-store-domain.
#mycode(label: <code-store-domain>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Syntax/Vars.agda#L35")[
//typsfmt::off
```hs
dom : Store -> VarsSet
dom s x with (s x)
... | just _ = true
... | nothing = false
```
//typstfmt::on
]
==== Realization<subsubsection-imp-dia-realization>
Following @concrete-semantics, the first formal tool we need is a way to
compute the set of variables mentioned in expressions, shown in
@code-avars and @code-bvars. We also need a function to compute the set of variables that
are definitely initialized in commands, which is shown in @code-cvars.
#grid(
columns: 2,
//align: center + horizon,
//auto-vlines: false,
//auto-hlines: false,
gutter: 5pt,
mycode(label: <code-avars>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Syntax/Vars.agda#L16")[
//typstfmt::off
```hs
avars : (a : AExp) -> VarsSet
avars (const n) = ∅
avars (var id) = id ↦ ∅
avars (plus a₁ a₂) =
(avars a₁) ∪ (avars a₂)
```
//typstfmt::on
],
mycode(label: <code-bvars>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Syntax/Vars.agda#L21")[
//typstfmt::off
```hs
bvars : (b : BExp) -> VarsSet
bvars (const b) = ∅
bvars (le a₁ a₂) =
(avars a₁) ∪ (avars a₂)
bvars (not b) = bvars b
bvars (and b b₁) =
(bvars b) ∪ (bvars b₁)
```
//typstfmt::on
])
#mycode(label: <code-cvars>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Syntax/Vars.agda#L27")[
//typstfmt::off
```hs
cvars : (c : Command) -> VarsSet
cvars skip = ∅
cvars (assign id a) = id ↦ ∅
cvars (seq c c₁) = (cvars c) ∪ (cvars c₁)
cvars (ifelse b cᵗ cᶠ) = (cvars cᵗ) ∩ (cvars cᶠ)
cvars (while b c) = ∅
```
//typstfmt::on
]
It is worth reflecting on the definition in @code-cvars. This code computes
the set of _initialized_ variables in a command `c`; as done in
@concrete-semantics, we construct this set in the most conservative way
possible: of course, `skip` does not initialize any variable, and
`assign id a` adds `id` to the set of initialized variables. When considering
composite commands, however, we must take into account that, except for
`seq c c₁`, not every branch of execution is taken; this means that we cannot
know statically whether `ifelse b cᵗ cᶠ` will lead to the execution of `cᵗ` or
of `cᶠ`, so we take the intersection of their initialized variables, that is,
we compute the set of variables that are surely initialized whether one or the
other executes. The same reasoning applies to `while b c`: we cannot possibly
know whether `c` will ever execute, so we consider no new variables
initialized.
At this point it should be clear that, since `cvars c` computes the set of
initialized variables in a conservative fashion, the actual execution of the
command may well initialize additional variables. However, if the evaluation
of a command in a store $sigma$ converges to a store $sigma'$, that is
$#conv([$c$, $sigma$], $sigma'$)$, then by @lemma-ceval-store-tilde[Lemma]
$"dom" sigma subset.eq "dom" sigma'$;
this allows us to show the following lemma.
#lemma(label: <lemma-ceval-sc-subeq>)[
Let $c$ be a command and $sigma$ and $sigma'$ be two stores. Then
#align(center,
$#conv($"ceval" c space sigma$, $sigma'$) -> ("dom" sigma union
("cvars" c)) space subset.eq ("dom" sigma')$)
#mycode(proof: <proof-ceval-sc-subeq>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Semantics/BigStep/Functional/Properties.agda#L133")[
//typstfmt::off
```hs
ceval⇓=>sc⊆s' : ∀ (c : Command) (s s' : Store) (h⇓ : (ceval c s) ⇓ s')
-> (dom s ∪ (cvars c)) ⊆ (dom s')
```
//typstfmt::on
]]
We now give inference rules that inductively build the relation that embodies
the logic of the definite initialization analysis, shown in @imp-dia-rel. In
Agda, we define a datatype representing the relation of type
//typstfmt::off
```hs Dia : VarsSet -> Command -> VarsSet -> Set```,
//typstfmt::on
which is shown in @code-dia. @lemma-ceval-sc-subeq[Lemma]
will allow us to show that there is a relation between the `VarsSet` in the
`Dia` relation and the actual stores that are used in the execution of a
command.
#figure(
tablex(
columns: 2,
align: center + horizon,
auto-vlines: false,
auto-hlines: false,
prooftrees.tree(prooftrees.axi[], prooftrees.uni[Dia v skip v]),
prooftrees.tree(
prooftrees.axi[avars $a$ $subset.eq$ $v$],
prooftrees.uni[Dia $v$ (assign $id$ $a$) ($id ↦ v$)],
),
prooftrees.tree(
prooftrees.axi(pad(bottom: 4pt, [Dia $v_1$ $c_1$ $v_2$])),
prooftrees.axi(pad(bottom: 4pt, [Dia $v_2$ $c_2$ $v_3$])),
prooftrees.nary(2)[Dia $v_1$ (seq $c_1$ $c_2$) $v_3$],
),
prooftrees.tree(
prooftrees.axi(pad(bottom: 2pt, [bvars $b$ $subset.eq$ $v$])),
prooftrees.axi(pad(bottom: 2pt, [Dia $v$ $c^t$ $v^t$])),
prooftrees.axi(pad(bottom: 2pt, [Dia $v$ $c^f$ $v^f$])),
prooftrees.nary(
3,
)[#pad(top: 2pt, [Dia $v$ (if $b$ then $c^t$ else $c^f$) ($v^t sect v^f$)])],
),
colspanx(2)[
#prooftrees.tree(
prooftrees.axi(pad(bottom: 3pt, [bvars $b$ $subset.eq$ $v$])),
prooftrees.axi(pad(bottom: 3pt, [Dia $v$ $c$ $v_1$])),
prooftrees.nary(2)[Dia $v$ (while $b$ $c$) $v$],
)
],
),
caption: "Inference rules for the definite initialization analysis",
supplement: "Table",
)<imp-dia-rel>
#mycode(label: <code-dia>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/DefiniteInitialization.agda#L22")[
//typstfmt::off
```hs
data Dia : VarsSet -> Command -> VarsSet -> Set where
skip : ∀ (v : VarsSet) -> Dia v (skip) v
assign : ∀ a v id (a⊆v : (avars a) ⊆ v) -> Dia v (assign id a) (id ↦ v)
seq : ∀ v₁ v₂ v₃ c₁ c₂ -> (relc₁ : Dia v₁ c₁ v₂) ->
(relc₂ : Dia v₂ c₂ v₃) -> Dia v₁ (seq c₁ c₂) v₃
if : ∀ b v vᵗ vᶠ cᵗ cᶠ (b⊆v : (bvars b) ⊆ v) -> (relcᶠ : Dia v cᶠ vᶠ) ->
(relcᵗ : Dia v cᵗ vᵗ) -> Dia v (ifelse b cᵗ cᶠ) (vᵗ ∩ vᶠ)
while : ∀ b v v₁ c -> (b⊆s : (bvars b) ⊆ v) ->
(relc : Dia v c v₁) -> Dia v (while b c) v
```
//typstfmt::on
]
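
To see the rules at work, consider the following small derivation (an illustrative example, not from the original development): let `c ≡ seq (assign x (const 1)) (assign y (var x))` and start from `v = ∅`.

- `avars (const 1) = ∅ ⊆ ∅`, hence `Dia ∅ (assign x (const 1)) (x ↦ ∅)`;
- `avars (var x) = (x ↦ ∅) ⊆ (x ↦ ∅)`, hence `Dia (x ↦ ∅) (assign y (var x)) (y ↦ x ↦ ∅)`;
- by the `seq` rule, `Dia ∅ c (y ↦ x ↦ ∅)`.

Swapping the two assignments makes the derivation fail: for `assign y (var x)` with `v = ∅`, the premise `avars (var x) ⊆ ∅` does not hold, which is exactly the use-before-initialization error the analysis rejects.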
What we want to show now is that if ```hs Dia``` holds, then the evaluation of
a command $c$ does not result in an error: while
@thm-adia-safe and @thm-bdia-safe show
that if the variables of an arithmetic or boolean expression are all contained
in the domain of a store, the result of their evaluation cannot be a failure
(i.e. they result in "just" something, as expression evaluation cannot diverge),
@thm-dia-safe shows that if ```hs Dia``` holds, then the
evaluation of a program failing is absurd: therefore, by
@post-exec, the program either diverges or converges to some
value.
#theorem( name: "Safety of arithmetic expressions", label: <thm-adia-safe>)[
#mycode(proof: <proof-adia-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/DefiniteInitialization.agda#L47")[
//typstfmt::off
```hs
adia-safe : ∀ (a : AExp) (s : Store) (dia : avars a ⊆ dom s)
-> (∃ λ v -> aeval a s ≡ just v)
```
//typstfmt::on
]]
#theorem(name: "Safety of boolean expressions", label: <thm-bdia-safe>)[
#mycode(proof: <proof-bdia-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/DefiniteInitialization.agda#L60")[
//typstfmt::off
```hs
bdia-safe : ∀ (b : BExp) (s : Store) (dia : bvars b ⊆ dom s)
-> (∃ λ v -> beval b s ≡ just v)
```
//typstfmt::on
]]
#theorem(
name: "Safety of definite initialization for commands",
label: <thm-dia-safe>
)[
#mycode(proof: <proof-dia-safe>, "https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/DefiniteInitialization.agda#L117")[
//typstfmt::off
```hs
dia-safe : ∀ (c : Command) (s : Store) (v v' : VarsSet) (dia : Dia v c v')
(v⊆s : v ⊆ dom s) -> (h-err : (ceval c s) ↯) -> ⊥
```
//typstfmt::on
]]
We now show an idea of the proof (the full proof, in Agda, is in
@proof-dia-safe), examining the two base cases `c ≡ skip` and `c ≡ assign id a`
and the coinductive case `c ≡ while b c'`. The proof for the base cases is, in
words, based on the idea that the evaluation cannot possibly go wrong: note
that by the hypotheses, we have that `(ceval c s) ↯`, which we can express in
math as $ceval space c space sigma bisim now nothing$.
#show figure.where(kind: "boxenv"): set block(breakable: true)
#proof[
1. Let $c$ be the command `skip`. Then, for any store $sigma$, by the
definition of `ceval` in @code-ceval and by the inference rule $arrow.b.double$skip in
@imp-commands-semantics, the evaluation of $c$ in the store $sigma$ must be
#align(center, $ceval "skip" sigma eq.triple now (just sigma)$)
Given the hypothesis that #fails([c, $sigma$]), we now have that it must be
$now nothing bisim now (just sigma )$, which is false for any $sigma$, making the hypothesis
#fails([c, $sigma$]) impossible.
2. Let $c$ be the command `assign id a`, for some identifier $id$ and
arithmetic expression $a$. By the hypothesis, we have that it must be $"Dia"
v space (assign id a) space v'$ for some $v$ and $v'$, which entails that
the variables that appear in $a$, which we named $"avars" a$, are all
initialized in $v$, that is $"avars" a subset.eq v$; this and the
hypothesis that $v subset.eq "dom" sigma$ imply by @thm-cf-trans
that $"avars" a subset.eq "dom" sigma$.
By @thm-adia-safe, with the assumption that $"avars" a subset.eq "dom" sigma$,
it must be $aeval a sigma eq.triple just n$ for some $n : ZZ$. Again, by the
definition of `ceval` in @code-ceval and by the inference rule $arrow.b.double$assign
in @imp-commands-semantics, the evaluation of $c$ in the store $sigma$ must be
#align(center, $ceval (assign id a) space sigma eq.triple now (just ("update"
id n space sigma))$)
and, as before, by the hypothesis that $c$ fails it must thus be that $now
nothing bisim now (just ("update" id n space sigma))$, which is impossible for any $sigma$,
making the hypothesis #fails([c]) impossible.
3. Let $c$ be the command `while b c'` for some boolean expression $b$ and
some command $c'$. By @thm-bdia-safe, with the assumption that $"bvars" b
subset.eq "dom" sigma$, it must be $"beval" b space sigma eq.triple "just" v$ for some
$v : BB$.
#linebreak()
If $v eq.triple "false"$, then by the definition of `ceval` in
@code-ceval and by the inference rule $arrow.b.double$while-false in
@imp-commands-semantics, the evaluation of $c$ in the store $sigma$ must be
#align(center, $ceval ("while" b space c') space sigma eq.triple now ("just" sigma)$)
making the hypothesis that the evaluation of $c$ fails impossible.
#linebreak()
If, instead, $v eq.triple "true"$, we must evaluate $c'$ in $sigma$.
The case $c' eq.triple now nothing$ is impossible by the inductive hypothesis.
#linebreak()
If $c' eq.triple now (just sigma')$ for some $sigma'$, then, by recursion, it must be
#align(center, [```hs dia-sound (while b c) s' v v dia (⊆-trans v⊆s (ceval⇓=>⊆ c s s' (≡=>≋ eq-ceval-c))) w↯```])
#linebreak()
Finally, if $c' eq.triple "later" x$ for some $x$, then we can prove inductively that
#mycode("https://github.com/ecmma/co-thesis/blob/master/agda/lujon/Imp/Analysis/DefiniteInitialization.agda#L165", proof: <proof-dia-sound-while-later>)[
//typstfmt::off
```hs
dia-sound-while-later : ∀ {x : Thunk (Delay (Maybe Store)) ∞} {b c} {v}
(l↯⊥ : (later x)↯ -> ⊥) (dia : Dia v (while b c) v)
(l⇓s=>⊆ : ∀ {s : Store} -> ((later x) ⇓ s) -> v ⊆ dom s)
(w↯ : (bind (later x) (λ s -> later (ceval-while c b s))) ↯) -> ⊥
```
//typstfmt::on
] ]
The proof works by unwinding, inductively, the assumption that #fails([c]): if
it fails, then $ceval space c space sigma$ must eventually converge to $"now" space "nothing"$.
The proof thus works by showing base cases and, in the case of $"seq" space c_1 space c_2$
and $"while" space b space c' space eq.triple space "if" space b space "then" space ("seq" space c' space ("while" space b space c')) space "else" space "skip"$,
showing that by the inductive hypothesis $c_1$ or $c'$ cannot possibly fail; then,
the assumption becomes that it is the second command ($c_2$ or $"while" space b space c'$)
that fails, which we can inductively show absurd.
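The shape of the analysis (not of the coinductive proof) can be mirrored outside Agda. The following is a hypothetical Python sketch of the `Dia` relation on a tiny Imp-like AST of nested tuples — the names and the AST encoding are illustrative assumptions, not part of the thesis development: `dia` returns the set of definitely-initialized variables after a command, or `None` when the command may read an uninitialized variable, mirroring the inference rules of the table above (in particular, `while` returns $v$, not the body's result).

```python
# Hypothetical Python analogue of the Agda `Dia` relation: a definite-
# initialization checker for a tiny Imp-like AST. Commands are tuples such as
# ("assign", "x", expr) or ("while", b, body); this encoding is an assumption
# of the sketch.

def vars_of(expr):
    # Variables occurring in an arithmetic/boolean expression (nested tuples
    # whose first component is an operator tag; bare strings are variables).
    if isinstance(expr, str):
        return {expr}
    if isinstance(expr, tuple):
        return set().union(set(), *(vars_of(e) for e in expr[1:]))
    return set()

def dia(v, cmd):
    """Return the set of definitely-initialized variables after cmd,
    or None if cmd may read an uninitialized variable."""
    tag = cmd[0]
    if tag == "skip":
        return v
    if tag == "assign":
        _, ident, a = cmd
        return v | {ident} if vars_of(a) <= v else None
    if tag == "seq":
        v1 = dia(v, cmd[1])
        return dia(v1, cmd[2]) if v1 is not None else None
    if tag == "if":
        _, b, ct, cf = cmd
        if not vars_of(b) <= v:
            return None
        vt, vf = dia(v, ct), dia(v, cf)
        return vt & vf if vt is not None and vf is not None else None
    if tag == "while":
        # As in the `while` rule: the result is v itself, since the body
        # may never run.
        ok = vars_of(b := cmd[1]) <= v and dia(v, cmd[2]) is not None
        return v if ok else None
    raise ValueError(tag)
```

On the two branches of an `if`, only the intersection of the initialized sets survives, exactly as in the $v^t sect v^f$ conclusion of the rule.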
#show figure.where(kind: "boxenv"): set block(breakable: false)
|
|
https://github.com/DieracDelta/presentations | https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/utils/side-by-side.typ | typst | #import "../../../polylux.typ": *
#set page(paper: "presentation-16-9")
#set text(size: 40pt)
#polylux-slide[
#side-by-side[
#lorem(7)
][
#lorem(10)
][
#lorem(5)
]
]
|
|
https://github.com/MattiaOldani/Informatica-Teorica | https://raw.githubusercontent.com/MattiaOldani/Informatica-Teorica/master/capitoli/calcolabilità/12_riconoscibilità_automatica_insiemi.typ | typst | #import "../alias.typ": *
#import "@preview/lemmify:0.1.5": *
#let (
theorem, lemma, corollary,
remark, proposition, example,
proof, rules: thm-rules
) = default-theorems("thm-group", lang: "it")
#show: thm-rules
#show thm-selector("thm-group", subgroup: "theorem"): it => block(
it,
stroke: red + 1pt,
inset: 1em,
breakable: true
)
#show thm-selector("thm-group", subgroup: "proof"): it => block(
it,
stroke: green + 1pt,
inset: 1em,
breakable: true
)
= Riconoscibilità automatica di insiemi
Proviamo a dare una _gradazione_ sul livello di risoluzione dei problemi. Vogliamo capire se un dato problema:
- può essere risolto;
- non può essere risolto completamente (meglio di niente);
- non può essere risolto.
Costruiamo un programma che classifichi gli elementi di un insieme, quindi ci dica se un certo numero naturale appartiene o meno all'insieme.
Un insieme $A subset.eq NN$ è *riconoscibile automaticamente* se esiste un programma $P_A$ che classifica correttamente *ogni* elemento di $NN$ come appartenente o meno ad $A$, ovvero
$ x in NN arrow.long.squiggly P_A (x) = cases(1 & "se" x in A, 0 quad & "se" x in.not A) quad . $
Il programma $P_A$ deve essere:
- *corretto*: classifica correttamente gli elementi che riceve in input;
- *completo*: classifica tutti gli elementi di $NN$, nessuno escluso.
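As a concrete toy illustration (the choice of set is an assumption of this sketch, not from the text), here is such a $P_A$ for the set of even numbers: a total function that answers $1$ or $0$ for *every* natural number.

```python
# A minimal illustration of automatic recognizability: P_A is a total
# function returning 1 or 0 for every natural number. Here A is the set of
# even numbers -- a toy stand-in for a recursive set.

def p_even(x: int) -> int:
    # Correct: classifies its input properly; complete: defined on all of N.
    return 1 if x % 2 == 0 else 0
```

Correctness and completeness here amount to: the function is right on every input, and it is defined (and halts) on every input.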
_Are all sets automatically recognizable? Which sets are automatically recognizable? Which ones are not?_

We can be certain that not all sets are automatically recognizable; indeed, thanks to the notion of _cardinality_, we know that:
- the subsets of $NN$ are as dense as $RR$;
- we do not have $RR$-many programs, so some set is certainly not recognizable.
== Recursive sets

A set $A subset.eq NN$ is a *recursive set* if there exists a program $P_A$ that halts on every input and correctly classifies the elements of $NN$ according to whether they belong to $A$ or not.

Equivalently, recalling that the characteristic function of $A subset.eq NN$ is the function $ chi_A : NN arrow.long {0,1} $ such that $ chi_A (x) = cases(1 "if" x in A, 0 "if" x in.not A) quad , $ we say that the set $A$ is recursive if and only if $ chi_A in cal(T) . $

That $chi_A$ is total is trivial, because every characteristic function must be defined on all of $NN$. The issue lies in the computability of these functions.

The two definitions are equivalent:
- the program $P_A$ implements $chi_A$, hence $chi_A in cal(T)$ because there exists a program that computes it;
- $chi_A in cal(T)$, hence there exists a program $P_A$ that implements it and satisfies the definition given above.

=== Recursive vs decidable

One often says that _a recursive set is a decidable set_, but this is just an abuse of notation. It stems from the fact that to every set $A subset.eq NN$ we can associate its *recognition problem*, defined as follows:
- Name: $"RIC"_A$.
- Instance: $x in NN$.
- Question: $x in A$?

Its solution function $ Phi_("RIC"_A) : NN arrow.long {0,1} $ is such that $ Phi_("RIC"_A) (x) = cases(1 & "if" x in A, 0 quad & "if" x in.not A) quad . $

Note that the semantics of the problem is exactly the characteristic function, hence $Phi_("RIC"_A) = chi_A$. If $A$ is recursive, then its characteristic function is recursive total, hence so is the solution function $Phi$ and, consequently, $"RIC"_A$ is decidable.

=== Decidable vs recursive

Symmetrically, again with abuse of notation, one says that _a decision problem is a recursive problem_. This is because to every decision problem $Pi$ we can associate $A_Pi$, the *set of its positive instances*.

Given the problem
- Name: $Pi$.
- Instance: $x in D$.
- Question: $p(x)$?

we define $ A_Pi = {x in D bar.v Phi_Pi (x) = 1} "with" Phi_Pi (x) = 1 equiv p(x) $ as the set of positive instances of $Pi$. Note that, if $Pi$ is decidable, then $Phi_Pi in cal(T)$, hence there exists a program computing this function. That program is exactly the one that automatically recognizes the set $A_Pi$, hence $A_Pi$ is recursive.
== Non-recursive sets

To find non-recursive sets we look among the undecidable decision problems. The only undecidable decision problem we have seen is the *restricted halting problem* $arresto(ristretto)$.
- Name: $arresto(ristretto)$.
- Instance: $x in NN$.
- Question: $phi_(ristretto) (x) = phi_x (x) arrow.b$?

We define the set of positive instances of $arresto(ristretto)$: $ A = {x in NN bar.v phi_x (x) arrow.b}. $ This set cannot be recursive: if it were, we would have a recursive total program correctly classifying whether $x$ belongs to $A$ or not, but we proved that the restricted halting problem is undecidable, hence $A$ is not recursive.
== Recursive relations

$R subset.eq NN times NN$ is a *recursive relation* if and only if the set $R$ is recursive, that is:
- its characteristic function $chi_R$ satisfies $chi_R in cal(T)$, or
- there exists a program $P_R$ that, given $x,y in NN$ as input, returns $1$ if $(x R y)$, $0$ otherwise.

An important recursive relation is $ R_P = {(x,y) in NN^2 bar.v P "on input" x "halts in" y "steps"} . $

It is very similar to the halting problem, but we are not asking whether $P$ halts in general: we ask whether it halts in $y$ steps. This relation is recursive; to prove it we build a program classifying $R_P$ using:
- the universal interpreter $U$;
- a *clock* counting the interpretation steps;
- a *clock check* testing whether the bound $y$ has been reached.

We therefore define the program $ overset(U,tilde) = U + "clock" + "check clock" $ such that $ overset(U,tilde) equiv & "input"(x,y) \ & U(P,x) + "clock" \ & "at every step of" U(P,x): \ & quad "if clock" > y: \ & quad quad "output"(0) \ & quad "clock"++; \ & "output"("clock" == y) quad . $
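The clocked interpreter can be sketched in Python. As an assumption of this sketch (not the RAM model of the text), programs are modelled as generators where each `yield` counts as one interpretation step; the decider below always halts, which is what makes $R_P$ recursive.

```python
# Sketch of the clocked interpreter U~: run a program for at most y steps and
# report whether it halts in exactly y steps. Each `yield` in the modelled
# program is one interpretation step (an assumption of this sketch).

def halts_in(prog, x, y):
    """Decide the relation R_P: does prog on input x halt in y steps?"""
    steps = 0
    it = prog(x)
    try:
        while True:
            next(it)        # one step of the interpreter
            steps += 1
            if steps > y:   # clock check: bound exceeded, answer 0
                return 0
    except StopIteration:   # the interpreted program produced its output
        return 1 if steps == y else 0

def double(x):
    # Example "program": takes x loop iterations (= x steps) to finish.
    acc = 0
    for _ in range(x):
        acc += 2
        yield
```

Note that `halts_in` is total even when `prog` diverges, because the clock cuts the run off after `y + 1` steps.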
In the RAM system, for instance, to detect whether the output has been produced we check whether the PC, stored in register $L$, equals 0.

Let us go back to the restricted halting problem: _how can we express $A = {x in NN bar.v phi_x (x) arrow.b}$ through the recursive relation $R_ristretto$?_

We can define the set $ B = {x in NN bar.v exists y in NN bar.v (x R_ristretto y)}. $

Note that $A = B$:
- $A subset.eq B$: if $x in A$, the program encoded by $x$ halts on input $x$ in some number of steps. Call that number $y$. Then $ristretto(x)$ halts in $y$ steps, so $x R_ristretto y$ and hence $x in B$;
- $B subset.eq A$: if $x in B$, there exists $y$ such that $x R_ristretto y$, hence $ristretto(x)$ halts in $y$ steps; but then the program $ristretto = x$ halts on input $x$, hence $x in A$.
== Recursively enumerable sets

A set $A subset.eq NN$ is *recursively enumerable* if it is *automatically listable*: there exists a _routine_ $F$ that, on input $i in NN$, outputs $F(i)$, the $i$-th element of $A$.

The program listing the elements of $A$ is: $ P equiv & i := 0; \ & "while" (1 > 0) space { \ & quad "output"(F(i)) \ & quad i := i + 1; \ & } $

For some sets it is impossible to recognize all the elements that belong to them, yet there may be a way to list them. Other sets do not even have this property.

If the best we can do for the set $A$ is to list it with $P$, _how can we write an algorithm that "attempts to recognize" $A$?_ This algorithm must list all the elements without a timeout clock: if we inserted a clock we would be back to a recursive set via the relation $R_P$ shown earlier.

Here is the *maximal recognition* program: $ P equiv & "input"(x) \ & i := 0; \ & "while" (F(i) eq.not x) \ & quad i := i + 1; \ & "output"(1) quad . $

How is the set $A$ recognized? $ x in NN arrow.long.squiggly P(x) = cases(1 & "if" x in A, "LOOP" quad & "if" x in.not A) quad . $

Given the nature of this function, recursively enumerable sets are also called *partially decidable/recognizable* or *semi-decidable* sets.

If we had a monotonicity guarantee on the routine $F$ we would certainly obtain a recursive set. Here we assume nothing, so we remain within the recursively enumerable sets.
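The maximal recognition program can be sketched directly. The enumerated set below (the perfect squares, via $F(i) = i^2$) is an illustrative assumption; the point is the shape of the loop: it answers $1$ on members and loops forever on non-members, so it is a semi-decision procedure only.

```python
# The "maximal recognition" program, sketched in Python: F enumerates A
# (here A = the perfect squares, purely as an example), and recognize(x)
# returns 1 if x is in A but loops forever when x is not in A.

def F(i: int) -> int:
    return i * i          # the i-th element of A

def recognize(x: int) -> int:
    i = 0
    while F(i) != x:      # no timeout clock: this may never terminate
        i += 1
    return 1
```

Callers may only safely invoke `recognize` on members of $A$ — which is exactly why this yields semi-decidability, not decidability.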
=== Formal definition

The set $A subset.eq NN$ is *recursively enumerable* if and only if:
- $A = emptyset.rev$, or
- $A = immagine(f)$, with $f : NN arrow.long NN in cal(T)$, that is $A = {f(0), f(1), f(2), dots}$.

Since $f$ is recursive total, there exists a program/routine $F$ implementing it, which we use for the partial recognition of $A$: if $x in A$, this program sooner or later returns 1; otherwise it loops forever.

It is like having a _book with infinitely many pages_, each one showing an element of $A$. The recognition program $P$, through the routine $F$, simply leafs through the pages $i$ of this book looking for $x$:
- if $x in A$, sooner or later $x$ appears in the book as some $F(i)$;
- if $x in.not A$, we leaf through the book forever, never knowing when to stop.
=== Characterizations

#theorem(numbering: none)[
  The following definitions are equivalent:
  + $A$ is recursively enumerable, with $A = immagine(f)$ and $f in cal(T)$ a recursive total function;
  + $A = dominio(f)$, with $f in cal(P)$ a recursive partial function;
  + there exists a recursive relation $R subset.eq NN^2$ such that $A = {x in NN bar.v exists y in NN bar.v (x,y) in R}$.
]
#proof[
  \ To prove these equivalences we show that $1 arrow.long.double 2 arrow.long.double 3 arrow.long.double 1$, obtaining a cyclic chain of implications.

  #block(
    fill: rgb("#9FFFFF"),
    inset: 8pt,
    radius: 4pt,
    [$1 arrow.long.double 2$]
  )

  We know that $A = immagine(f)$, with $f in cal(T)$, is recursively enumerable, hence there exist its computing routine $f$ and its partial recognition algorithm $P$, defined earlier. Given the definition of $P$, we have $ phi_P (x) = cases(1 "if" x in A, bot "if" x in.not A) quad , $ but then $A = dominio(phi_P)$: the domain is the set of values on which the function is defined, which in this case is exactly the set $A$. Moreover, $phi_P in cal(P)$, since we exhibited a program $P$ that computes it.

  #block(
    fill: rgb("#9FFFFF"),
    inset: 8pt,
    radius: 4pt,
    [$2 arrow.long.double 3$]
  )

  We know that $A = dominio(f)$, with $f in cal(P)$, hence there exists a program $P$ such that $phi_P = f$. Consider then the relation $ R_P = {(x,y) in NN^2 bar.v P "on input" x "halts in" y "steps"}, $ which we proved above to be recursive. Define $ B = {x in NN bar.v exists y bar.v (x,y) in R_P}. $ We prove that A = B. Indeed:
  - $A subset.eq B$: if $x in A$, then on input $x$ the program $P$ halts in some number of steps $y$, since $x$ belongs to the "domain" of that program. Then $(x,y) in R_P$ and therefore $x in B$;
  - $B subset.eq A$: if $x in B$, then for some $y$ we have $(x,y) in R_P$, hence $P$ halts on input $x$ in $y$ steps; since $phi_P (x) arrow.b$, $x$ belongs to the domain of $f = phi_P$, hence $x in A$.

  #block(
    fill: rgb("#9FFFFF"),
    inset: 8pt,
    radius: 4pt,
    [$3 arrow.long.double 1$]
  )

  We know that $A = {x in NN bar.v exists y bar.v (x,y) in R}$, with $R$ a recursive relation.

  Assume $A eq.not emptyset.rev$ and choose $a in A$, using the axiom of choice. Now define the function $t : NN arrow.long NN$ as $ t(n) = cases(cantorsin(n) quad & "if" (cantorsin(n), cantordes(n)) in R, a & "otherwise") quad . $

  Since $R$ is a recursive relation, there exists a program $P_R$ classifying every natural number; but then the function $t$ is recursive total. Indeed, we can write the program $ P equiv & "input"(n) \ & x := cantorsin(n); \ & y := cantordes(n); \ & "if" (P_R (x,y) == 1) \ & quad "output"(x) \ & "else" \ & quad "output"(a) $ which implements the function $t$, hence $phi_P = t$.

  We prove that $A = immagine(t)$. Indeed:
  - $A subset.eq immagine(t)$: if $x in A$ then $(x,y) in R$ for some $y$, and then $t(cantor(x,y)) = x$, hence $x in immagine(t)$;
  - $immagine(t) subset.eq A$: if $x in immagine(t)$ then:
    - if $x = a$, by the axiom of choice $a in A$, hence $x in A$;
    - if $x = cantorsin(n)$, with $n = cantor(x,y)$ for some $y$ such that $(x,y) in R$, then $x in A$ by definition of $A$.
]
Thanks to this theorem we have three characterizations of recursively enumerable sets, and we can use whichever formulation is most convenient.

In the experience of Prof. <NAME>, point 2 is particularly useful and convenient. In order:
+ write a program $P$ that returns $1$ on input $x in NN$, and loops if $x in.not A$: $ P(x) = cases(1 quad & "if" x in A, bot & "if" x in.not A) quad ; $
+ the semantics of $P$ is then such that: $ phi_P (x) = cases(1 "if" x in A, bot "if" x in.not A) quad ; $
+ the computed function satisfies $ phi_P in cal(P), $ since the program computing it is exactly $P$, while the set $A$ satisfies $ A = dominio(phi_P); $
+ $A$ is recursively enumerable by point 2.
== Recursively enumerable but non-recursive sets

An example of a set that is not recursive but is recursively enumerable is given by the *restricted halting* problem.

Indeed, the set $ A = {x in NN bar.v phi_x (x) arrow.b} $ is not recursive, otherwise the restricted halting problem would be decidable.

However, this set is *recursively enumerable*: the program $ P equiv & "input"(x) \ & U(x,x); \ & "output"(1) $ partially decides $A$. As we can see, if $x in A$ then $phi_x (x) arrow.b$, i.e., the universal interpreter $U$ halts, and the program $P$ returns $1$; otherwise it does not halt.

Consequently $ phi_P (x) = cases(1 & "if" phi_U (x,x) = phi_x (x) arrow.b, bot quad & "otherwise") quad . $

Since $A = dominio(phi_P in cal(P))$, we can apply the second characterization given above to conclude that the set $A$ is recursively enumerable.

Alternatively, we can observe that $ A = {x in NN bar.v phi_x (x) arrow.b} = {x in NN bar.v exists y in NN bar.v (x,y) in R_ristretto}, $ with $ R_ristretto = {(x,y) bar.v ristretto "on input" x "halts within" y "steps"} $ a recursive relation. Here we exploit the third characterization of recursively enumerable sets.

How do the two classes relate?

#theorem(numbering: none)[
  If $A subset.eq NN$ is recursive, then it is recursively enumerable.
]

#proof[
  \ If $A$ is recursive, there exists a program able to recognize it, i.e., one returning $1$ if $x in A$ and $0$ otherwise.

  From it we build the program $ P equiv & "input"(x) \ & "if"(P_A(x) == 1) \ & quad quad "output"(1) \ & "else" \ & quad quad "while"(1>0); quad . $

  The semantics of this program is $ phi_(P_A) (x) = cases(1 & "if" x in A, bot quad & "if" x in.not A) quad , $ but then $A$ is the domain of a recursive partial function, hence $A$ is recursively enumerable by the second characterization.
]

We showed above that $A = {x in NN bar.v phi_x (x) arrow.b}$ is recursively enumerable but not recursive, hence $ "Recursive" subset "Recursively enumerable" . $

#figure(
  image("assets/ricorsivi-rnumerabili-1.svg", width: 50%)
)

_Are there sets that are not even recursively enumerable?_
== Closure properties of recursive sets

Let us try to exploit the set-complement operation to understand the nature of the set $ A^C = {x in NN bar.v phi_x (x) arrow.t} . $

#theorem(numbering: none)[
  The class of recursive sets is a Boolean algebra, i.e., it is closed under complement, intersection and union.
]

#proof[
  \ Let $A,B$ be two recursive sets. Then there exist programs $P_A, P_B$ recognizing them or, equivalently, $chi_A, chi_B in cal(T)$.

  It is easy to show that union, intersection and complement are implementable by programs that always halt. Consequently, $ A union B, A sect B, A^C $ are recursive.

  Here are the three programs:
  - *complement* $ P_(A^C) equiv & "input"(x) \ & "output"(1 overset(-, .) P_A (x)) . $
  - *intersection* $ P_(A sect B) equiv & "input"(x) \ & "output"(min(P_A (x), P_B (x))) . $
  - *union* $ P_(A union B) equiv & "input"(x) \ & "output"(max(P_A (x), P_B (x))) . $

  Likewise we can write down the characteristic functions for the three operations:
  - $chi_(A^C) (x) = 1 overset(-, .) chi_A (x)$;
  - $chi_(A sect B) = chi_A (x) dot chi_B (x)$;
  - $chi_(A union B) = 1 overset(-, .) (1 overset(-, .) chi_A (x))(1 overset(-, .) chi_B (x))$.

  All these functions are recursive total, hence the sets $A^C, A sect B, A union B$ are recursive.
]
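The closure constructions are literally combinators on total characteristic functions. A small Python sketch (the two example sets are assumptions of the sketch): given deciders for $A$ and $B$, we build deciders for the complement, intersection and union, mirroring $1 overset(-, .) chi_A$, $chi_A dot chi_B$ and $max(chi_A, chi_B)$.

```python
# Closure of recursive sets as combinators on (total) characteristic
# functions: each combinator returns another total 0/1-valued function.

def complement(chi_a):
    return lambda x: 1 - chi_a(x)

def intersection(chi_a, chi_b):
    return lambda x: chi_a(x) * chi_b(x)

def union(chi_a, chi_b):
    return lambda x: max(chi_a(x), chi_b(x))

# Two toy recursive sets, used only as examples:
chi_even = lambda x: 1 if x % 2 == 0 else 0   # even numbers
chi_small = lambda x: 1 if x < 10 else 0      # numbers below 10
```

Since every combinator calls only total deciders, the resulting functions are total as well — which is the whole content of the closure proof.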
Let us now see a very important result concerning specifically the complement $A^C$ of the halting set defined above.

#theorem(numbering: none)[
  $A^C$ is not recursive.
]

#proof[
  \ If $A^C$ were recursive, by the closure property proved in the previous theorem we would have $ (A^C)^C = A $ recursive, which is absurd.
]

Summing up:
- $A = {x : phi_x (x) arrow.b}$ is recursively enumerable but not recursive;
- $A^C = {x : phi_x (x) arrow.t}$ is not recursive.
_Could the set $A^C$ be recursively enumerable?_

#theorem(numbering: none)[
  If $A$ is recursively enumerable and $A^C$ is recursively enumerable, then $A$ is recursive.
]

#proof[\
  \ *INFORMAL*

  Since $A$ and $A^C$ are recursively enumerable, there exist two books with infinitely many pages, each page showing an element of $A$ (_first book_) or an element of $A^C$ (_second book_).

  To decide whether $x$ belongs to $A$, we can use the following procedure:
  + $"input"(x)$;
  + open both books at the first page;
    - if $x$ appears in the book of $A$, print $1$,
    - if $x$ appears in the book of $A^C$, print $0$,
    - if $x$ appears on neither page, turn the page of each book and start over.

  This algorithm always halts, since $x$ belongs either to $A$ or to $A^C$, hence sooner or later it is found in one of the two books.

  But then this algorithm recognizes $A$, hence $A$ is recursive.

  *FORMAL*

  Since $A$ and $A^C$ are recursively enumerable, there exist $f,g in cal(T)$ such that $A = immagine(f) and A^C = immagine(g)$. Let $f$ be implemented by the program $F$ and $g$ by the program $G$. The following program recognizes $A$: $ P equiv & "input"(x) \ & i:= 0; \ & "while"("true") \ & quad quad "if" (F(i)=x) "output"(1); \ & quad quad "if" (G(i)=x) "output"(0); \ & quad quad i := i + 1; $

  This algorithm halts on every input, since $x in A$ or $x in A^C$. We conclude that the set $A$ is recursive.
]
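The "two books" argument translates directly into code. In this sketch the enumerations are assumptions chosen for illustration ($A$ = even numbers, enumerated by $f(i) = 2i$; its complement by $g(i) = 2i + 1$): the decider leafs through both enumerations in lockstep and is guaranteed to halt.

```python
# Dovetailing two total enumerations to get a decider: f lists A, g lists
# the complement of A, and decide(x) scans both "books" page by page.

def f(i: int) -> int:
    return 2 * i          # enumerates A (the even numbers, as an example)

def g(i: int) -> int:
    return 2 * i + 1      # enumerates the complement of A

def decide(x: int) -> int:
    i = 0
    while True:           # terminates: x shows up in one of the two books
        if f(i) == x:
            return 1
        if g(i) == x:
            return 0
        i += 1
```

Termination hinges precisely on the hypothesis that *both* the set and its complement are enumerable: drop either enumeration and the loop may run forever.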
We immediately conclude that $A^C$ can *not* be recursively enumerable.

In general, this theorem gives us a very useful tool to study the recognizability of a set $A$:
- if $A$ is not recursive, it might still be recursively enumerable;
- if we cannot show that, we can study $A^C$;
- if $A^C$ is recursively enumerable, then by the theorem we can conclude that $A$ is not recursively enumerable.

#v(12pt)

#figure(
  image("assets/ricorsivi-rnumerabili-2.svg", width: 50%)
)

#v(12pt)
== Closure properties of recursively enumerable sets

#theorem(numbering: none)[
  The class of recursively enumerable sets is closed under union and intersection, but not under complement.
]

#proof[
  \ As for complement, we showed that $A = {x : phi_x (x) arrow.b}$ is recursively enumerable, while $A^C = {x : phi_x (x) arrow.t}$ is not.

  Let $A,B$ be recursively enumerable sets. Then there exist $f, g in cal(T) bar.v A = immagine(f) and B = immagine(g)$. Let $f$ be implemented by $F$ and $g$ by $G$. Let

  #grid(
    columns: (50%, 50%),
    align(center)[
      $ P_i equiv & "input"(x); \ & i := 0; \ & "while"(F(i) eq.not x) \ & quad i++; \ & i := 0; \ & "while"(G(i) eq.not x) \ & quad i++; \ & "output"(1); $
    ],
    align(center)[
      $ P_u equiv & "input"(x); \ & i := 0; \ & "while"("true") \ & quad "if" (F(i) = x) \ & quad quad "output"(1); \ & quad "if" (G(i) = x) \ & quad quad "output"(1); \ & quad i++; $
    ]
  )

  be the two programs computing $A sect B$ and $A union B$ respectively. Their semantics are

  #grid(
    columns: (50%, 50%),
    align(center)[
      $ phi_(P_i) = cases(1 & "if" x in A sect B, bot quad & "otherwise") $
    ],
    align(center)[
      $ phi_(P_u) = cases(1 & "if" x in A union B, bot quad & "otherwise") $
    ]
  )

  from which we obtain

  #grid(
    columns: (50%, 50%),
    align(center)[
      $ A sect B = dominio(phi_P' in cal(P)) $
    ],
    align(center)[
      $ A union B = dominio(phi_P'' in cal(P)) $
    ]
  )

  The two sets are therefore recursively enumerable by the second characterization.
]
== Rice's theorem

*Rice's theorem* is a powerful tool for showing that the sets in a given class are not recursive.

Let ${phi_i}$ be an SPA. A set (_of programs_) $I subset.eq NN$ is a *set that respects functions* if and only if $ (a in I and phi_a = phi_b) arrow.long.double b in I . $

In essence, $I$ respects functions if and only if, given a function computed by some program in $I$, the set $I$ contains all the programs computing that function. Such sets are also said to be *closed under semantics*.

For example, the set $I = {x in NN bar.v phi_x (3) = 5}$ respects functions. Indeed, $ underbracket(a in I, phi_a (3) = 5) and underbracket(phi_a = phi_b, phi_b (3) = 5) arrow.double b in I . $

#theorem(
  name: "Rice's theorem",
  numbering: none
)[
  Let $I subset.eq NN$ be a set that respects functions. Then $I$ is recursive only if $I = emptyset.rev$ or $I = NN$.
]

This theorem tells us that sets respecting functions are never recursive, apart from the trivial cases $emptyset.rev$ and $NN$.

#proof[
  Let $I$ be a set that respects functions, with $I eq.not emptyset.rev$ and $I eq.not NN$. Assume by contradiction that $I$ is recursive.

  Since $I eq.not emptyset.rev$, there exists at least one element $a in I$. Moreover, since $I eq.not NN$, there exists at least one element $overline(a) in.not I$.

  Define the function $t : NN arrow.long NN$ as: $ t(n) = cases(overline(a) quad & "if" n in I, a & "if" n in.not I) . $

  We have $t in cal(T)$, since it is computed by the program $ P equiv & "input"(n); \ & "if"(P_I (n) = 1) \ & quad "output"(overline(a)); \ & "else" \ & quad "output"(a) $

  Since $t in cal(T)$, the _recursion theorem_ guarantees that in an SPA ${phi_i}$ there exists $d in NN$ such that $ phi_d = phi_t(d) . $

  For such $d$ there are only two possibilities with respect to $I$:
  - if $d in I$, since $I$ respects functions and $phi_d = phi_t(d)$, then $t(d) in I$. But $t(d) = overline(a) in.not I$: a contradiction;
  - if $d in.not I$, then $t(d) = a in I$; but $I$ respects functions, so from $phi_d = phi_t(d)$ it must be that $d in I$: a contradiction.

  Assuming $I$ recursive led to a contradiction, hence $I$ is not recursive.
]

=== Application

Rice's theorem suggests a strategy to establish that a set $A subset.eq NN$ is not recursive:
+ show that $A$ respects functions;
+ show that $A eq.not emptyset.rev$ and $A eq.not NN$;
+ conclude that $A$ is not recursive by Rice's theorem.
=== Limits of automatic software verification

Let us define:
- *specification*: the description of a problem and the requirements for the programs that must solve it automatically. A program is _correct_ if it meets its specification;
- *problem*: _can we write a program $V$ that automatically tests whether a program is correct?_

The program we want to write has semantics $ phi_V (P) = cases(1 & "if" P "is correct", 0 quad & "if" P "is wrong") quad . $

Define $ "PC" = {P bar.v P "is correct"} . $ Observe that it respects functions: indeed, $ underbracket(P in "PC", P "correct") and underbracket(phi_P = phi_Q, Q "correct") arrow.long.double Q in "PC" $

But then PC is not recursive, and program correctness cannot be tested automatically. There are, however, limit cases in which automatic tests can be built:
- specifications of the form _"no program is correct"_ generate $"PC" = emptyset.rev$;
- specifications of the form _"every program is correct"_ generate $"PC" = NN$.

Both of these PC sets are obviously recursive and can therefore be tested automatically.

This result shows that it is not possible to automatically verify the *semantic properties* of programs (_apart from trivial properties_).
|
|
https://github.com/ckunte/m-one | https://raw.githubusercontent.com/ckunte/m-one/master/inc/cosfunc.typ | typst | = Cosine interaction
While reviewing the changes introduced in the new ISO 19902:2020 standard@iso19902_2020, this one jumped at me:
#quote()[
tubular member strength formulae for combined axial and bending loading now of cosine interaction form instead of previously adopted linear interaction;
]
In ISO 19902:2020, the combined unity check for axial (tension | compression) + bending takes the following general expression:
$ U_m = 1 - cos(pi / 2 (gamma_(R,t|c) sigma_(t|c)) / f_(t|y c)) + (gamma_(R,b) sqrt(sigma^2_(b,y)) + sigma^2_(b,z)) / f_b $
This form of unity check has existed since 1993 in API RP-2A LRFD@api_rp2a_lrfd, 1st edition, and whose introduction into ISO 19902:2020 is briefly described in $section$A13.3.2 and $section$A13.3.3. This form makes its presence felt throughout _$section$13 Strength of tubular members_.#footnote[This form, i.e., 1 - cos(x) occurs in as many as eleven equations, viz., Eq. 13.3-1, 13.3-2, 13.3-4, 13.3-8, 13.3-18, 13.3-19, 13.3-21, 13.3-23, 13.4-7, 13.4-13, and 13.4-19 in ISO 19902:2020. Curiously, this is not applied to dented tubes in §13.7.3, whose combined UC expression(s) remains like before.]
Previously, _Um_ in ISO 19902:2007 was expressed as:
$ U_m = gamma_(R,t|c) sigma_(t|c) / f_(t|y c) + gamma_(R,b) sqrt(sigma^2_(b,y) + sigma^2_(b,z)) / f_b $
The reduction of _Um_ in the first equation is notable, see Figure below. For example, if the axial unity check value (x) is, say, 0.2, then its contribution is reduced to $0.05 (= 1 - cos(pi / 2 x)$. Remember `cos()` is in radians.
#figure(
image("/img/tuc_under_cosint.svg", width: 100%),
caption: [
Axial utilisation versus axial component under cosine interaction in the combined utilisation expression
]
) <cf1>
#let cosint = read("/src/cosint.py")
#{linebreak();raw(cosint, lang: "python")}
$ - * - $ |
#import "@preview/cetz:0.2.2"
/// Draw a Riemann sum of a function, and optionally plot the function.
///
/// - fn (function): The function to draw a Riemann sum of.
/// - domain (array): Tuple of the domain of fn. If a tuple value is auto, that
/// value is set to start/end.
/// - start (number): Where to start drawing bars.
/// - end (number): Where to end drawing bars.
/// - n (number): Number of bars
/// - y-scale (number): Y scale of bars.
/// - method (string): Where points are derived from. Can be "left", "mid"/"midpoint", or "right".
/// - transparency (number): Transparency fill of bars.
/// - dot-radius (number): Radius of dots.
/// - plot (boolean): Whether to add plot of the function.
/// - plot-grid (boolean): Show grid on plot.
/// - plot-x-tick-step (number): X tick step of plot.
/// - plot-y-tick-step (number): Y tick step of plot.
/// - positive-color (color): Color of positive bars.
/// - negative-color (color): Color of negative bars.
/// - plot-line-color (color): Color of plotted line.
/// - size (array): Width and height of the plot.
#let riesketcher(
fn,
start: 0,
end: 10,
domain: (auto, auto),
n: 10,
y-scale: 1,
method: "left",
transparency: 40%,
dot-radius: 0.15,
plot: true,
plot-grid: false,
plot-x-tick-step: auto,
plot-y-tick-step: auto,
positive-color: color.green,
negative-color: color.red,
plot-line-color: color.blue,
size: (5, 5),
) = {
// Adjust the function domain if set to auto
if domain.at(0) == auto { domain.at(0) = start }
if domain.at(1) == auto { domain.at(1) = end }
let horizontal-hand-offset = 0%
if method == "right" {
horizontal-hand-offset = 100%
}
else if method == "mid" or method == "midpoint" {
horizontal-hand-offset = 50%
}
let col-trans(color, opacity) = {
let space = color.space()
space(..color.components(alpha: false), opacity)
}
let delta = end - start
let bar-width = (end - start) / n
let bar-position = if method == "left" {
"start"
} else if method == "right" {
"end"
} else {
"center"
}
let bar-y = range(0, n).map(x => {
let x = start + bar-width * (x + horizontal-hand-offset / 100%)
(x, fn(x))
})
let positive-bar-style = (
fill: col-trans(positive-color.lighten(70%).darken(8%), transparency),
stroke: col-trans(positive-color.darken(30%), 90%) + 1.1pt
)
let negative-bar-style = (
: ..positive-bar-style,
fill: col-trans(negative-color.lighten(70%).darken(8%), transparency),
stroke: col-trans(negative-color.darken(30%), 90%) + 1.1pt
)
let positive-dot-style = (
stroke: black,
fill: positive-color
)
let negative-dot-style = (
: ..positive-dot-style,
fill: negative-color,
)
cetz.plot.plot(
size: size,
x-grid: plot-grid,
y-grid: plot-grid,
axis-style: if plot { "school-book" } else { none },
x-tick-step: plot-x-tick-step,
y-tick-step: plot-y-tick-step,
{
for (x, y) in bar-y {
cetz.plot.add-bar(((x, y),),
bar-width: bar-width,
bar-position: bar-position,
style: if y >= 0 { positive-bar-style } else { negative-bar-style })
}
if plot {
cetz.plot.add(
domain: domain,
x => fn(x),
style: (stroke: plot-line-color + 1.5pt))
}
for (x, y) in bar-y {
cetz.plot.add(((x, y),),
mark: "o",
style: (stroke: none),
mark-size: dot-radius,
mark-style: if y >= 0 { positive-dot-style } else { negative-dot-style })
}
})
}
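A minimal usage sketch follows. It assumes the file above is saved locally as `riesketcher.typ`; the function to plot, interval, and bar count are illustrative. Since `riesketcher` returns CeTZ plot content, it must be drawn inside a `cetz.canvas`:

```typst
#import "@preview/cetz:0.2.2"
#import "riesketcher.typ": riesketcher

// Midpoint Riemann sum of f(x) = x^2 on [0, 4] with 8 bars.
#cetz.canvas({
  riesketcher(
    x => x * x,
    start: 0,
    end: 4,
    n: 8,
    method: "mid",
  )
})
```

Switching `method` to "left" or "right" moves the sample points (and the dots) to the corresponding bar edge, which makes the over/under-estimation behaviour of each rule easy to compare visually.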