Given the distribution of a function, how can the Metropolis algorithm be used to obtain samples from the target distribution?

eg2013 Registered Member
2023-02-28 02:59

This answer quotes ChatGPT

The Metropolis-Hastings algorithm is a Monte Carlo method for generating random samples from a specific distribution. It can be used to draw samples from a target distribution function.

Here are the steps to obtain samples from the target distribution using the Metropolis-Hastings algorithm:

1. Select a starting point $x_0$, which can be any value in the support of the target distribution.
2. Define a candidate (proposal) distribution $q(x|y)$, where $x$ is the new sample and $y$ is the current sample.
3. Sample a new candidate $x'$ from the candidate distribution $q(x|y)$.
4. Compute the acceptance rate $\alpha(x', y) = \min\left(1, \frac{p(x')\,q(y|x')}{p(y)\,q(x'|y)}\right)$, where $p(x)$ is the target distribution function and $p(y)$ is its value at the current sample.
5. Accept the new sample $x'$ with probability $\alpha(x', y)$: generate a random number $u$; if $u \le \alpha(x', y)$, accept the new sample, otherwise keep the current sample $y$.
6. Repeat steps 3-5 until you have the required number of samples.
7. The key step of the Metropolis-Hastings algorithm is defining the candidate distribution $q(x|y)$; its form affects the performance of the algorithm. In general, either a Gaussian or a uniform distribution can be used as the candidate distribution, depending on the shape of the target distribution function.

The resulting sample sequence can be used to estimate the expected value, variance, and other statistics of the target distribution function.
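
A minimal sketch of these steps in Python (the standard-normal target density, the Gaussian proposal width, and the sample count below are illustrative assumptions, not part of the original answer):

import numpy as np

def p(x):
    # unnormalized target density (assumed example: standard normal)
    return np.exp(-0.5 * x**2)

def metropolis_hastings(n_samples, x0=0.0, step=1.0):
    # Random-walk Metropolis-Hastings with a symmetric Gaussian candidate
    # distribution, so q(y|x') = q(x'|y) and the q-ratio cancels.
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + np.random.normal(0.0, step)    # step 3: draw x' from q(x'|y)
        alpha = min(1.0, p(x_new) / p(x))          # step 4: acceptance rate
        u = np.random.uniform()                    # step 5: accept or reject
        if u <= alpha:
            x = x_new
        samples[i] = x                             # step 6: collect samples
    return samples

samples = metropolis_hastings(10000)
print(samples.mean(), samples.var())               # estimate statistics of the target

Because the Gaussian random-walk proposal is symmetric, $q(y|x') = q(x'|y)$, so the proposal terms cancel and the acceptance rate reduces to $\min(1, p(x')/p(y))$.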

ddhhhnj Registered Member
2023-02-28 02:59

The reference answer is from GPT (OpenAI). The Metropolis-Hastings algorithm is a random sampling method based on Markov chain Monte Carlo (MCMC), used to sample from a given distribution $p(x)$. It can be applied to probability distributions in any dimension, including complex distributions that are difficult to compute in high-dimensional spaces.

The Metropolis-Hastings algorithm consists of two steps: a state transition and an accept/reject decision. Assuming that the current state is $x_t$ and the state to be transitioned to is $x_{t+1}$, the state transition is carried out as follows:

1. Sample $x_{t+1}$ from a proposal distribution $q(x_{t+1}|x_t)$.
2. Compute the acceptance rate $\alpha(x_t, x_{t+1}) = \min\left\{1, \frac{p(x_{t+1})\,q(x_t|x_{t+1})}{p(x_t)\,q(x_{t+1}|x_t)}\right\}$.
3. Accept the new state $x_{t+1}$ with probability $\alpha(x_t, x_{t+1})$; otherwise keep the original state $x_t$.

In this way, after repeated state transitions, the samples obtained eventually follow $p(x)$ approximately.
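
As an illustration (an assumed sketch, not code from the original answer), a single such transition can be written as a small Python function; repeatedly applying it to its own output produces the Markov chain described above:

import numpy as np

def mh_step(x_t, p, q_sample, q_density):
    # One Metropolis-Hastings transition: propose a new state, then accept or reject it.
    # p is the (possibly unnormalized) target density, q_sample(b) draws a candidate
    # given the current state b, and q_density(a, b) evaluates q(a|b).
    x_new = q_sample(x_t)                                   # state transition step
    alpha = min(1.0, (p(x_new) * q_density(x_t, x_new)) /
                     (p(x_t) * q_density(x_new, x_t)))      # acceptance rate
    return x_new if np.random.uniform() <= alpha else x_t   # accept, or keep x_t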

Specifically, if we know a function $f(x)$ (and choose a proposal distribution $q$), we can use the Metropolis-Hastings algorithm to sample from the distribution $p(x)$, where $p(x) \propto f(x)$, i.e. $p(x)$ is a constant multiple of $f(x)$. The constant here is the normalizing constant; it cancels in the acceptance rate, so it never needs to be computed.

Specifically, we need to do the following steps:
1. Select an initial state $x_0$ and set the number of iterations $T$.

2. For each iteration $t=1,2,\dots,T$, perform the following steps:

    - Sample a proposal $x_t$ from $q(x_t|x_{t-1})$.
    - Compute the acceptance rate $\alpha(x_{t-1}, x_t) = \min\left\{1, \frac{f(x_t)\,q(x_{t-1}|x_t)}{f(x_{t-1})\,q(x_t|x_{t-1})}\right\}$.
    - Accept $x_t$ with probability $\alpha(x_{t-1}, x_t)$; otherwise keep $x_{t-1}$ unchanged.

3. Repeat step 2 until the number of iterations $T$ is reached.

The resulting samples are drawn from the target distribution $p(x) \propto f(x)$. Note that in practice, an appropriate proposal $q$ must be chosen so that the chain converges to the target distribution $p(x)$ efficiently.
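
A minimal sketch of this procedure in Python (the unnormalized target $f(x)$, the log-normal proposal, and the iteration count $T$ below are illustrative assumptions; this proposal is not symmetric, so the $q$-ratio in the acceptance rate does not cancel):

import numpy as np

def f(x):
    # unnormalized target; assumed example: exponential density on x > 0
    return np.exp(-x) if x > 0 else 0.0

def q_density(a, b, s=0.5):
    # proposal density q(a|b): log-normal in a, centred in log space at b
    return np.exp(-(np.log(a) - np.log(b))**2 / (2 * s**2)) / (a * s * np.sqrt(2 * np.pi))

def q_sample(b, s=0.5):
    # draw a candidate from q(.|b)
    return b * np.exp(np.random.normal(0.0, s))

T = 10000
x = 1.0                              # step 1: initial state x_0
samples = np.empty(T)
for t in range(T):                   # step 2: iterate t = 1, ..., T
    x_new = q_sample(x)                                   # propose x_t from q(x_t|x_{t-1})
    alpha = min(1.0, (f(x_new) * q_density(x, x_new)) /
                     (f(x) * q_density(x_new, x)))        # acceptance rate
    if np.random.uniform() <= alpha:                      # accept with probability alpha
        x = x_new
    samples[t] = x
print(samples.mean())                # sample mean estimated from the chain

For the assumed exponential target the true mean is 1, so the printed sample mean gives a quick sanity check on the sampler.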

fyb3526642 Registered Member
2023-02-28 02:59

The answer content partially references GPT (GPT_Pro) for better problem solving.
The Metropolis algorithm is a random sampling method that can be used to draw samples from the distribution of a known function. It uses the idea of a Markov chain: through iterative sampling it keeps changing the location of the sampling point, so that the samples gradually approach the target distribution.

First, we need to define a Markov chain, whose basic form consists of three parts: the initial state, the state transition probability, and the sampling process. The initial state is the initial sample; the state transition probability is the probability of moving from the current sample to the next sample; and the sampling process draws a sample from the distribution of the given function.

In the Metropolis algorithm, the steps for establishing a Markov chain are as follows:
1. Generate a random initial sample over a finite computational region, based on the distribution of the known function.
2. Iterate many times, repeatedly generating a candidate for the next sample.
3. Compute the corresponding transition (acceptance) probability from the difference between the function values of the current sample and the candidate sample.
4. Generate a random number and compare it with the transition probability to decide whether to accept the candidate as the next sample.
5. Repeat the above steps until the Markov chain has been run long enough to obtain the target samples.

# Metropolis algorithm (made runnable in Python; the target function f and the
# sampling region below are assumed examples, not specified in the question)
import numpy as np

a, b = -5.0, 5.0                     # finite computational region

def f(x):
    # log of the (unnormalized) known target density, e.g. a standard normal;
    # using the log makes exp(delta) below equal to the density ratio
    return -0.5 * x**2

def sample_region():
    # draw a sample uniformly from the region; this proposal does not depend on
    # the current state, so the proposal ratio cancels in the acceptance probability
    return np.random.uniform(a, b)

# Initialization: generate a random initial sample
x = sample_region()

# Iterate many times
for i in range(10000):
    # generate the candidate for the next sample
    x_next = sample_region()
    # difference in (log) function value between the candidate and the current sample
    delta = f(x_next) - f(x)
    # corresponding transition (acceptance) probability
    p = min(1.0, np.exp(delta))
    # generate a random number between 0 and 1
    r = np.random.uniform(0.0, 1.0)
    # decide whether to adopt the candidate as the next sample
    if p > r:
        x = x_next               # adopt the candidate
# output target sample
target_sample = x

If the answer is helpful, please accept it.
