When you perform a hypothesis test in statistics, a p-value helps you determine the significance of your results:

- A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
- A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
- A p-value very close to the cutoff (0.05) is considered marginal (the decision could go either way). Always report the p-value so your readers can draw their own conclusions.
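To make the decision rule concrete, here is a minimal sketch using a one-sample t-test from scipy; the sample values and the null-hypothesis mean of 5.0 are made-up numbers for illustration only:

from scipy import stats

# hypothetical measurements and an assumed null-hypothesis mean of 5.0
sample = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.9, 5.3]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

alpha = 0.05  # the conventional cutoff
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Small p-value: reject the null hypothesis")
else:
    print("Large p-value: fail to reject the null hypothesis")

The same rule applies to regression coefficients: the example below fits a simple linear regression with scikit-learn and then uses statsmodels to get a p-value for the fitted coefficient.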
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model

# training data: scikit-learn expects x as a 2-D array (one column per feature)
x = [[1], [2], [3], [7], [5]]
y = [1, 2, 3, 4, 5]

regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(x, y)
# Predict for a new value (the input must also be 2-D)
print(regr.predict([[4]]))

# Plot outputs: raw data as points, the fitted line from the model's predictions
plt.scatter(x, y, color='black')
plt.plot(x, regr.predict(x), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
import statsmodels.api as sm

# statsmodels OLS signature is OLS(endog, exog); no intercept is added by default
var = sm.OLS(x, y)
ss = var.fit()
print(ss.summary())
Dep. Variable: | y | R-squared: | 0.927 |
---|---|---|---|
Model: | OLS | Adj. R-squared: | 0.909 |
Method: | Least Squares | F-statistic: | 51.16 |
Date: | Mon, 08 Jan 2018 | Prob (F-statistic): | 0.00202 |
Time: | 18:22:40 | Log-Likelihood: | -7.7047 |
No. Observations: | 5 | AIC: | 17.41 |
Df Residuals: | 4 | BIC: | 17.02 |
Df Model: | 1 | | |
Covariance Type: | nonrobust | | |
 | coef | std err | t | P>\|t\| | [0.025 | 0.975] |
---|---|---|---|---|---|---|
x1 | 1.2182 | 0.170 | 7.152 | 0.002 | 0.745 | 1.691 |
Omnibus: | nan | Durbin-Watson: | 2.850 |
---|---|---|---|
Prob(Omnibus): | nan | Jarque-Bera (JB): | 1.307 |
Skew: | 1.252 | Prob(JB): | 0.520 |
Kurtosis: | 2.956 | Cond. No. | 1.00 |
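Here the coefficient's p-value (P>|t| = 0.002) is well below 0.05, so you reject the null hypothesis that the coefficient is zero. As a small sketch (assuming the fitted results object `ss` from the code above), you can also read the p-value programmatically instead of from the printed summary:

# the fitted results expose per-coefficient p-values directly
p_value = ss.pvalues[0]  # p-value for the x1 coefficient
print(f"p-value for x1: {p_value:.3f}")

alpha = 0.05
if p_value <= alpha:
    print("Small p-value: reject the null hypothesis")
else:
    print("Large p-value: fail to reject the null hypothesis")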