When you perform a hypothesis test in statistics, a p-value helps you determine the significance of your results.

A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.

A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.

P-values very close to the cutoff (0.05) are considered marginal (the decision could go either way). Always report the p-value so your readers can draw their own conclusions.
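To make the decision rule concrete, here is a minimal sketch of computing a p-value with a one-sample t-test from scipy. The sample data and the hypothesized mean of 5.0 are made up for illustration:

```python
from scipy import stats

# Hypothetical measurements (illustrative data only)
sample = [5.1, 4.9, 5.3, 5.5, 4.8, 5.2, 5.0, 5.4]

# Null hypothesis: the population mean is 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value <= 0.05:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```

The same threshold logic applies to the regression p-values below: each coefficient's p-value tests the null hypothesis that the coefficient is zero.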
import matplotlib.pyplot as plt
from sklearn import linear_model

x = [[1], [2], [3], [7], [5]]
y = [1, 2, 3, 4, 5]

regr = linear_model.LinearRegression()
# Train the model using the training set
regr.fit(x, y)
# predict() expects a 2D array, even for a single sample
regr.predict([[4]])

# Plot outputs: the data points and the fitted regression line
plt.scatter(x, y, color='black')
plt.plot(x, regr.predict(x), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
import statsmodels.api as sm

# Note statsmodels' argument order is OLS(endog, exog),
# so this call treats x as the dependent variable; the
# summary below is the output of this call.
var = sm.OLS(x, y)
ss = var.fit()
print(ss.summary())
Dep. Variable:     y              R-squared:           0.927
Model:             OLS            Adj. R-squared:      0.909
Method:            Least Squares  F-statistic:         51.16
Date:              Mon, 08 Jan 2018  Prob (F-statistic):  0.00202
Time:              18:22:40       Log-Likelihood:      -7.7047
No. Observations:  5              AIC:                 17.41
Df Residuals:      4              BIC:                 17.02
Df Model:          1
Covariance Type:   nonrobust

        coef     std err    t       P>|t|    [0.025    0.975]
x1      1.2182   0.170      7.152   0.002    0.745     1.691

Omnibus:        nan      Durbin-Watson:      2.850
Prob(Omnibus):  nan      Jarque-Bera (JB):   1.307
Skew:           1.252    Prob(JB):           0.520
Kurtosis:       2.956    Cond. No.           1.00