
[Python] Kaggle #2 House Prices - Advanced Regression Techniques with GPT

데사전지 2023. 3. 20. 10:22

House Prices - Advanced Regression Techniques is one of the competitions hosted on Kaggle. The goal is to predict home sale prices from a dataset of residential properties in Ames, Iowa; submissions are scored by the RMSE between the logarithm of the predicted price and the logarithm of the actual sale price.

 

To enter this competition with Python, you can follow the steps below.

 

# Import libraries
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from sklearn.model_selection import GridSearchCV, KFold

# Load the data
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Preprocessing
# Handle missing values: fill numeric columns with their column means
# (categorical columns are left as-is; they are dropped by the feature selection below)
train = train.fillna(train.mean(numeric_only=True))
test = test.fillna(test.mean(numeric_only=True))

# Remove outliers with extreme GrLivArea values
train = train[train.GrLivArea < 4500]

# Transform the skewed GrLivArea variable with log1p
train["GrLivArea"] = np.log1p(train["GrLivArea"])
test["GrLivArea"] = np.log1p(test["GrLivArea"])

# Feature selection: keep numeric features whose absolute correlation with SalePrice exceeds 0.5
corr_matrix = train.corr(numeric_only=True)
top_corr_features = corr_matrix.index[abs(corr_matrix["SalePrice"]) > 0.5]
train = train[top_corr_features]
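
# Optional: inspect which columns survive the |correlation with SalePrice| > 0.5 filter
print(list(top_corr_features))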

# Modeling
X_train = train.drop("SalePrice", axis=1)
y_train = train["SalePrice"]

# Grid search with 5-fold cross-validation
kf = KFold(n_splits=5, shuffle=True, random_state=42)

# Linear Regression
model_lr = LinearRegression()
param_lr = {}
grid_lr = GridSearchCV(model_lr, param_lr, cv=kf, scoring='neg_mean_squared_error')
grid_lr.fit(X_train, y_train)
scores_lr = np.sqrt(-grid_lr.best_score_)

# Lasso Regression
model_lasso = Lasso()
param_lasso = {"alpha": [0.001, 0.01, 0.1]}
grid_lasso = GridSearchCV(model_lasso, param_lasso, cv=kf, scoring='neg_mean_squared_error')
grid_lasso.fit(X_train, y_train)
scores_lasso = np.sqrt(-grid_lasso.best_score_)

# Ridge Regression
model_ridge = Ridge()
param_ridge = {"alpha": [1, 10, 100]}
grid_ridge = GridSearchCV(model_ridge, param_ridge, cv=kf, scoring='neg_mean_squared_error')
grid_ridge.fit(X_train, y_train)
scores_ridge = np.sqrt(-grid_ridge.best_score_)

# Elastic Net
model_en = ElasticNet()
param_en = {"alpha": [0.001, 0.01, 0.1], "l1_ratio": [0.5, 0.7, 0.9]}
grid_en = GridSearchCV(model_en, param_en, cv=kf, scoring='neg_mean_squared_error')
grid_en.fit(X_train, y_train)
scores_en = np.sqrt(-grid_en.best_score_)

# Random Forest
model_rf = RandomForestRegressor()
param_rf = {"n_estimators": [50, 100, 200], "max_depth": [5, 10, 15], "min_samples_leaf": [1, 3, 5]}
grid_rf = GridSearchCV(model_rf, param_rf, cv=kf, scoring='neg_mean_squared_error')
grid_rf.fit(X_train, y_train)
scores_rf = np.sqrt(-grid_rf.best_score_)
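
# Optional: compare the cross-validated RMSE of the five models (raw SalePrice scale),
# e.g. to judge whether the weaker ones should stay in the ensemble
cv_rmse = {
    "LinearRegression": scores_lr,
    "Lasso": scores_lasso,
    "Ridge": scores_ridge,
    "ElasticNet": scores_en,
    "RandomForest": scores_rf,
}
for name, rmse in sorted(cv_rmse.items(), key=lambda item: item[1]):
    print(f"{name}: CV RMSE = {rmse:,.0f}")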

# Predict on the test set, using the same selected features as in training
X_test = test[X_train.columns]

y_pred_lr = grid_lr.best_estimator_.predict(X_test)
y_pred_lasso = grid_lasso.best_estimator_.predict(X_test)
y_pred_ridge = grid_ridge.best_estimator_.predict(X_test)
y_pred_en = grid_en.best_estimator_.predict(X_test)
y_pred_rf = grid_rf.best_estimator_.predict(X_test)

# Final prediction: simple average of the five models
y_pred_final = (y_pred_lr + y_pred_lasso + y_pred_ridge + y_pred_en + y_pred_rf) / 5
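
# Optional alternative: weight each model by the inverse of its cross-validated RMSE
# instead of a plain average (illustrative only; not used in the submission below)
inv_rmse = np.array([1 / scores_lr, 1 / scores_lasso, 1 / scores_ridge,
                     1 / scores_en, 1 / scores_rf])
weights = inv_rmse / inv_rmse.sum()
preds = np.column_stack([y_pred_lr, y_pred_lasso, y_pred_ridge, y_pred_en, y_pred_rf])
y_pred_weighted = preds @ weights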

# Build the submission file
submission = pd.DataFrame({
    "Id": test["Id"],
    "SalePrice": y_pred_final
})
submission.to_csv('submission.csv', index=False)
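
Because the leaderboard scores submissions by RMSE on the log of the sale price, it helps to compute a comparable number locally before submitting. The sketch below is one way to do that, under a few assumptions of my own: it holds out 20% of the training data, refits a clone of the tuned Ridge model on the rest, and reports the log-RMSE. The 80/20 split, the choice of Ridge, and names like X_tr and X_val are illustrative, not part of the original pipeline.

# Local validation with the competition metric: RMSE on log1p(SalePrice)
from sklearn.base import clone
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
ridge_val = clone(grid_ridge.best_estimator_)   # reuse the tuned hyperparameters on a fresh estimator
ridge_val.fit(X_tr, y_tr)
val_pred = ridge_val.predict(X_val)
log_rmse = np.sqrt(mean_squared_error(np.log1p(y_val), np.log1p(val_pred)))
print(f"Validation log-RMSE (Ridge): {log_rmse:.4f}")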